Science.gov

Sample records for advanced numerical algorithms

  1. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    DOE PAGES

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  2. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    SciTech Connect

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.
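The central trade-off described above — strict mass conservation with only an implicit interface — can be illustrated with a deliberately minimal 1-D sketch. The helper below is hypothetical and is not the paper's multimaterial power-diagram reconstruction:

```python
def interface_position(fractions, dx):
    """Recover a single sharp interface from cell volume fractions.

    fractions[i] is the fluid fraction of cell i (1 = full, 0 = empty),
    assumed monotone: full cells first, one partial cell, then empty cells.
    """
    x = 0.0
    for f in fractions:
        if f >= 1.0:
            x += dx               # fully filled cell
        else:
            return x + f * dx     # interface lies inside the partial cell
    return x

# Total fluid volume is just sum(fractions) * dx, which is why the method
# conserves mass exactly even though the interface is only implicit.
fracs = [1.0, 1.0, 0.3, 0.0]
pos = interface_position(fracs, dx=0.5)
mass = sum(fracs) * 0.5
```

In two and three dimensions the reconstruction step is far harder, which is exactly why the power-diagram and height-function methods reviewed in the paper matter.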

  3. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    SciTech Connect

    E.A. Belli; G.W. Hammett

    2004-12-17

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.
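To make the splitting concrete, here is a standard Peaceman-Rachford ADI step for the 2-D heat equation — a generic illustration of the ADI idea (implicit in one direction at a time, each half-step reduced to tridiagonal solves), not the gyrokinetic field/distribution splitting analyzed in the report:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on an n-by-n
    grid, r = dt/(2*dx**2), homogeneous Dirichlet boundaries (u = 0 outside)."""
    n = len(u)
    # first half-step: implicit in x, explicit in y
    half = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = []
        for i in range(n):
            up = u[i][j + 1] if j + 1 < n else 0.0
            dn = u[i][j - 1] if j - 1 >= 0 else 0.0
            d.append(u[i][j] + r * (up - 2 * u[i][j] + dn))
        col = thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, d)
        for i in range(n):
            half[i][j] = col[i]
    # second half-step: implicit in y, explicit in x
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d = []
        for j in range(n):
            rt = half[i + 1][j] if i + 1 < n else 0.0
            lf = half[i - 1][j] if i - 1 >= 0 else 0.0
            d.append(half[i][j] + r * (rt - 2 * half[i][j] + lf))
        new[i] = thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, d)
    return new
```

For pure diffusion this splitting is unconditionally stable; the report's point is that the analogous splitting applied to the gyrokinetic system is not.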

  4. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  5. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that the accumulated error of a numerical algorithm ever exceeds a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
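The flavor of such bounds can be reproduced with elementary arithmetic. The sketch below is a simplified Chebyshev/Markov-style estimate under an assumed independent uniform rounding-error model — not the PVS-verified theory of the report:

```python
import math

def certified_threshold(n_ops, ulp, p_fail):
    """Chebyshev-style bound P(|S_n| >= t) <= Var(S_n) / t**2, with each
    rounding error modeled as independent uniform on [-ulp/2, ulp/2]."""
    var = n_ops * ulp ** 2 / 12.0      # variance of the accumulated error
    return math.sqrt(var / p_fail)     # smallest threshold certified at p_fail

# double precision, a billion accumulations, failure odds ~ one in a billion
t = certified_threshold(1e9, 2.0 ** -52, 1e-9)
bits_lost = math.log2(t / 2.0 ** -52)  # trailing bits no longer trustworthy
```

Worst-case analysis would instead charge the full n_ops * ulp / 2; the probabilistic threshold is far smaller, certifying that the leading bits survive except with probability p_fail.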

  6. Advanced incomplete factorization algorithms for Stieltjes matrices

    SciTech Connect

    Il`in, V.P.

    1996-12-31

    The modern numerical methods for solving the linear algebraic systems Au = f with high-order sparse matrices A, which arise in grid approximations of multidimensional boundary value problems, are based mainly on accelerated iterative processes with easily invertible preconditioning matrices presented in the form of approximate (incomplete) factorization of the original matrix A. We consider some recent algorithmic approaches, theoretical foundations, experimental data and open questions for incomplete factorization of Stieltjes matrices, which are "the best" ones in the sense that the most advanced results are available for them. Special attention is given to solving elliptic differential equations with strongly variable coefficients, singularly perturbed diffusion-convection equations and parabolic equations.
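As a concrete instance of the preconditioners discussed, here is textbook ILU(0) — incomplete factorization with zero fill-in — in plain Python. This is an illustrative sketch; for a Stieltjes matrix the pivots stay positive, so the factorization is well defined:

```python
def ilu0(A):
    """ILU(0): Gaussian elimination that keeps an entry only where A
    itself is nonzero, so the factors cost no more storage than A."""
    n = len(A)
    LU = [row[:] for row in A]
    nz = [[A[i][j] != 0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            if not nz[i][k]:
                continue
            LU[i][k] /= LU[k][k]          # multiplier, stored in place of L
            for j in range(k + 1, n):
                if nz[i][j]:              # discard fill outside the pattern
                    LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

# 1-D Laplacian (a Stieltjes matrix); it is tridiagonal, so ILU(0) is exact
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
LU = ilu0(A)
```

In an accelerated iteration (e.g. preconditioned conjugate gradients) the factors are applied through forward/backward triangular solves, never formed into an explicit inverse.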

  7. Parallel processing of numerical transport algorithms

    SciTech Connect

    Wienke, B.R.; Hiromoto, R.E.

    1984-01-01

    The multigroup, discrete ordinates representation for the linear transport equation enjoys widespread computational use and popularity. Serial solution schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, we investigate the parallel structure and extension of a number of standard S_n approaches. Concurrent inner sweeps, coupled acceleration techniques, synchronized inner-outer loops, and chaotic iteration are described, and results of computations are contrasted. The multigroup representation and serial iteration methods are also detailed. The basic iterative S_n method lends itself to parallel tasking, portably affording an effective medium for performing transport calculations on future architectures. This analysis represents a first attempt to extend serial S_n algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. We find basic inner-outer and chaotic iteration strategies both easily support comparably high degrees of parallelism. Both accommodate parallel rebalance and diffusion acceleration and appear as robust and viable parallel techniques for S_n production work.
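The "inner sweep" at the heart of these algorithms is easy to show in one dimension. Below is a hypothetical diamond-difference sweep and S_2 source iteration — the serial building blocks; the paper's contribution is running such sweeps and iterations concurrently on the HEP:

```python
def sweep(src, sigma_t, dx, mu, inflow=0.0):
    """March the diamond-difference discretization of
    mu * dpsi/dx + sigma_t * psi = src in the direction of flight mu."""
    n = len(src)
    psi = [0.0] * n
    cells = range(n) if mu > 0 else range(n - 1, -1, -1)
    edge = inflow
    for i in cells:
        avg = (src[i] * dx + 2 * abs(mu) * edge) / (2 * abs(mu) + sigma_t * dx)
        psi[i] = avg
        edge = 2 * avg - edge            # outgoing edge flux
    return psi

def source_iteration(sigma_t, sigma_s, q, dx, n, iters=100):
    """S_2 source iteration: sweep both ordinates, update the scattering
    source from the scalar flux, and repeat (the serial inner/outer loop)."""
    mu = 1 / 3 ** 0.5
    phi = [0.0] * n
    for _ in range(iters):
        src = [0.5 * (q + sigma_s * p) for p in phi]   # isotropic source
        phi = [r + l for r, l in zip(sweep(src, sigma_t, dx, +mu),
                                     sweep(src, sigma_t, dx, -mu))]
    return phi
```

The two directional sweeps are independent, which is the simplest parallelism the paper exploits; rebalance and diffusion acceleration speed up the outer loop.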

  8. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
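A minimal sketch of such a sampling-based stopping rule — a generic normal-approximation confidence interval on a sampled bound, not the specific theory developed in the thesis:

```python
import math
import random

def sample_until_confident(sampler, tol, z=2.576, batch=100, max_n=10 ** 6):
    """Draw Monte Carlo samples of an objective-value bound until the
    normal-approximation confidence interval half-width falls below tol
    (z = 2.576 corresponds to roughly 99% coverage)."""
    vals = []
    while len(vals) < max_n:
        vals.extend(sampler() for _ in range(batch))
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)
        half = z * math.sqrt(var / n)
        if half < tol:                   # stopping rule satisfied
            return mean, half, n
    return mean, half, len(vals)

# hypothetical stand-in for a sampled optimality-gap bound
random.seed(42)
est, half, n = sample_until_confident(lambda: random.uniform(0.0, 1.0), tol=0.01)
```

The thesis's contribution is precisely the conditions under which such interval statements are asymptotically valid when the samples come from inside a decomposition algorithm rather than from an i.i.d. oracle.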

  9. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, least dissipative computation algorithms as well as high-quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.

  10. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  11. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  12. Advanced algorithm for orbit computation

    NASA Technical Reports Server (NTRS)

    Szebehely, V.

    1983-01-01

    Computational and analytical techniques which simplify the solution of complex problems in orbit mechanics, astrodynamics and celestial mechanics were developed. The major tool of the simplification is the substitution of transformations in place of numerical or analytical integrations. In this way the rather complicated equations of orbit mechanics might sometimes be reduced to linear equations representing harmonic oscillators with constant coefficients.

  13. Analytical and numerical methods; advanced computer concepts

    SciTech Connect

    Lax, P D

    1991-03-01

    This past year, two projects have been completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations has been parallelized and numerical studies using two different shared memory machines have been done. My current effort is directed toward the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy Leveque at the University of Washington in Seattle.

  14. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1982-01-01

    Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second-order lambda matrices.

  15. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1x_1 + a_2x_2 + ... + a_nx_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
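What an integer relation is can be made concrete with a brute-force search. This is illustration only — the search is exponential in n, whereas PSLQ finds relations in polynomially many iterations and at far lower precision:

```python
from itertools import product

def naive_integer_relation(x, max_coeff=5, tol=1e-9):
    """Exhaustively search for integers a, not all zero, with |a . x| < tol.
    Exponential in len(x); it only demonstrates the object PSLQ computes."""
    rng = range(-max_coeff, max_coeff + 1)
    for a in product(rng, repeat=len(x)):
        if any(a) and abs(sum(ai * xi for ai, xi in zip(a, x))) < tol:
            return list(a)
    return None

# the golden ratio satisfies phi**2 = phi + 1, i.e. 1 + phi - phi**2 = 0
phi = (1 + 5 ** 0.5) / 2
rel = naive_integer_relation([1.0, phi, phi * phi])
```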

  16. Carbon export algorithm advancements in models

    NASA Astrophysics Data System (ADS)

    Yumruktepe, Veli Çağlar; Salihoğlu, Barış

    2015-04-01

    The rate at which anthropogenic CO2 is absorbed by the oceans remains a critical question under investigation by climate researchers. Construction of a complete carbon budget requires better understanding of air-sea exchanges and the processes controlling the vertical and horizontal transport of carbon in the ocean, particularly the biological carbon pump. Improved parameterization of carbon sequestration within ecosystem models is vital to better understand and predict changes in the global carbon cycle. Due to the complexity of processes controlling particle aggregation, sinking and decomposition, existing ecosystem models necessarily parameterize carbon sequestration using simple algorithms. Development of improved algorithms describing carbon export and sequestration, suitable for inclusion in numerical models, is ongoing work. Existing unique algorithms used in state-of-the-art ecosystem models and new experimental results obtained from mesocosm experiments and open-ocean observations have been inserted into a common 1D pelagic ecosystem model for testing purposes. The model was implemented at the time-series stations in the North Atlantic (BATS, PAP and ESTOC) and evaluated against datasets of carbon export. The algorithms target plankton functional types, grazing and vertical movement of zooplankton, and the remineralization, aggregation and ballasting dynamics of organic matter. Ultimately it is intended to feed the improved algorithms to the 3D modelling community, for inclusion in coupled numerical models.

  17. Numerical comparison of Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1977-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
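The factorization the U-D filter propagates can be written down compactly. The sketch below computes P = U D U^T for a symmetric positive-definite matrix using the standard backward recursion; the filter's measurement and time updates then operate on U and D directly, which is what yields the numerical robustness described above:

```python
def udut(P):
    """Factor symmetric positive-definite P as U * diag(D) * U^T with U
    unit upper-triangular; these are the factors the U-D filter carries
    in place of the covariance matrix itself."""
    n = len(P)
    P = [row[:] for row in P]            # work on a copy
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n - 1, -1, -1):       # columns processed right to left
        D[j] = P[j][j]
        for i in range(j):
            U[i][j] = P[i][j] / D[j]
        for i in range(j):               # deflate the remaining leading block
            for k in range(i + 1):
                P[k][i] -= U[k][j] * D[j] * U[i][j]
    return U, D
```

Because the factors are updated rather than P, the represented covariance stays symmetric and nonnegative definite by construction, even in single precision.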

  18. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed and the identification of linear systems by the quadrature method is investigated.

  19. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

    The development of the brush seal is considered to be among the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, in the process limiting the 'unwanted' leakage flows between stages or various engine cavities. This type of sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and brush compliance with the rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, thus contributing to the higher efficiency of the engine. Use of the brush seal instead of the labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. An increase in brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  20. Brush seal numerical simulation: Concepts and advances

    NASA Astrophysics Data System (ADS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-07-01

    The development of the brush seal is considered to be among the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, in the process limiting the 'unwanted' leakage flows between stages or various engine cavities. This type of sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and brush compliance with the rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, thus contributing to the higher efficiency of the engine. Use of the brush seal instead of the labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. An increase in brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  1. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms, alongside other search algorithms in the area of optimization such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of the ABC local search process and of its bee-movement (solution improvement) equation still has some weaknesses. The ABC is good at avoiding trapping at the local optimum, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used in order to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
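The two movement rules being combined are easy to contrast. Below is the standard ABC neighbour move next to a generic PSO particle move — illustrative sketches of the two ingredients; HPABC's exact hybrid update is the one defined in the paper:

```python
import random

def abc_move(x, partner, lo, hi):
    """Standard ABC neighbour search: perturb a single random dimension
    toward (or away from) a randomly chosen partner solution."""
    v = x[:]
    j = random.randrange(len(x))
    phi = random.uniform(-1.0, 1.0)
    v[j] = min(hi, max(lo, x[j] + phi * (x[j] - partner[j])))
    return v

def pso_move(x, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO movement: every dimension moves at once under inertia
    plus random pulls toward the personal and global best solutions."""
    new_vel = [w * v + c1 * random.random() * (p - xi)
               + c2 * random.random() * (g - xi)
               for v, xi, p, g in zip(vel, x, pbest, gbest)]
    return [xi + v for xi, v in zip(x, new_vel)], new_vel
```

The contrast explains the hybrid's motivation: the ABC move touches one coordinate of a random neighbour, while the PSO move steers the whole vector using the best solutions found so far.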

  2. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
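A sketch of the two ingredients: a Mantegna-style Lévy-flight step, and a hypothetical geometrically decaying step-size schedule standing in for the paper's adaptive adjustment (the actual schedule is the paper's own):

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm: draw a heavy-tailed Levy-stable step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)

def step_size(t, t_max, a0=1.0, a_min=0.01):
    """Hypothetical schedule shrinking geometrically from a0 to a_min as
    iterations advance, replacing the fixed step of the standard algorithm."""
    return a0 * (a_min / a0) ** (t / t_max)

def cuckoo_move(x, best, t, t_max):
    """One Levy-flight move scaled by the iteration-dependent step size."""
    a = step_size(t, t_max)
    return [xi + a * levy_step() * (xi - bi) for xi, bi in zip(x, best)]
```

Large early steps favor exploration; the shrinking step later concentrates the search near promising nests, which is the convergence behavior the paper reports.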

  3. Advances in fracture algorithm development in GRIM

    NASA Astrophysics Data System (ADS)

    Cullis, I.; Church, P.; Greenwood, P.; Huntington-Thresher, W.; Reynolds, M.

    2003-09-01

    The numerical treatment of fracture processes has long been a major challenge in any hydrocode, but has been particularly acute in Eulerian hydrocodes. This is due to the difficulties in establishing a consistent process for treating failure and the post-failure treatment, which is complicated by advection, mixed cell and interface issues, particularly post failure. This alone increases the complexity of incorporating and validating a failure model compared to a Lagrange hydrocode, where the numerical treatment is much simpler. This paper outlines recent significant progress in the incorporation of fracture models in GRIM and the advection of damage across cell boundaries within the mesh. This has allowed a much more robust treatment of fracture in an Eulerian frame of reference and has greatly expanded the scope of tractable dynamic fracture scenarios. The progress has been possible due to a careful integration of the fracture algorithm within the numerical integration scheme to maintain a consistent representation of the physics. The paper describes various applications, which demonstrate the robustness and efficiency of the scheme and highlight some of the future challenges.

  4. Advanced numerics for multi-dimensional fluid flow calculations

    SciTech Connect

    Vanka, S.P.

    1984-04-01

    In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  5. Advanced numerics for multi-dimensional fluid flow calculations

    NASA Technical Reports Server (NTRS)

    Vanka, S. P.

    1984-01-01

    In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  6. Multiresolution representation and numerical algorithms: A brief review

    NASA Technical Reports Server (NTRS)

    Harten, Amiram

    1994-01-01

    In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
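    The scale-decomposition-and-truncation idea can be illustrated with a one-level Haar transform. This is a generic sketch of multiresolution data compression, not Harten's specific framework; the data and tolerance are invented for illustration.

```python
def haar_decompose(data):
    """One level of the Haar transform: pairwise averages plus detail coefficients."""
    averages = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
    details = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
    return averages, details

def haar_reconstruct(averages, details):
    """Invert haar_decompose exactly when no details were dropped."""
    data = []
    for s, d in zip(averages, details):
        data.extend([s + d, s - d])
    return data

def compress(data, tol):
    """Eliminate (zero out) scale coefficients that are sufficiently small."""
    averages, details = haar_decompose(data)
    kept = [d if abs(d) > tol else 0.0 for d in details]
    return averages, kept

data = [4.0, 4.1, 4.0, 3.9, 8.0, 8.2, 8.1, 8.0]
avg, det = compress(data, tol=0.15)
approx = haar_reconstruct(avg, det)   # smooth data reconstructs within tol
```

Dropping a detail coefficient of magnitude d perturbs each of its two samples by exactly d, so the threshold directly bounds the reconstruction error.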

  7. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  8. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and its two extensions for improving its performance are presented. The BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces different approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing honey-bee-inspired algorithms on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated.
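    The abstract describes the flavor of the method rather than its exact update rules. The toy sketch below is only loosely in that spirit: a population of "bees" perturbs positions under a time-varying weight that shifts from exploration to exploitation. The update rule, parameters, and sphere test function are all invented for illustration and are not the authors' BSO.

```python
import random

def sphere(x):
    """Toy benchmark: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def bee_swarm_minimize(f, dim=5, bees=20, iters=200, seed=1):
    rng = random.Random(seed)
    swarm = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(bees)]
    best = min(swarm, key=f)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters        # time-varying weight: explore, then exploit
        for i, bee in enumerate(swarm):
            trial = [x + w * rng.uniform(-1, 1) + (1 - w) * (b - x)
                     for x, b in zip(bee, best)]
            if f(trial) < f(bee):        # greedy acceptance of better food sources
                swarm[i] = trial
        best = min(swarm + [best], key=f)
    return best

best = bee_swarm_minimize(sphere)
```

Early iterations favor the random term (exploration); late iterations pull bees toward the best-known source (exploitation), which is the role the TVW extension plays in the paper.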

  9. Mathematical model and numerical algorithm for aerodynamical flow

    NASA Astrophysics Data System (ADS)

    Shaydurov, V.; Shchepanovskaya, G.; Yakubovich, M.

    2016-10-01

    In the paper, a mathematical model and a numerical algorithm are proposed for modeling an air flow. The proposed model is based on the time-dependent Navier-Stokes equations for viscous heat-conducting gas. The energy equation and the state equations are modified to account for two kinds of 'internal' energy. The first one is the usual translational and rotational energy of molecules, which defines the thermodynamical temperature and the pressure. The second one is the subgrid energy of small turbulent eddies. A numerical algorithm is proposed for solving the formulated initial-boundary value problem as a combination of the semi-Lagrangian approximation for Lagrange transport derivatives and the conforming finite element method for the other terms. A numerical example illustrates these approaches.

  10. Numerical algorithms for the atomistic dopant profiling of semiconductor materials

    NASA Astrophysics Data System (ADS)

    Aghaei Anvigh, Samira

    In this dissertation, we investigate the possibility to use scanning microscopy such as scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM) for the "atomistic" dopant profiling of semiconductor materials. For this purpose, we first analyze the discrete effects of random dopant fluctuations (RDF) on SCM and SSRM measurements with nanoscale probes and show that RDF significantly affects the differential capacitance and spreading resistance of the SCM and SSRM measurements if the dimension of the probe is below 50 nm. Then, we develop a mathematical algorithm to compute the spatial coordinates of the ionized impurities in the depletion region using a set of scanning microscopy measurements. The proposed numerical algorithm is then applied to extract the (x, y, z) coordinates of ionized impurities in the depletion region in the case of a few semiconductor materials with different doping configurations. The numerical algorithm developed to solve the above inverse problem is based on the evaluation of doping sensitivity functions of the differential capacitance, which show how sensitive the differential capacitance is to doping variations at different locations. To develop the numerical algorithm we first express the doping sensitivity functions in terms of the Gâteaux derivative of the differential capacitance, use the Riesz representation theorem, and then apply a gradient optimization approach to compute the locations of the dopants. The algorithm is verified numerically using 2-D simulations, in which the C-V curves are measured at 3 different locations on the surface of the semiconductor. Although the cases studied in this dissertation are highly idealized and, in reality, the C-V measurements are subject to noise and other experimental errors, it is shown that if the differential capacitance is measured precisely, SCM measurements can be potentially used for the "atomistic" profiling of ionized impurities in doped semiconductors.

  11. Numerical approximation of a nonlinear delay-advance functional differential equation by a finite element method

    NASA Astrophysics Data System (ADS)

    Teodoro, M. F.

    2012-09-01

    We are particularly interested in the numerical solution of functional differential equations with symmetric delay and advance. In this work, we consider a nonlinear forward-backward equation, the FitzHugh-Nagumo equation. A scheme is presented which extends the algorithm introduced in [1]. A computational method using Newton's method, the finite element method and the method of steps is developed.

  12. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is given, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.

  13. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

    NASA Technical Reports Server (NTRS)

    Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

    2003-01-01

    Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
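    As a hedged illustration of one classic member of this family of methods (chosen here for brevity, and not necessarily one of the algorithms in the paper's collection), the Grünwald-Letnikov finite-difference approximation of a fractional derivative can be sketched as:

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Approximate the order-alpha derivative of f at t, assuming f(s) = 0 for s <= 0."""
    n = int(round(t / h))
    w = 1.0                     # w_k = (-1)^k * binom(alpha, k), built by recurrence
    total = w * f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        total += w * f(t - k * h)
    return total / h**alpha

# Known closed form: the half-derivative of f(t) = t is t**0.5 / Gamma(1.5).
approx = gl_fractional_derivative(lambda s: s, t=1.0, alpha=0.5)
exact = 1.0 / math.gamma(1.5)
```

The recurrence avoids computing binomial coefficients of non-integer order directly, and the scheme is first-order accurate in h for sufficiently smooth functions.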

  14. Advances in numerical and applied mathematics

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr. (Editor); Hussaini, M. Y. (Editor)

    1986-01-01

    This collection of papers covers some recent developments in numerical analysis and computational fluid dynamics. Some of these studies are of a fundamental nature. They address basic issues such as intermediate boundary conditions for approximate factorization schemes, existence and uniqueness of steady states for time dependent problems, and pitfalls of implicit time stepping. The other studies deal with modern numerical methods such as total variation diminishing schemes, higher order variants of vortex and particle methods, spectral multidomain techniques, and front tracking techniques. There is also a paper on adaptive grids. The fluid dynamics papers treat the classical problems of incompressible flows in helically coiled pipes, vortex breakdown, and transonic flows.

  15. Advanced Numerical Model for Irradiated Concrete

    SciTech Connect

    Giorla, Alain B.

    2015-03-01

    In this report, we establish a numerical model for concrete exposed to irradiation to address these three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970, Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future works, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds of the concrete expansion under irradiation, and to check whether the scatter in the simulated results matches the one found in experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be as well characterized as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some

  16. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development [1,2,3] has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
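    A hedged sketch of the Apollo-heritage lateral deadband concept that the paper moves away from: command a bank reversal whenever the azimuth error exceeds a velocity-dependent deadband and the current bank direction is carrying the vehicle further off course. The deadband shape and all constants here are invented placeholders, not flight values.

```python
def deadband(velocity):
    """Hypothetical deadband (degrees) that shrinks as the vehicle slows."""
    return 0.5 + 1.5 * velocity / 7800.0

def update_bank_sign(azimuth_error, bank_sign, velocity):
    """Reverse the bank direction once the error exceeds the deadband
    and the current bank is increasing the error; otherwise hold."""
    if abs(azimuth_error) > deadband(velocity) and azimuth_error * bank_sign > 0:
        return -bank_sign
    return bank_sign
```

Because reversals fire whenever the error wanders outside the deadband, their total count depends on the trajectory; this is the non-determinism the fixed-count method in the paper removes.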

  17. Advancing-Front Algorithm For Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Efficient algorithm performs Delaunay triangulation to generate unstructured grids for use in computing two-dimensional flows. Once grid generated, one can optionally call upon additional subalgorithm that removes diagonal lines from nearly rectangular quadrilateral cells. Resulting approximately rectangular grid reduces cost per iteration of flow-computing algorithm.

  18. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).

  19. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1993-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.

  20. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1992-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.

  1. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  2. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.

  3. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Tumon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
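    The checksum scheme described above can be illustrated with a classic Huang-Abraham-style check for matrix multiplication. This is a minimal sketch of the general ABFT idea, not the library described in the record, and the simple relative tolerance stands in for its novel normalization methods.

```python
def matmul(A, B):
    """Plain dense matrix multiply on lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def abft_matmul(A, B, tol=1e-9):
    # Before the computation: append a checksum row (column sums) to A.
    checksum_row = [sum(col) for col in zip(*A)]
    C_ext = matmul(A + [checksum_row], B)
    C, check = C_ext[:-1], C_ext[-1]
    # After the computation: the extra row must equal the column sums of C.
    for j, col in enumerate(zip(*C)):
        if abs(sum(col) - check[j]) > tol * max(1.0, abs(check[j])):
            raise RuntimeError("fault detected in column %d" % j)
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = abft_matmul(A, B)   # checksum test passes for an uncorrupted result
```

A single-event upset that corrupts an entry of C breaks the linear relation between the checksum row and the column sums, so the comparison flags it without re-running the multiplication.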

  4. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  5. Recent advances in numerical analysis of structural eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

    A wide range of eigenvalue problems encountered in practical structural engineering analyses is defined, in which the structures are assumed to be discretized by any suitable technique such as the finite-element method. A review of the usual numerical procedures for the solution of such eigenvalue problems is presented and is followed by an extensive account of recently developed eigenproblem solution procedures. Particular emphasis is placed on the new numerical algorithms and associated computer programs based on the Sturm sequence method. Eigenvalue algorithms developed for efficient solution of natural frequency and buckling problems of structures are presented, as well as some eigenvalue procedures formulated in connection with the solution of quadratic matrix equations associated with free vibration analysis of structures. A new algorithm is described for natural frequency analysis of damped structural systems.
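    As a minimal illustration of the Sturm sequence approach emphasized above (a textbook sketch, not the paper's production algorithms), one can count the eigenvalues of a symmetric tridiagonal matrix below a shift and then bisect to isolate each one:

```python
def sturm_count(diag, off, x):
    """Number of eigenvalues of tridiag(off, diag, off) that are less than x,
    counted as the number of negative pivots of T - x*I."""
    count, d = 0, 1.0
    for i, a in enumerate(diag):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = (a - x) - b2 / d
        if d == 0.0:
            d = 1e-300          # guard against an exactly zero pivot
        if d < 0:
            count += 1
    return count

def kth_eigenvalue(diag, off, k, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisection for the k-th smallest eigenvalue (k = 1, 2, ...)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(diag, off, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 1D Laplacian tridiag(-1, 2, -1), n = 4: eigenvalues are 2 - 2*cos(j*pi/5).
lam1 = kth_eigenvalue([2.0] * 4, [-1.0] * 3, 1)
```

Because the count is monotone in the shift, bisection isolates any eigenvalue to a prescribed tolerance, which is what makes the Sturm sequence method attractive for selective natural frequency and buckling computations.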

  6. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read.
In Chapter 3, the random bond Ising

  7. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    NASA Astrophysics Data System (ADS)

    Nikolić, Zoran; Nguyen, Ha Thai; Frantz, Gene

    2007-12-01

    Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using a C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce advanced code optimization and an implementation based on DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
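    A toy illustration of the floating- to fixed-point conversion step the paper analyzes, assuming a Q15 format (1 sign bit, 15 fractional bits) common on 16-bit fixed point DSPs; the actual code generation tools described in the paper are not reproduced here.

```python
Q = 15
SCALE = 1 << Q     # 32768: one unit in Q15

def to_q15(x):
    """Round a float in [-1, 1) to the nearest Q15 value, saturating at the range limits."""
    v = int(round(x * SCALE))
    return max(-SCALE, min(SCALE - 1, v))

def q15_mul(a, b):
    """Fixed-point multiply: the raw product has 30 fractional bits, so shift back by Q."""
    return (a * b) >> Q

def from_q15(v):
    return v / SCALE

a, b = to_q15(0.5), to_q15(-0.25)
prod = from_q15(q15_mul(a, b))   # close to -0.125, up to quantization error
```

The shift after each multiply is exactly the kind of bookkeeping (scaling, saturation, precision loss) that makes manual float-to-fixed conversion error-prone and motivates automated code generation.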

  8. Preface to advances in numerical simulation of plasmas

    NASA Astrophysics Data System (ADS)

    Parker, Scott E.; Chacon, Luis

    2016-10-01

    This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.

  9. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association with numerical magnitude processing was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by also including relevant arithmetical subskills, i.e. arithmetic facts, needed for complex calculation that are also known to be dependent on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586

  10. Deinterlacing algorithm with an advanced non-local means filter

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Jeon, Gwanggil; Jeong, Jechang

    2012-04-01

    The authors introduce an efficient intra-field deinterlacing algorithm using an advanced non-local means filter. The non-local means (NLM) method has received considerable attention due to its high performance and simplicity. The NLM method adaptively obtains the missing pixel as the weighted average of the gray values of all pixels within the image, and then automatically eliminates unrelated neighborhoods from the weighted average. However, spatial location distance is another important issue for the deinterlacing method. Therefore we introduce an advanced NLM (ANLM) filter that considers both neighborhood similarity and patch distance. Moreover, whereas the search region of the conventional NLM is the whole image, the ANLM uses only a limited search region yet achieves good performance and high efficiency. When compared with existing deinterlacing algorithms, the proposed algorithm improves the peak signal-to-noise ratio while maintaining high efficiency.
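    A minimal sketch of the ANLM idea as the abstract describes it: weights that combine patch similarity with a spatial distance penalty, restricted to a limited search region around the missing pixel. The window sizes, the smoothing parameters h and sigma_d, and the reference-patch construction are assumptions for illustration, not the authors' implementation.

```python
import math

def anlm_pixel(field, r, c, search=3, patch=1, h=10.0, sigma_d=2.0):
    """Estimate the missing pixel at (r, c) from the known lines r-1 and r+1."""
    cols = len(field[0])
    def patch_at(i, j):
        return [field[i][max(0, min(cols - 1, j + k))]
                for k in range(-patch, patch + 1)]
    # Reference patch: vertical average of the patches directly above and below.
    ref = [(a + b) / 2.0 for a, b in zip(patch_at(r - 1, c), patch_at(r + 1, c))]
    num = den = 0.0
    for i in (r - 1, r + 1):                       # limited search region
        for j in range(max(patch, c - search), min(cols - patch, c + search + 1)):
            d2 = sum((a - b) ** 2 for a, b in zip(patch_at(i, j), ref))
            # Weight = patch similarity times spatial distance penalty.
            w = math.exp(-d2 / h ** 2) * math.exp(-((j - c) ** 2) / sigma_d ** 2)
            num += w * field[i][j]
            den += w
    return num / den

field = [[5.0] * 8 for _ in range(3)]   # constant field: the estimate must be 5.0
est = anlm_pixel(field, 1, 4)
```

The second exponential is the ANLM twist: without it, a distant but similar patch would count as much as an adjacent one, which is undesirable when interpolating a missing scan line.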

  11. Some recent advances in the numerical solution of differential equations

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele

    2016-06-01

    The purpose of the talk is the presentation of some recent advances in the numerical solution of differential equations, with special emphasis on reaction-diffusion problems, Hamiltonian problems and ordinary differential equations with discontinuous right-hand side. As a special case, in this short paper we focus on the solution of reaction-diffusion problems by means of special purpose numerical methods particularly adapted to the problem: indeed, following a problem oriented approach, we propose a modified method of lines based on the use of finite differences shaped on the qualitative behavior of the solutions. Constructive issues and a brief analysis are presented, together with some numerical experiments showing the effectiveness of the approach and a comparison with existing solvers.
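    As a generic illustration of a method-of-lines discretization for a reaction-diffusion problem of the kind discussed (a plain central-difference, explicit-Euler sketch of the Fisher-KPP equation, not the specially shaped finite differences the talk proposes):

```python
def solve_fisher(n=50, L=10.0, T=1.0, dt=1e-4):
    """u_t = u_xx + u(1 - u): central differences in space, explicit Euler in time."""
    dx = L / (n - 1)
    u = [1.0 if i < n // 4 else 0.0 for i in range(n)]   # step initial data
    for _ in range(int(T / dt)):
        new = u[:]
        for i in range(1, n - 1):
            diffusion = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
            reaction = u[i] * (1.0 - u[i])
            new[i] = u[i] + dt * (diffusion + reaction)
        new[0], new[-1] = new[1], new[-2]                # zero-flux boundaries
        u = new
    return u

u = solve_fisher()   # a traveling front: near 1 on the left, near 0 far to the right
```

Semi-discretizing in space first, then integrating the resulting ODE system in time, is exactly the method-of-lines structure that the talk's problem-adapted differences plug into.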

  12. Numerical algorithm for solving mathematical programming problems with a smooth surface as a constraint

    NASA Astrophysics Data System (ADS)

    Chernyaev, Yu. A.

    2016-03-01

    A numerical algorithm for minimizing a convex function on a smooth surface is proposed. The algorithm is based on reducing the original problem to a sequence of convex programming problems. Necessary extremum conditions are examined, and the convergence of the algorithm is analyzed.

  13. Topics in Randomized Algorithms for Numerical Linear Algebra

    NASA Astrophysics Data System (ADS)

    Holodnak, John T.

    In this dissertation, we present results for three topics in randomized algorithms. Each topic is related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product. In addition, we show probabilistic bounds on the two norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix. They are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
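    The outer-product sampling idea in the first part can be sketched as follows. Sampling probabilities proportional to the product of column and row norms are a common choice in this literature and are an assumption here, as is the degenerate demo case in which all weight falls on a single outer product so the estimate is exact (loosely analogous to the deterministic exactness conditions the abstract mentions).

```python
import random

def norm(v):
    return sum(x * x for x in v) ** 0.5

def sampled_matmul(A, B, c, seed=0):
    """Approximate the product A*B by sampling c of the n outer products A[:, k] B[k, :]."""
    rng = random.Random(seed)
    n = len(B)                          # inner dimension
    cols = list(zip(*A))                # columns of A
    weights = [norm(cols[k]) * norm(B[k]) for k in range(n)]
    total = sum(weights)
    p = [w / total for w in weights]    # sample large outer products more often
    m, q = len(A), len(B[0])
    C = [[0.0] * q for _ in range(m)]
    for _ in range(c):
        k = rng.choices(range(n), weights=p)[0]
        scale = 1.0 / (c * p[k])        # rescale so the estimator is unbiased
        for i in range(m):
            for j in range(q):
                C[i][j] += scale * cols[k][i] * B[k][j]
    return C

# Degenerate case: the second column of A is zero, so all weight is on k = 0
# and the sampled estimate equals the exact product.
A = [[1.0, 0.0], [2.0, 0.0]]
B = [[3.0, 4.0], [5.0, 6.0]]
C = sampled_matmul(A, B, c=4)
```

In general the estimate is random with a two-norm error that shrinks as c grows, which is the quantity the dissertation bounds probabilistically.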

  14. Analysis of the numerical effects of parallelism on a parallel genetic algorithm

    SciTech Connect

    Hart, W.E.; Belew, R.K.; Kohn, S.; Baden, S.

    1995-09-18

    This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments show that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Furthermore, we demonstrate that this improvement in performance is partly due to the fact that the numerical efficiency of the asynchronous genetic algorithm is better than that of the synchronous genetic algorithm. Our analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs, and we evaluate the claims made by several researchers that parallel GAs can have superlinear speedup.

  15. International Symposium on Computational Electronics—Physical Modeling, Mathematical Theory, and Numerical Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yiming

    2007-12-01

    This symposium is an open forum for discussion on the current trends and future directions of physical modeling, mathematical theory, and numerical algorithms in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers in academia and industry. All papers presented in this symposium have been carefully reviewed and selected. They cover semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation systems, and wireless communication. Welcome to this interdisciplinary symposium at the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!

  16. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
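    The U-D factorization at the heart of the Bierman-Thornton filter writes a covariance matrix P as U D U^T, with U unit upper triangular and D diagonal. A minimal factorization sketch (not the full measurement-update algorithm) looks like this:

```python
# UDU^T factorization of a symmetric positive-definite matrix, working
# from the last column backward: the trailing diagonal entry gives d[j],
# the column above it gives U[:,j], and the rank-one contribution
# U[:,j] d[j] U[:,j]^T is subtracted from the leading block.

def udu_factor(P):
    """Return (U, d) with P = U diag(d) U^T, U unit upper triangular."""
    n = len(P)
    P = [row[:] for row in P]          # work on a copy
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        d[j] = P[j][j]
        for i in range(j):
            U[i][j] = P[i][j] / d[j]
        for i in range(j):
            for k in range(i + 1):
                P[k][i] -= U[k][j] * d[j] * U[i][j]
    return U, d

P_demo = [[4.0, 2.0, 2.0], [2.0, 3.0, 1.0], [2.0, 1.0, 3.0]]
U, d = udu_factor(P_demo)
# U diag(d) U^T reconstructs P_demo; propagating U and d instead of P
# avoids the loss of symmetry and positive-definiteness that plagues the
# conventional covariance update in single precision.
```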

  17. Numerical modeling of spray combustion with an advanced VOF method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model, which can be employed to analyze general multiphase flow problems with free-surface mechanisms. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  18. Numerical analysis of the V-Y shaped advancement flap.

    PubMed

    Remache, D; Chambert, J; Pauchot, J; Jacquet, E

    2015-10-01

    The V-Y advancement flap is a common technique for the closure of skin defects. A triangular flap is incised adjacent to a skin defect of rectangular shape. As the flap is advanced to close the initial defect, two smaller defects in the shape of a parallelogram are formed with respect to a reflection symmetry. The height of the defects depends on the apex angle of the flap, and the closure efforts are related to the height of the defects. Andrades et al. (2005) performed a geometrical analysis of the V-Y flap technique in order to reach a compromise between the flap size and the defect width. However, the geometrical approach does not consider the mechanical properties of the skin. The present analysis, based on the finite element method, is proposed as a complement to the geometrical one. This analysis aims to highlight the major role of skin elasticity in a full analysis of the V-Y advancement flap. Furthermore, the study of this technique shows that closing at the flap apex seems mechanically the most interesting step. Thus different strategies of defect closure at the flap apex stemming from surgeons' know-how have been tested by numerical simulations. PMID:26342442

  19. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  20. Advanced Health Management Algorithms for Crew Exploration Applications

    NASA Technical Reports Server (NTRS)

    Davidson, Matt; Stephens, John; Jones, Judit

    2005-01-01

    Achieving the goals of the President's Vision for Exploration will require new and innovative ways to achieve reliability increases of key systems and sub-systems. The most prominent approach used in current systems is to maintain hardware redundancy. This imposes constraints on the system and utilizes weight that could be used for payload on extended lunar, Martian, or other deep space missions. A technique to improve reliability while reducing system weight and constraints is the use of an Advanced Health Management System (AHMS). This system contains diagnostic algorithms and decision logic to mitigate or minimize the impact of system anomalies on propulsion system performance throughout the powered flight regime. The purposes of the AHMS are to increase the probability of successfully placing the vehicle into the intended orbit (Earth, Lunar, or Martian escape trajectory), increase the probability of being able to safely execute an abort after it has developed anomalous performance during launch or ascent phases of the mission, and to minimize or mitigate anomalies during the cruise portion of the mission. This is accomplished by improving knowledge of the state of propulsion system operation at any given time through turbomachinery vibration protection logic and an overall system analysis algorithm that utilizes an underlying physical model and a wide array of engine system operational parameters to detect and mitigate predefined engine anomalies. These algorithms are generic enough to be utilized on any propulsion system yet can be easily tailored to each application by changing input data and engine-specific parameters. The key to the advancement of such a system is the verification of the algorithms. These algorithms will be validated through the use of a database of nominal and anomalous performance from a large propulsion system where data exists for catastrophic and noncatastrophic propulsion system failures.

  1. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  2. Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1976-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  3. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  4. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are presented, together with examples of results obtained with the most recent algorithm developments.

  5. Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.

    ERIC Educational Resources Information Center

    Jacquot, Raymond G.; And Others

    1985-01-01

    Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
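    The Gaver-Stehfest method is compact enough to sketch; the following is a standard textbook formulation (the article's own examples are not reproduced here). Note that large N amplifies round-off, which is exactly the word-length limitation the article discusses:

```python
# Gaver-Stehfest numerical inversion of a Laplace transform F(s):
# f(t) ≈ (ln 2 / t) * sum_{k=1}^{N} V_k * F(k ln 2 / t), N even.
import math

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its transform F via the Stehfest sum."""
    a = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# F(s) = 1/(s + 1) has inverse f(t) = exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
# approx ≈ exp(-1) ≈ 0.3679
```

The method works well for smooth, non-oscillatory f(t); pushing N much beyond 14-16 in double precision degrades accuracy because the weights V_k grow large and alternate in sign.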

  6. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1987-01-01

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.

  7. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

    1988-01-01

    This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

  8. Characterization of an advanced performance reticle defect inspection algorithm

    NASA Astrophysics Data System (ADS)

    Wiley, James N.; Aquino, Christopher M.; Burnham, Douglas V.; Vacca, Anthony

    1997-07-01

    KLA-Tencor has developed a new reticle defect inspection algorithm, APA (advanced performance algorithm), which replaces the current 300 series algorithm (P183) and offers significant improvements in the detection of important defect types on advanced reticles, including OPC reticles. The improvements with APA can also allow inspections with larger pixel sizes compared with P183, resulting in faster inspection times. A suite of test masks was used for evaluating APA's performance on a 351 reticle inspection system at 488 nm using 0.5, 0.375 and 0.25 micrometer pixel sizes. The test suite included a VeriThoro 890EX test reticle and a SEMI-standard programmed defect test pattern scaled by 50%, 33%, 25%, and 20% (producing nominal primary feature sizes of 1.5, 1.0, 0.75 and 0.60 micrometer). APA's improved performance allowed the use of the 0.375 micrometer pixel size for the 1.5 and 1.0 micrometer linewidth patterns, resulting in faster inspection times (compared with the 0.25 micrometer pixel size for P183); it further allowed the successful inspection of the 0.75 and 0.60 micrometer linewidth patterns. A methodology was developed to analyze, summarize and compare the performance results of APA and P183. Finally, APA successfully inspected various actual product reticles with patterns of 0.75 micrometer and below, including an advanced MicroUnity OPC (optical proximity correction) reticle with 0.75 micrometer serif and 0.35 micrometer database neck dimensions.

  9. Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence

    NASA Astrophysics Data System (ADS)

    Belli, Emily Ann

    Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up implicit arrays, yet retains time-step advantages similar to a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extensions of core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale ExB shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of chi ~ kappa^(-1.5) or kappa^(-2) (depending on the triangularity) is observed, which is consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations are explaining a significant fraction, but not all, of the empirical elongation scaling. The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the

  10. A Numerical Algorithm for Finding Solution of Cross-Coupled Algebraic Riccati Equations

    NASA Astrophysics Data System (ADS)

    Mukaidani, Hiroaki; Yamamoto, Seiji; Yamamoto, Toru

    In this letter, a computational approach for solving cross-coupled algebraic Riccati equations (CAREs) is investigated. The main purpose of this letter is to propose a new algorithm that combines Newton's method with a gradient-based iterative (GI) algorithm for solving CAREs. In particular, it is noteworthy that both quadratic convergence under an appropriate initial condition and a reduction in dimensions for matrix computation are achieved. A numerical example is provided to demonstrate the efficiency of the proposed algorithm.

  11. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of Advanced Encryption Standard (AES) cryptographic algorithm using state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance evaluation is also done in terms of throughput and area. The design implemented on Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.

  12. Fourier analysis of numerical algorithms for the Maxwell equations

    NASA Technical Reports Server (NTRS)

    Liu, Yen

    1993-01-01

    The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, grid spacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce fewer errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
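    This kind of Fourier (dispersion) analysis can be illustrated on the simplest case, a second-order centered difference for the 1-D advection equation u_t + c u_x = 0; the paper treats the full Maxwell system on multi-dimensional grids, which this sketch does not attempt:

```python
# For a Fourier mode exp(ikx), the centered difference
# (u[j+1] - u[j-1]) / (2h) replaces the exact derivative factor ik by
# i sin(k h)/h, so the numerical phase speed is c* = c sin(k h)/(k h):
# well-resolved modes travel at nearly the right speed, coarse modes lag.
import math

def phase_speed_ratio(k_h):
    """c*/c for the centered scheme at scaled wavenumber k*h."""
    return math.sin(k_h) / k_h

r_fine = phase_speed_ratio(0.1)            # ≈ 0.9983, tiny dispersive error
r_coarse = phase_speed_ratio(math.pi / 2)  # = 2/pi ≈ 0.637, severe lag
```

The same mode-by-mode substitution, carried out for each candidate stencil and time integrator, is what produces the quantitative phase-speed and isotropy comparisons described in the abstract.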

  13. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequentional generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data are developed and applied. Numerical examples are provided; and simulation results are obtained and analyzed.
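    A standard first diagnostic for self-similarity is the rescaled-range (R/S) estimate of the Hurst exponent: self-similar traffic gives H > 0.5, while Poisson-like (i.i.d.) traffic gives H near 0.5. The sketch below applies R/S to i.i.d. noise; the advanced fractal generators developed in the paper are not reproduced:

```python
# Rescaled-range (R/S) Hurst-exponent estimate: compute R/S over blocks
# of several sizes and fit the slope of log(R/S) vs. log(block size).
import math, random

def rs_statistic(x):
    """R/S of one block: range of the mean-adjusted cumulative sum
    divided by the block's standard deviation."""
    n = len(x)
    m = sum(x) / n
    dev = cum = lo = hi = 0.0
    for v in x:
        dev += (v - m) ** 2
        cum += v - m
        lo, hi = min(lo, cum), max(hi, cum)
    return (hi - lo) / math.sqrt(dev / n)

def hurst_rs(x, sizes=(64, 128, 256, 512, 1024)):
    """Least-squares slope of log(R/S) against log(block size)."""
    pts = []
    for s in sizes:
        blocks = [x[i:i + s] for i in range(0, len(x) - s + 1, s)]
        rs = sum(rs_statistic(b) for b in blocks) / len(blocks)
        pts.append((math.log(s), math.log(rs)))
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

rng = random.Random(42)
iid = [rng.gauss(0.0, 1.0) for _ in range(4096)]
H = hurst_rs(iid)   # near 0.5 (R/S is biased slightly upward on short series)
```

Applied to real packet-count traces, the same estimator typically yields H in the 0.7-0.9 range, which is the self-similar behavior the paper's models are designed to reproduce.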

  14. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2001-12-14

    Recent progress in simulation methodologies and new, high-performance parallel architectures have made it possible to perform detailed simulations of multidimensional combustion phenomena using comprehensive kinetics mechanisms. However, as simulation complexity increases, it becomes increasingly difficult to extract detailed quantitative information about the flame from the numerical solution, particularly regarding the details of chemical processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of combustion phenomena. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint in which we follow the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one species to another with the subsequent transport given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion as a suitable random-walk process. Within this probabilistic framework, reactions can be viewed as a Markov process transforming molecule to molecule with given probabilities. In this paper, we discuss the numerical issues in more detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. We also illustrate how the method can be applied to studying the role of cyanochemistry on NOx production in a diffusion flame.
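    The stochastic-particle idea can be sketched in a few lines: deterministic advection, a Gaussian random walk for diffusion, and a Markov species-switching step for reaction. Species names and rates below are invented for illustration and are not taken from the paper:

```python
# One "atom" in the diagnostic: advected at speed u, diffusing with
# diffusivity D via a random walk of variance 2*D*dt per step, and
# switching its host species as a Markov jump with rate k_switch.
import math, random

def track_atom(u, D, k_switch, T, dt, rng):
    """Return the final position and host species of one atom."""
    x, species = 0.0, "fuel"
    for _ in range(int(T / dt)):
        x += u * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        if species == "fuel" and rng.random() < k_switch * dt:
            species = "product"        # irreversible reaction channel
    return x, species

rng = random.Random(1)
samples = [track_atom(u=1.0, D=0.5, k_switch=2.0, T=1.0, dt=1e-2, rng=rng)
           for _ in range(1000)]
mean_x = sum(x for x, _ in samples) / len(samples)
frac_product = sum(s == "product" for _, s in samples) / len(samples)
# mean_x ≈ u*T = 1.0; frac_product ≈ 1 - exp(-k_switch*T) ≈ 0.86
```

An ensemble of such trajectories, with the advection velocity and species-dependent diffusivities read from the Eulerian solution, is what lets the diagnostic follow an element like carbon or nitrogen through the reaction network.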

  15. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyanochemistry in NOx production for a steady diffusion flame.

  16. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

    NASA Astrophysics Data System (ADS)

    Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

    2008-07-01

    The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of the shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear rate threshold, and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium sized aneurysms.

  17. Thermal contact algorithms in SIERRA mechanics : mathematical background, numerical verification, and evaluation of performance.

    SciTech Connect

    Copps, Kevin D.; Carnes, Brian R.

    2008-04-01

    We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.

  18. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.

  19. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  20. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  1. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to that of existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions.
This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and
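For contrast with the specialized BLC-IRK and Gauss-Jackson schemes discussed above, the baseline explicit propagator they are measured against is easy to state. The sketch below integrates the normalized two-body problem with classical RK4; the units, step count, and function names are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

MU = 1.0  # gravitational parameter in normalized canonical units (assumption)

def two_body(t, y):
    """Two-body point-mass dynamics; state y = [rx, ry, vx, vy]."""
    r = y[:2]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([y[2:], a])

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(y0, t_end, n_steps):
    y, t = np.array(y0, dtype=float), 0.0
    h = t_end / n_steps
    for _ in range(n_steps):
        y = rk4_step(two_body, t, y, h)
        t += h
    return y

# circular orbit of radius 1: one revolution takes 2*pi normalized time units
y0 = [1.0, 0.0, 0.0, 1.0]
yf = propagate(y0, 2 * np.pi, 2000)
```

Checking that a circular orbit returns to its initial state after one period is a standard accuracy test; implicit collocation schemes such as BLC-IRK aim to reach comparable accuracy with far fewer force evaluations.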

  2. A stable and efficient numerical algorithm for unconfined aquifer analysis

    SciTech Connect

    Keating, Elizabeth; Zyvoloski, George

    2008-01-01

    The non-linearity of the equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well.

  3. Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms

    NASA Astrophysics Data System (ADS)

    Brunner, Christopher W.; Lu, Ping

    2012-09-01

    The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
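The predictor-corrector idea behind algorithms such as FNPEG can be illustrated with a toy one-dimensional model: numerically predict the terminal range for a trial control value, then correct the control with a secant iteration on the miss distance. The dynamics, gains, and bounds below are invented for illustration and bear no relation to the actual FNPEG formulation.

```python
def predicted_range(u, v0=1.0, dt=1e-3):
    """Toy 'predictor': integrate a drag-only model forward until the
    vehicle stops; control u in [0, 1] weakens the deceleration
    (a crude stand-in for bank-angle modulation)."""
    x, v = 0.0, v0
    while v > 1e-3:
        a = -(1.0 - 0.5 * u) * v   # drag-like deceleration, modulated by u
        x += v * dt
        v += a * dt
    return x

def guide(target, u0=0.0, u1=1.0, tol=1e-4, max_iter=30):
    """Toy 'corrector': secant iteration driving the predicted
    range-to-target miss distance to zero."""
    g0 = predicted_range(u0) - target
    g1 = predicted_range(u1) - target
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        u0, u1, g0 = u1, u1 - g1 * (u1 - u0) / (g1 - g0), g1
        g1 = predicted_range(u1) - target
    return u1

u_cmd = guide(1.5)
```

Because the prediction is a full numerical integration rather than a stored reference trajectory, the same loop absorbs dispersions in the model automatically, which is the adaptivity argument made in the abstract.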

  4. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
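The primal-dual active set strategy described above can be made concrete on the simplest model problem: a discretized obstacle problem for a string with the constraint u >= psi. The code below is a minimal sketch of the iteration (active set from the complementarity condition, then a constrained linear solve); the mesh, obstacle shape, and parameter c are illustrative assumptions, and no claim is made that this matches the paper's mortar/saddle-point setting.

```python
import numpy as np

def pdas_obstacle(psi, f, c=1.0, max_iter=50):
    """Primal-dual active set iteration for A u = f + lam, u >= psi,
    lam >= 0, lam * (u - psi) = 0, with A the 1D Dirichlet Laplacian."""
    n = len(psi)
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)
    lam = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for k in range(max_iter):
        new_active = lam + c * (psi - u) > 0
        if k > 0 and (new_active == active).all():
            break                        # active set settled: converged
        active = new_active
        M, rhs = A.copy(), f.copy()
        for i in np.where(active)[0]:    # enforce u_i = psi_i on the active set
            M[i, :] = 0.0
            M[i, i] = 1.0
            rhs[i] = psi[i]
        u = np.linalg.solve(M, rhs)
        lam = A @ u - f                  # recover multiplier (contact force)
        lam[~active] = 0.0
    return u, lam

# tent-shaped obstacle poking into an unloaded membrane
n = 99
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
psi = 0.5 - 2.0 * np.abs(x - 0.5)
u, lam = pdas_obstacle(psi, np.zeros(n))
```

The update of the active set is exactly the semi-smooth Newton step for the non-linear complementarity function, which is why the iteration typically settles in a handful of steps.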

  5. Advanced three-dimensional Eulerian hydrodynamic algorithm development

    SciTech Connect

    Rider, W.J.; Kothe, D.B.; Mosso, S.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project is to investigate, implement, and evaluate algorithms that have high potential for improving the robustness, fidelity and accuracy of three-dimensional Eulerian hydrodynamic simulations. Eulerian computations are necessary to simulate a number of important physical phenomena, ranging from the molding process for metal parts to nuclear weapons safety issues to astrophysical phenomena such as those associated with Type II supernovae. A number of algorithmic issues were explored in the course of this research, including interface/volume tracking, surface physics integration, high resolution integration techniques, multilevel iterative methods, multimaterial hydrodynamics and coupling radiation with hydrodynamics. This project combines core strengths of several Laboratory divisions. The project has high institutional benefit given the renewed emphasis on numerical simulations in Science-Based Stockpile Stewardship and the Accelerated Strategic Computing Initiative and LANL's tactical goals related to high performance computing and simulation.

  6. Numerical Modeling and Inverse Scattering in Nondestructive Testing: Recent Applications and Advances

    NASA Astrophysics Data System (ADS)

    Marklein, R.; Langenberg, K. J.; Mayer, K.; Shlivinski, A.; Miao, J.; Zimmer, A.; Müller, W.; Schmitz, V.; Kohl, C.; Mletzko, U.

    2005-04-01

    This paper presents recent advances and future challenges of the application of different numerical modeling tools and linear and nonlinear inversion algorithms in ultrasonics and electromagnetics applied in NDE. The inversion methods considered in the presented work vary from linear schemes, e.g. SAFT/InASAFT and Diffraction Tomography/FT-SAFT, to nonlinear schemes, e.g. the Contrast Source Inversion. Inversion results are presented and compared for modeled and measured ultrasonic and electromagnetic data to locate voids and cracks as well as to locate aluminum tendon ducts in concrete, which is a typical GPR problem. Finite Integration Technique (FIT) and Domain Integral Equation (DIE) solvers are used as modeling tools.

  7. Advanced illumination control algorithm for medical endoscopy applications

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.

    2015-05-01

    CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules to the world market for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small-form-factor endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over illumination conditions that change by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on the pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment under changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic surgical robotics or minimally invasive surgery.
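The ROI-based control loop described above amounts to measuring pixel statistics over a region of interest and feeding the error back into the LED drive. The proportional step below is a minimal sketch of that idea; the setpoint, gain, clipping bounds, and the linear scene model used in the closed-loop demo are all illustrative assumptions (the actual controller is implemented in VHDL and keys on saturation levels).

```python
import numpy as np

def led_control_step(roi, power, setpoint=128.0, gain=0.5,
                     p_min=0.05, p_max=1.0):
    """One proportional update of the normalized LED drive based on the
    mean pixel value of the region of interest (8-bit pixels assumed)."""
    err = setpoint - float(roi.mean())
    power = power + gain * err * power / setpoint
    return float(np.clip(power, p_min, p_max))

# closed-loop toy: scene brightness responds linearly to LED power
def scene(power, k=400.0):
    return np.clip(np.full((8, 8), k * power), 0.0, 255.0)

power = 1.0                      # start fully saturated
for _ in range(25):
    power = led_control_step(scene(power), power)
```

Starting from a saturated frame, the loop backs the drive off until the ROI mean sits at the setpoint, mimicking the fast recovery behavior the paper targets.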

  8. Numerical Algorithms for Acoustic Integrals - The Devil is in the Details

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1996-01-01

    The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
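Retarded-time formulations hinge on solving the implicit retarded-time equation tau = t - |x - y(tau)| / c for every source-observer pair, and this is precisely where supersonic source motion causes trouble (the equation can then have multiple roots). For subsonic motion the fixed-point iteration below converges, since it is a contraction with factor equal to the source Mach number; the source trajectory and sound speed are illustrative assumptions.

```python
import numpy as np

def retarded_time(x, t, source_pos, c=340.0, tol=1e-12, max_iter=200):
    """Solve tau = t - |x - y(tau)| / c by fixed-point iteration.
    Converges for subsonic source motion (|y'(tau)| < c)."""
    tau = t
    for _ in range(max_iter):
        r = np.linalg.norm(x - source_pos(tau))
        tau_new = t - r / c
        if abs(tau_new - tau) < tol:
            return tau_new
        tau = tau_new
    return tau

# uniformly moving point source at Mach ~0.29 (illustrative)
y = lambda tau: np.array([100.0 * tau, 0.0])
x_obs, t_obs = np.array([500.0, 0.0]), 2.0
tau = retarded_time(x_obs, t_obs, y)
```

Collapsing-sphere and emission-surface formulations avoid this root-finding per observer time but pay for it with more expensive surface integrations, which is the trade-off summarized above.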

  9. A numerical solution algorithm and its application to studies of pulsed light fields propagation

    NASA Astrophysics Data System (ADS)

    Banakh, V. A.; Gerasimova, L. O.; Smalikho, I. N.; Falits, A. V.

    2016-08-01

    A new method for studying the propagation of pulsed laser beams in a turbulent atmosphere is proposed. The numerical simulation algorithm is based on the solution of the parabolic wave equation for the complex spectral amplitude of the wave field, using the method of splitting into physical factors. Examples of the use of the algorithm for the propagation of pulsed Laguerre-Gaussian beams of femtosecond duration in a turbulent atmosphere are shown.

  10. Efficient algorithms for numerical simulation of the motion of earth satellites

    NASA Astrophysics Data System (ADS)

    Bordovitsyna, T. V.; Bykova, L. E.; Kardash, A. V.; Fedyaev, Yu. A.; Sharkovskii, N. A.

    1992-08-01

    We briefly present our results obtained during the development and an investigation of the efficacy of algorithms for numerical prediction of the motion of earth satellites (ESs) using computers of different power. High accuracy and efficiency in predicting ES motion are achieved by using higher-order numerical methods, transformations that regularize and stabilize the equations of motion, and a high-precision model of the forces acting on an ES. This approach enables us to construct efficient algorithms of the required accuracy, both for universal computers with a large RAM and for personal computers with very limited capacity.

  11. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
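Hockney's two-parameter characterization, which the first model generalizes, describes a vector operation (or a message transfer) by an asymptotic rate r_inf and a half-performance length n_1/2: the vector length at which half the asymptotic rate is achieved. A minimal sketch:

```python
def hockney_time(n, r_inf, n_half):
    """Time to process n elements: t(n) = (n + n_half) / r_inf."""
    return (n + n_half) / r_inf

def effective_rate(n, r_inf, n_half):
    """Achieved rate r(n) = n / t(n) = r_inf * n / (n + n_half)."""
    return n / hockney_time(n, r_inf, n_half)
```

By construction the achieved rate is exactly r_inf / 2 at n = n_1/2 and approaches r_inf for long vectors, which is why algorithms that move many short vectors or messages are penalized relative to those that move few long ones.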

  12. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  14. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (partial differential equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
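The Thomas algorithm referenced above is Gaussian elimination specialized to a tridiagonal system: a forward sweep that eliminates the sub-diagonal followed by backward substitution. Those two recurrences are exactly what serializes a naive parallel implementation. A minimal serial sketch (the array convention, with a[0] and c[-1] unused, is an assumption):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d (a[0] and c[-1] are unused)."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # backward substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each sweep carries a loop-carried dependence across the whole line, so a pipelined parallel version leaves processors idle unless, as proposed above, they are scheduled onto lines in another spatial direction or onto Runge-Kutta work in the meantime.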

  15. Advances in Satellite Microwave Precipitation Retrieval Algorithms Over Land

    NASA Astrophysics Data System (ADS)

    Wang, N. Y.; You, Y.; Ferraro, R. R.

    2015-12-01

    Precipitation plays a key role in the earth's climate system, particularly in its water and energy balance. Satellite microwave (MW) observations of precipitation provide a viable means to achieve global measurement of precipitation with sufficient sampling density and accuracy. However, obtaining accurate precipitation information over land from satellite MW observations is a challenging problem. The Goddard Profiling Algorithm (GPROF) for the Global Precipitation Measurement (GPM) mission is built around the Bayesian formulation (Evans et al., 1995; Kummerow et al., 1996). GPROF uses the likelihood function and the prior probability distribution function to calculate the expected value of the precipitation rate, given the observed brightness temperatures. It is particularly convenient to draw samples for the prior PDF from a predefined database of observations or models. The GPROF algorithm does not search all database entries but only the subset thought to correspond to the actual observation. The GPM GPROF V1 database focuses on stratification by surface emissivity class, land surface temperature, and total precipitable water. However, there is much uncertainty as to what is the optimal information needed to subset the database for different conditions. To this end, we conduct a database stratification study using National Mosaic and Multi-Sensor Quantitative Precipitation Estimation, Special Sensor Microwave Imager/Sounder (SSMIS) and Advanced Technology Microwave Sounder (ATMS) data, and reanalysis data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA). Our database study (You et al., 2015) shows that environmental factors such as surface elevation, relative humidity, storm vertical structure and height, and ice thickness can help in stratifying a single large database into smaller and more homogeneous subsets, in which the surface condition and precipitation vertical profiles are similar.
It is found that the probability of detection (POD) increases
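The Bayesian expected-value formulation at the heart of GPROF can be written in a few lines: the retrieved rain rate is the database average weighted by the likelihood of each database brightness-temperature vector given the observation. The Gaussian observation-error model and the toy two-channel database below are illustrative assumptions, and this sketch searches the whole database rather than a stratified subset.

```python
import numpy as np

def bayes_rain_rate(tb_obs, tb_db, rr_db, sigma=2.0):
    """E[R | Tb] = sum_i R_i w_i / sum_i w_i with Gaussian weights
    w_i = exp(-|Tb_obs - Tb_i|^2 / (2 sigma^2))."""
    d2 = np.sum((np.asarray(tb_db) - np.asarray(tb_obs)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(w * np.asarray(rr_db)) / np.sum(w))

# toy database: brightness temperatures (2 channels, K) -> rain rate (mm/h)
tb_db = [[200.0, 250.0], [220.0, 260.0], [240.0, 270.0]]
rr_db = [0.0, 5.0, 10.0]
rate = bayes_rain_rate([220.0, 260.0], tb_db, rr_db, sigma=0.5)
```

Stratifying the database by environmental factors, as described above, effectively restricts the sum to entries whose surface and profile conditions resemble the observation, sharpening the prior.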

  16. An adaptive numeric predictor-corrector guidance algorithm for atmospheric entry vehicles

    NASA Astrophysics Data System (ADS)

    Spratlin, Kenneth Milton

    1987-05-01

    An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

  17. Recent advances in two-phase flow numerics

    SciTech Connect

    Mahaffy, J.H.; Macian, R.

    1997-07-01

    The authors review three topics in the broad field of numerical methods that may be of interest to individuals modeling two-phase flow in nuclear power plants. The first topic is iterative solution of linear equations created during the solution of finite volume equations. The second is numerical tracking of macroscopic liquid interfaces. The final area surveyed is the use of higher spatial difference techniques.

  18. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
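For constant conditions, the modal solution on which PolyPole-1 builds can be written down directly: for an initially uniform gas concentration in a spherical grain of radius a with a perfectly absorbing boundary, the fractional release is the classical series f(t) = 1 - (6/pi^2) * sum_n n^-2 exp(-n^2 pi^2 D t / a^2). A minimal sketch (the truncation length is an assumption):

```python
import numpy as np

def fractional_release(D, a, t, n_terms=200):
    """Fraction of gas released from a sphere of radius a with initially
    uniform concentration and zero boundary concentration (modal series)."""
    if t <= 0.0:
        return 0.0
    n = np.arange(1, n_terms + 1)
    tau = D * t / a**2          # dimensionless diffusion time
    s = np.sum(np.exp(-(n * np.pi) ** 2 * tau) / n**2)
    return float(1.0 - (6.0 / np.pi**2) * s)
```

Time-varying conditions break this closed form, which is why PolyPole-1 augments the modal solution with polynomial corrective terms rather than using the series as-is.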

  19. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  20. Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies

    SciTech Connect

    David E. Shropshire

    2009-05-01

    The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade-studies, and requires a requisite reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market—domestic and internationally—and impacts on AFCI facility deployment, uranium resource modeling to inform the front-end fuel cycle costs, facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities, cost tradeoffs to meet nuclear non-proliferation requirements, and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and will provide a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from

  1. Dynamics analysis of electrodynamic satellite tethers. Equations of motion and numerical solution algorithms for the tether

    NASA Technical Reports Server (NTRS)

    Nacozy, P. E.

    1984-01-01

    The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit. The tether is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs. The resulting differential equations allow the introduction of approximations that can lead to analytical, approximate general solutions. The differential equations allow more dynamical insight of the motion.

  2. An advancing-front Delaunay-triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1993-01-01

    The following topics, which are associated with computational fluid dynamics, are discussed: unstructured mesh generation; the advancing front methodology; failures of the advancing front methodology; Delaunay triangulation; the Tanemura-Merriam algorithm; Yet Another Grid Generator (YAGG); and advancing front-Delaunay triangulation. The discussion is presented in viewgraph form.

  3. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in the proper order. We have shown that a genetic algorithm has good properties for this sorting scheme. In this paper, this optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The result shows the genetic algorithm is superior to other algorithms.

  4. Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature

    SciTech Connect

    Bramble, J.H. (Dept. of Mathematics); Goldstein, C.I.; Pasciak, J.E. (Applied Mathematics Dept.)

    1994-05-01

    The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity-free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing per grid. These results hold even on applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
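For readers unfamiliar with the V-cycle, the structure is: pre-smooth, restrict the residual, recurse on the coarse-grid correction equation, prolongate and add, post-smooth. The sketch below is a minimal 1D Poisson version with a weighted Jacobi smoother on nested uniform grids; it does not reproduce the quadrature-based forms analyzed in the paper, and the smoother, weighting, and grid sizes are illustrative assumptions.

```python
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    """Full weighting, fine grid (N+1 points) -> coarse grid (N/2+1 points)."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-3:2] + 2.0 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec):
    """Linear interpolation, coarse grid -> fine grid."""
    e = np.zeros(2 * (len(ec) - 1) + 1)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def vcycle(u, f, h, nu=3, w=2.0 / 3.0):
    """One V-cycle for -u'' = f with homogeneous Dirichlet data."""
    if len(u) == 3:                      # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    for _ in range(nu):                  # pre-smoothing (weighted Jacobi)
        u[1:-1] = (1 - w) * u[1:-1] + 0.5 * w * (u[:-2] + u[2:] + h * h * f[1:-1])
    rc = restrict(residual(u, f, h))
    u += prolong(vcycle(np.zeros_like(rc), rc, 2.0 * h, nu, w))
    for _ in range(nu):                  # post-smoothing
        u[1:-1] = (1 - w) * u[1:-1] + 0.5 * w * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

# solve -u'' = pi^2 sin(pi x) on (0, 1), exact solution u = sin(pi x)
N = 64
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(N + 1)
for _ in range(10):
    u = vcycle(u, f, h)
```

Uniform convergence, in the sense of the paper, means the per-cycle residual reduction factor observed here would stay bounded away from 1 no matter how many levels are added.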

  5. Thickness determination in textile material design: dynamic modeling and numerical algorithms

    NASA Astrophysics Data System (ADS)

    Xu, Dinghua; Ge, Meibao

    2012-03-01

    Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics in the human body-clothing-environment system, which directly determine the heat-moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD into a function minimization problem. By means of the finite-difference method, quasi-solution method and direct search method for one-dimensional minimization problems, we construct iterative algorithms of the approximated solution for the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms.
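The least-squares reformulation plus a one-dimensional direct search can be illustrated with a toy version of the thickness problem: a hypothetical forward model maps thickness to a measured surface temperature, and golden-section search minimizes the squared data misfit. The exponential forward model and all parameter values are invented for illustration; they are not the IPTTD model of the paper.

```python
import numpy as np

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    fc, fd = f(c), f(d)
    while d - c > tol:
        if fc < fd:                      # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - g * (b - a)
            fc = f(c)
        else:                            # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + g * (b - a)
            fd = f(d)
    return 0.5 * (c + d)

def model_temp(L, T_body=37.0, T_env=0.0, L0=1.0):
    """Hypothetical forward model: measured surface temperature vs. thickness L."""
    return T_env + (T_body - T_env) * np.exp(-L / L0)

T_obs = model_temp(0.8)                  # synthetic measurement, true thickness 0.8
L_hat = golden_section(lambda L: (model_temp(L) - T_obs) ** 2, 0.1, 3.0)
```

In the paper's setting the forward model is itself a finite-difference solve of the heat and moisture transfer system, but the outer least-squares minimization has exactly this shape.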

  6. Numerical advection algorithms and their role in atmospheric transport and chemistry models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    1987-01-01

    During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation are highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive definite results.
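The simplest member of the family reviewed here is the first-order upwind scheme, which is positive-definite and conservative but strongly diffusive; the more elaborate volume schemes in the review improve on its accuracy while trying to keep those two properties. A minimal periodic sketch (grid size and CFL number are illustrative assumptions):

```python
import numpy as np

def upwind_step(q, cfl):
    """One step of first-order upwind for q_t + u q_x = 0 with u > 0 on a
    periodic grid; cfl = u * dt / dx must lie in [0, 1] for stability."""
    assert 0.0 <= cfl <= 1.0
    return q - cfl * (q - np.roll(q, 1))

# advect a positive pulse once around a periodic domain at cfl = 0.5
n = 100
q0 = np.zeros(n)
q0[10:20] = 1.0
q = q0.copy()
for _ in range(2 * n):          # 2n steps at cfl = 0.5 -> one full revolution
    q = upwind_step(q, 0.5)
```

Each update is a convex combination of neighboring values, which is why mixing ratios stay positive and total mass is conserved exactly; the price is the smearing of the pulse that higher-order and volume-based schemes are designed to reduce.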

  7. Numerical advection algorithms and their role in atmospheric transport and chemistry models

    NASA Astrophysics Data System (ADS)

    Rood, Richard B.

    1987-02-01

    During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation is highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulty in assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate, positive-definite results.

  8. A universal framework for non-deteriorating time-domain numerical algorithms in Maxwell's electrodynamics

    NASA Astrophysics Data System (ADS)

    Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.

    2016-10-01

    We present the implementation of the lacunae method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing the Huygens' principle and lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition. It eliminates the long-time deterioration regardless of its origin and how it manifests itself. In retrospect, the lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed development of an innovative numerical methodology for high-fidelity, error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetic codes.

  9. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process occurring in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained, for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation taking advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
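
    The finite-system stochastic coalescence process itself (though not the paper's master-equation solver) can be sketched as a direct Gillespie-type simulation for the constant kernel; parameters here are illustrative:

```python
import random

def gillespie_coalescence(n0=50, K=1.0, t_end=1.0, seed=1):
    """One realization of constant-kernel stochastic coalescence in a
    finite system: start from n0 monodisperse unit-mass particles and
    merge uniformly random pairs with total rate K * n * (n - 1) / 2."""
    rng = random.Random(seed)
    masses = [1.0] * n0
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        rate = K * n * (n - 1) / 2.0       # total coalescence rate
        t += rng.expovariate(rate)         # waiting time to next event
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)     # uniform pair: constant kernel
        masses[i] += masses[j]             # merge j into i
        masses.pop(j)
    return masses

spectrum = gillespie_coalescence()         # particle masses at t_end
```

Total mass is conserved exactly, which is a useful sanity check on any coalescence solver.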

  10. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    To address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precision-losing transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
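
    The block-matrix idea can be illustrated with a small sketch (illustrative, not the paper's derivation): when the transition matrix has only one non-trivial block, the covariance prediction P' = F P Fᵀ + Q can be computed blockwise, skipping all operations against the identity/zero blocks:

```python
import numpy as np

# Sketch: F = [[A, 0], [0, I]] is block-structured, so only the rows and
# columns touched by A change in the covariance prediction. The blockwise
# result matches the full dense computation while doing far fewer flops.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # the only non-trivial block of F
F = np.eye(6)
F[:3, :3] = A
P = rng.standard_normal((6, 6))
P = P @ P.T                              # symmetric positive-definite covariance
Q = np.eye(6) * 0.01                     # process noise

# Full dense prediction (what a classical filter computes).
P_full = F @ P @ F.T + Q

# Blockwise prediction, exploiting structure and symmetry.
P_blk = P.copy()
P_blk[:3, :3] = A @ P[:3, :3] @ A.T      # top-left: A P11 A^T
P_blk[:3, 3:] = A @ P[:3, 3:]            # top-right: A P12
P_blk[3:, :3] = P_blk[:3, 3:].T          # bottom-left by symmetry
P_blk += Q
```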

  11. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    To address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precision-losing transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  12. Rayleigh Wave Numerical Dispersion in a 3D Finite-Difference Algorithm

    NASA Astrophysics Data System (ADS)

    Preston, L. A.; Aldridge, D. F.

    2010-12-01

    A Rayleigh wave propagates laterally without dispersion in the vicinity of the plane stress-free surface of a homogeneous and isotropic elastic halfspace. The phase speed is independent of frequency and depends only on the Poisson ratio of the medium. However, after temporal and spatial discretization, a Rayleigh wave simulated by a 3D staggered-grid finite-difference (FD) seismic wave propagation algorithm suffers from frequency- and direction-dependent numerical dispersion. The magnitude of this dispersion depends critically on FD algorithm implementation details. Nevertheless, proper gridding can control numerical dispersion to within an acceptable level, leading to accurate Rayleigh wave simulations. Many investigators have derived dispersion relations appropriate for body wave propagation by various FD algorithms. However, the situation for surface waves is less well-studied. We have devised a numerical search procedure to estimate Rayleigh phase speed and group speed curves for 3D O(2,2) and O(2,4) staggered-grid FD algorithms. In contrast with the continuous time-space situation (where phase speed is obtained by extracting the appropriate root of the Rayleigh cubic), we cannot develop a closed-form mathematical formula governing the phase speed. Rather, we numerically seek the particular phase speed that leads to a solution of the discrete wave propagation equations, while holding medium properties, frequency, horizontal propagation direction, and gridding intervals fixed. Group speed is then obtained by numerically differentiating the phase speed with respect to frequency. The problem is formulated for an explicit stress-free surface positioned at two different levels within the staggered spatial grid. Additionally, an interesting variant involving zero-valued medium properties above the surface is addressed. We refer to the latter as an implicit free surface. Our preliminary conclusion is that an explicit free surface, implemented with O(4) spatial FD
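
    The numerical search for phase speed is easiest to see in a simpler setting: for the 1D O(2,2) leapfrog scheme applied to the scalar wave equation (a body-wave analogue, not the Rayleigh-wave case studied here), the discrete dispersion relation is known in closed form and shows the same grid-dependent dispersion:

```python
import math

def phase_speed_ratio(ppw, courant=0.5):
    """Numerical/exact phase-speed ratio for the O(2,2) leapfrog scheme
    applied to the 1D scalar wave equation, from the discrete dispersion
    relation sin(w*dt/2) = C*sin(k*dx/2), where C = c*dt/dx is the
    Courant number and ppw is grid points per wavelength."""
    k_dx = 2 * math.pi / ppw
    w_dt = 2 * math.asin(courant * math.sin(k_dx / 2))
    return (w_dt / courant) / k_dx      # (w/k) / c

coarse = phase_speed_ratio(5)   # 5 points per wavelength: noticeably slow
fine = phase_speed_ratio(50)    # 50 ppw: numerical dispersion nearly gone
```

Proper gridding (more points per wavelength) drives the ratio toward 1, which is the sense in which gridding controls numerical dispersion.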

  13. Basic and advanced numerical performances relate to mathematical expertise but are fully mediated by visuospatial skills.

    PubMed

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-09-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate compared with nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing.
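
    The mediation logic can be illustrated on synthetic data (not the study's): if group membership affects the outcome only through the mediator, the direct effect vanishes once the mediator enters the regression. A minimal OLS sketch, with made-up variable names and effect sizes:

```python
import numpy as np

# Synthetic full-mediation example: group -> visuospatial skill ->
# number-line accuracy, with no direct path from group to accuracy.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n).astype(float)       # 0/1 group membership
visuo = 1.0 * group + rng.normal(0.0, 1.0, n)     # mediator
accuracy = 0.8 * visuo + rng.normal(0.0, 1.0, n)  # outcome, mediated only

def coef(y, *predictors):
    """First OLS coefficient of y regressed on the predictors (plus intercept)."""
    X = np.column_stack((np.ones(len(y)),) + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = coef(accuracy, group)          # total effect of group (~0.8 here)
direct = coef(accuracy, group, visuo)  # direct effect, mediator controlled (near 0)
```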

  14. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    PubMed Central

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate compared with nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930

  15. Basic and advanced numerical performances relate to mathematical expertise but are fully mediated by visuospatial skills.

    PubMed

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-09-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate compared with nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930

  16. Simulation of ammonium and chromium transport in porous media using coupling scheme of a numerical algorithm and a stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Schüttrumpf, Holger; Köngeter, Jürgen; Becker, Torsten; Palani, Sundarambal

    2009-01-01

    The migration of the chromium and ammonium species in groundwater, and their effective remediation, depend on the various hydro-geological characteristics of the system. Computational modeling of reactive transport problems is one of the preferred tools of field engineers in groundwater studies for making decisions on pollution abatement. Analytical models are less modular in nature, with low computational demand, and are difficult to modify when formulating different reactive systems. Numerical models provide more detailed information, at higher computational demand. Coupling linear partial differential equations (PDEs) for the transport step with a non-linear system of ordinary differential equations (ODEs) for the reactive step is the usual mode of solving a kinetically controlled reactive transport equation. This assumption is not appropriate for a system with low concentrations of species such as chromium. Such reaction systems can be simulated using a stochastic algorithm. In this paper, a finite-difference scheme coupled with a stochastic algorithm for the simulation of the transport of ammonium and chromium in subsurface media is detailed.
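
    The coupling scheme can be sketched as operator splitting: each time step applies a deterministic finite-difference transport update, then a stochastic reaction update per cell. Here a binomial first-order decay stands in for the actual chromium/ammonium kinetics, and all coefficients are illustrative:

```python
import numpy as np

# Operator splitting on a 1D periodic grid: (1) upwind advection plus
# centered diffusion for the transport step, (2) stochastic first-order
# decay of whole molecules for the reaction step.
rng = np.random.default_rng(0)
nx, dx, dt = 50, 1.0, 0.1
u, D, k = 0.5, 0.2, 0.05          # velocity, diffusivity, decay rate

n = np.zeros(nx)
n[0] = 1000.0                     # molecules injected in the first cell

for _ in range(100):
    # (1) deterministic transport step (stable: u*dt/dx and D*dt/dx^2 small)
    adv = -(u * dt / dx) * (n - np.roll(n, 1))
    dif = (D * dt / dx**2) * (np.roll(n, 1) - 2 * n + np.roll(n, -1))
    n = n + adv + dif
    # (2) stochastic reaction step: each whole molecule survives the step
    # with probability exp(-k*dt); fractional remainders are carried along
    counts = n.astype(int)
    survivors = rng.binomial(counts, np.exp(-k * dt))
    n = survivors + (n - counts)

total = n.sum()                   # below the injected 1000 due to decay
```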

  17. A numerical technique for calculation of the noise of high-speed propellers with advanced blade geometry

    NASA Technical Reports Server (NTRS)

    Nystrom, P. A.; Farassat, F.

    1980-01-01

    A numerical technique and computer program were developed for the prediction of the noise of propellers with advanced geometry. The blade upper and lower surfaces are described by a curvilinear coordinate system, which was also used to divide the blade surfaces into panels. Two different acoustic formulations in the time domain were used to improve the speed and efficiency of the noise calculations: an acoustic formulation with the Doppler factor singularity for panels moving at subsonic speeds, and the collapsing-sphere formulation for panels moving at transonic or supersonic speeds. The second formulation involves a sphere which is centered at the observer position and whose radius decreases at the speed of sound; the acoustic equation consists of integrals over the curves of intersection of the sphere with the panels on the blade. Algorithms used in some parts of the computer program are discussed. Comparisons with measured acoustic data for two model high-speed propellers with advanced geometry are also presented.

  18. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, and the like, except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole–Karkkainnen finite-difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  19. Advanced Algorithms for Local Routing Strategy on Complex Networks

    PubMed Central

    Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong

    2016-01-01

    Despite the significant improvement in network performance provided by global routing strategies, their application is still limited to small-scale networks, due to the need to acquire global information about the network, which grows and changes rapidly with time. Local routing strategies need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and thoroughly investigated in this paper: a node duplication avoidance algorithm, a next-nearest-neighbor algorithm, and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategies enjoy great superiority over global routing strategies not only in terms of reduced computational expense, but also in terms of flexibility of implementation, especially for large-scale networks. PMID:27434502
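
    Two of the three proposed rules can be sketched on a toy network (illustrative, not the paper's exact algorithms): node-duplication avoidance plus a restrictive queue length, with a next-nearest-neighbor preference when choosing the hop:

```python
import random
from collections import deque

# Toy ring network with random shortcut links. Each node only knows its
# neighbors' queue lengths -- a purely local routing setting.
random.seed(1)
n, qmax = 20, 5                           # nodes, restrictive queue length
neighbors = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for _ in range(10):
    a, b = random.sample(range(n), 2)
    neighbors[a].add(b)
    neighbors[b].add(a)
queues = {i: deque() for i in range(n)}

def forward(node, packet):
    """Pick a local next hop: unvisited neighbors with non-full queues
    (duplication avoidance + restrictive queue length), preferring one
    adjacent to the destination (next-nearest-neighbor idea)."""
    visited, dest = packet
    options = [m for m in neighbors[node]
               if m not in visited and len(queues[m]) < qmax]
    if not options:
        return None                       # packet waits at this node
    near_dest = [m for m in options if m == dest or dest in neighbors[m]]
    nxt = random.choice(near_dest or options)
    visited.add(nxt)
    queues[nxt].append(packet)
    return nxt

hop = forward(0, ({0}, 10))               # route one packet from 0 toward 10
```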

  20. Advanced Algorithms for Local Routing Strategy on Complex Networks.

    PubMed

    Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K; Dong, Chuanfei; Miao, Lixin; Wang, Binghong

    2016-01-01

    Despite the significant improvement in network performance provided by global routing strategies, their application is still limited to small-scale networks, due to the need to acquire global information about the network, which grows and changes rapidly with time. Local routing strategies need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and thoroughly investigated in this paper: a node duplication avoidance algorithm, a next-nearest-neighbor algorithm, and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70-90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategies enjoy great superiority over global routing strategies not only in terms of reduced computational expense, but also in terms of flexibility of implementation, especially for large-scale networks. PMID:27434502

  1. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima and better adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424
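
    A simplified sketch of a QMBA-style update on a sphere function follows. The update rule below is a QPSO-flavored approximation of the ideas described (attractor shifting from the global best to the mean best position, plus a quantum-behaved jump), not the paper's exact equations:

```python
import math
import random

# Minimize a 5-D sphere function with a QMBA-style population update.
# Early search: attractor near the global best; late search: the mean
# best position gets more weight. The log(1/u) term is the
# "quantum-behaved" jump that helps escape local optima.
random.seed(0)
f = lambda x: sum(v * v for v in x)
dim, n_bats, iters = 5, 20, 200
bats = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
pbest = [b[:] for b in bats]                 # personal best positions
gbest = min(pbest, key=f)                    # global best position
f0 = f(gbest)

for t in range(iters):
    w = t / iters                            # grows through the search
    alpha = 1.0 - 0.5 * w                    # contraction-expansion coeff
    mbest = [sum(p[d] for p in pbest) / n_bats for d in range(dim)]
    for i, b in enumerate(bats):
        for d in range(dim):
            attractor = (1 - w) * gbest[d] + w * mbest[d]
            u = random.random() or 1e-12     # avoid log(1/0)
            jump = alpha * abs(mbest[d] - b[d]) * math.log(1 / u)
            b[d] = attractor + (jump if random.random() < 0.5 else -jump)
        if f(b) < f(pbest[i]):
            pbest[i] = b[:]
    gbest = min(pbest, key=f)
```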

  2. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima and better adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  3. Advanced information processing system: Hosting of advanced guidance, navigation and control algorithms on AIPS using ASTER

    NASA Technical Reports Server (NTRS)

    Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John

    1994-01-01

    This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.

  4. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
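
    The two-scale structure can be sketched in its simplest form: a single microscale diffusion equation (a 1D slab standing in for a spherical resin particle) coupled through a boundary flux to the macroscale well-mixed reactor ODE. The sketch uses explicit Euler rather than the paper's solvers, and all coefficients are illustrative:

```python
import numpy as np

# Microscale: diffusion inside one "particle" (1D slab, no-flux center,
# film-transfer flux at the surface). Macroscale: well-mixed reactor ODE
# losing exactly the mass the particle absorbs.
nr, dr, dt = 20, 0.05, 1e-4
D, k_film, V = 1.0, 5.0, 10.0       # diffusivity, film coefficient, volume

q = np.zeros(nr)                    # concentration inside the particle
C = 1.0                             # reactor concentration

for _ in range(5000):
    flux = k_film * (C - q[-1])     # boundary condition couples the scales
    lap = np.zeros(nr)
    lap[0] = (q[1] - q[0]) / dr**2                 # no-flux at the center
    lap[1:-1] = (q[2:] - 2 * q[1:-1] + q[:-2]) / dr**2
    lap[-1] = (q[-2] - q[-1]) / dr**2
    q += dt * D * lap               # microscale diffusion step
    q[-1] += dt * flux / dr         # flux deposited in the surface cell
    C -= dt * flux / V              # macroscale ODE: reactor loses it

total = C * V + q.sum() * dr        # combined mass is conserved
```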

  5. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  6. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    PubMed

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  7. New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms

    PubMed Central

    Retsky, Michael

    2009-01-01

    A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287

  8. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
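    The "essentially no numerical diffusion" claim can be made concrete with a toy counter-example. The sketch below (a minimal Python illustration, not Prather's moment-preserving scheme itself) advects a top-hat profile with first-order upwinding on a periodic grid: mass is conserved exactly, but the profile smears, which is precisely the artifact the moment-based scheme is designed to avoid.

```python
import numpy as np

# First-order upwind advection of a top-hat profile on a periodic grid.
# This is NOT Prather's scheme; it illustrates the numerical diffusion
# that his moment-preserving method is designed to eliminate.
n, courant, steps = 100, 0.5, 200      # 200 steps at C = 0.5 -> one full lap
q = np.zeros(n)
q[40:60] = 1.0                         # initial top-hat concentration
initial_mass, initial_peak = q.sum(), q.max()

for _ in range(steps):
    q = q - courant * (q - np.roll(q, 1))   # upwind flux for a positive wind

mass_error = abs(q.sum() - initial_mass)    # conservative scheme: ~0
peak_loss = initial_peak - q.max()          # diffusion flattens the peak
```

    After one full lap the profile should be unchanged; the lost peak height measures the scheme's artificial diffusion.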

  9. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

    SciTech Connect

    Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

    2003-12-15

    We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

  10. Two-dimensional atmospheric transport and chemistry model: numerical experiments with a new advection algorithm.

    PubMed

    Shia, R L; Ha, Y L; Wen, J S; Yung, Y L

    1990-05-20

    Extensive testing of the advective scheme, proposed by Prather (1986), has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. We generalize the original scheme to include higher-order moments. In addition, we show how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  11. A Flexible Reservation Algorithm for Advance Network Provisioning

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits at a guaranteed bandwidth for a specified length of time. However, users currently cannot inquire about bandwidth availability, nor are they offered alternatives when reservation requests fail. In general, the number of reservation options grows exponentially with the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided total-volume and time constraints, producing options for both earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
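    The earliest-completion criterion has a simple single-path analogue. The toy scan below is illustrative only (the actual algorithm searches paths in a time-dependent graph): given the residual bandwidth per time slot on one circuit, it finds the first slot by which a requested volume can be moved.

```python
# Toy earliest-completion search on a single path: avail[t] is the residual
# bandwidth in time slot t. This is a hypothetical stand-in for the
# time-dependent, multi-path search the paper describes.
def earliest_completion(avail, volume):
    moved = 0.0
    for t, bandwidth in enumerate(avail):
        moved += bandwidth
        if moved >= volume:
            return t                 # transfer completes within slot t
    return None                      # infeasible within the given horizon

slot = earliest_completion([10, 0, 5, 20, 20], volume=30)  # -> 3
```

    A shortest-duration variant would instead slide a window over the slots and keep the shortest feasible one.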

  12. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
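    The monitor-and-revert idea can be sketched in a few lines. The class name, controllers, and safe-envelope test below are all illustrative, not NASA's framework; the point is the latching switch from an advanced controller to a simpler certified fallback once anomalous behavior is detected.

```python
# Minimal run-time assurance switch (hypothetical sketch): run the advanced
# controller while the monitored state stays inside a verified safe envelope;
# once it leaves, latch permanently onto the certified fallback controller.
class AssuredController:
    def __init__(self, advanced, fallback, is_safe):
        self.advanced, self.fallback, self.is_safe = advanced, fallback, is_safe
        self.reverted = False

    def command(self, state):
        if self.reverted or not self.is_safe(state):
            self.reverted = True          # latch: never switch back
            return self.fallback(state)
        return self.advanced(state)

ctrl = AssuredController(advanced=lambda s: 2.0 * s,   # high-performance law
                         fallback=lambda s: 0.5 * s,   # certified simple law
                         is_safe=lambda s: abs(s) < 10.0)
```

    Real run-time assurance architectures add hysteresis and verified monitors, but the control flow is the same.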

  13. Delicate visual artifacts of advanced digital video processing algorithms

    NASA Astrophysics Data System (ADS)

    Nicolas, Marina M.; Lebowsky, Fritz

    2005-03-01

    With the advent of digital TV, sophisticated video processing algorithms have been developed to improve the rendering of motion or colors. However, the perceived subjective quality of these new systems sometimes conflicts with the objective, measurable improvement we expect to get. In this presentation, we show examples where algorithms should visually improve the skin-tone rendering of decoded pictures under normal conditions but surprisingly fail when the quality of MPEG encoding drops below a just-noticeable threshold. In particular, we demonstrate that simple objective criteria used for the optimization, such as SAD, PSNR or histograms, sometimes fail, partly because they are defined on a global scale, ignoring local characteristics of the picture content. We then integrate a simple human visual model to measure potential artifacts with regard to spatial and temporal variations of the objects' characteristics. Tuning some of the model's parameters allows correlating the perceived objective quality with compression metrics of various encoders. We show the evolution of our reference parameters with respect to the compression ratios. Finally, using the output of the model, we can control the parameters of the skin-tone algorithm to reach an improvement in overall system quality.
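    The global metrics named above are easy to state precisely, and their definitions make the abstract's complaint visible: both are computed over the whole frame, so a small but visible local artifact barely moves them. A minimal sketch:

```python
import numpy as np

def sad(ref, test):
    # Sum of absolute differences: a single global number per frame.
    return float(np.abs(ref.astype(float) - test.astype(float)).sum())

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB, likewise averaged over the frame.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)   # synthetic frame
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255)
```

    Gaussian noise of standard deviation 5 on an 8-bit frame lands in the mid-30s of dB, regardless of whether the errors cluster on a face or scatter uniformly.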

  14. Thermodynamically Consistent Physical Formulation and an Efficient Numerical Algorithm for Incompressible N-Phase Flows

    NASA Astrophysics Data System (ADS)

    Dong, Suchuan

    2015-11-01

    This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservations of mass/momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm has overcome the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.

  15. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
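    Critical-path analysis of a task graph reduces to a longest-path computation on a DAG: the tasks on that path bound the schedule length no matter how the remaining tasks are allocated. A minimal sketch with hypothetical task names and costs:

```python
# Longest (critical) path through a task DAG with per-task costs.
# Task names, costs, and dependencies below are illustrative only.
def critical_path(cost, deps):
    order, seen = [], set()

    def visit(t):                      # depth-first topological sort
        if t in seen:
            return
        seen.add(t)
        for d in deps.get(t, []):
            visit(d)
        order.append(t)

    for t in cost:
        visit(t)

    finish = {}
    for t in order:                    # finish[t] = longest path ending at t
        finish[t] = cost[t] + max((finish[d] for d in deps.get(t, [])),
                                  default=0)
    return max(finish.values())

cost = {"integrate": 4, "estimate": 3, "guide": 2, "output": 1}
deps = {"guide": ["integrate", "estimate"], "output": ["guide"]}
length = critical_path(cost, deps)     # 4 + 2 + 1 = 7
```

    Tasks off the critical path have slack and can be packed onto fewer processing elements without lengthening the schedule.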

  16. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    SciTech Connect

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  17. Advanced entry guidance algorithm with landing footprint computation

    NASA Astrophysics Data System (ADS)

    Leavitt, James Aaron

    The design and performance evaluation of an entry guidance algorithm for future space transportation vehicles is presented. The algorithm performs two functions: on-board trajectory planning and trajectory tracking. The planned longitudinal path is followed by tracking drag acceleration, as is done by the Space Shuttle entry guidance. Unlike the Shuttle entry guidance, lateral path curvature is also planned and followed. A new trajectory planning function for the guidance algorithm is developed that is suitable for suborbital entry and that significantly enhances the overall performance of the algorithm for both orbital and suborbital entry. In comparison with the previous trajectory planner, the new planner produces trajectories that are easier to track, especially near the upper and lower drag boundaries and for suborbital entry. The new planner accomplishes this by matching the vehicle's initial flight path angle and bank angle, and by enforcing the full three-degree-of-freedom equations of motion with control derivative limits. Insights gained from trajectory optimization results contribute to the design of the new planner, giving it near-optimal downrange and crossrange capabilities. Planned trajectories and guidance simulation results are presented that demonstrate the improved performance. Based on the new planner, a method is developed for approximating the landing footprint for entry vehicles in near real-time, as would be needed for an on-board flight management system. The boundary of the footprint is constructed from the endpoints of extreme downrange and crossrange trajectories generated by the new trajectory planner. The footprint algorithm inherently possesses many of the qualities of the new planner, including quick execution, the ability to accurately approximate the vehicle's glide capabilities, and applicability to a wide range of entry conditions. Footprints can be generated for orbital and suborbital entry conditions using a pre

  18. On the complexity of classical and quantum algorithms for numerical problems in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Bessen, Arvid J.

    Our understanding of complex quantum mechanical processes is limited by our inability to solve the equations that govern them except for simple cases. Numerical simulation of quantum systems appears to be our best option to understand, design and improve quantum systems. It turns out, however, that computational problems in quantum mechanics are notoriously difficult to treat numerically. The computational time that is required often scales exponentially with the size of the problem. One of the most radical approaches for treating quantum problems was proposed by Feynman in 1982 [46]: he suggested that quantum mechanics itself showed a promising way to simulate quantum physics. This idea, the so-called quantum computer, showed its potential convincingly in one important regime with the development of Shor's integer factorization algorithm, which improves exponentially on the best known classical algorithm. In this thesis we explore six different computational problems from quantum mechanics, study their computational complexity and try to find ways to remedy them. In the first problem we investigate the reasons behind the improved performance of Shor's and similar algorithms. We show that the key quantum part in Shor's algorithm, the quantum phase estimation algorithm, achieves its good performance through the use of power queries, and we give lower bounds for all phase estimation algorithms that use power queries that match the known upper bounds. Our research indicates that problems that allow the use of power queries will achieve similar exponential improvements over classical algorithms. We then apply our lower bound technique for power queries to the Sturm-Liouville eigenvalue problem and show matching lower bounds to the upper bounds of Papageorgiou and Wozniakowski [85]. It seems to be very difficult, though, to find nontrivial instances of the Sturm-Liouville problem for which power queries can be simulated efficiently. A quantum computer differs from a

  19. An efficient advancing front algorithm for Delaunay triangulation

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1991-01-01

    There has been some recent interest in fluid dynamics calculations on unstructured meshes. One method of unstructured mesh generation involves Delaunay triangulation. This method has certain advantages but it can be expensive to implement. Furthermore, there can be problems with crossing grid lines near boundaries. A method shown here avoids many of the robustness and efficiency problems previously associated with Delaunay triangulation. As an added feature, a simple algorithm is shown which allows removal of diagonal edges from cells that are nearly rectangular. This can result in significant savings in the cost per iteration of a flow solver using this grid.
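    For reference, SciPy exposes a robust Delaunay triangulation (via the Qhull library, not the advancing-front construction discussed above). A quick structural sanity check uses Euler's relation: a triangulation of n points whose hull has b vertices contains 2n − b − 2 triangles.

```python
import numpy as np
from scipy.spatial import Delaunay

# Delaunay triangulation of a slightly perturbed 5x5 grid; the perturbation
# avoids the degenerate cocircular point sets of an exact grid.
rng = np.random.default_rng(1)
pts = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)),
               axis=-1).reshape(-1, 2)
pts += rng.normal(0.0, 0.05, pts.shape)

tri = Delaunay(pts)
n_hull = len(tri.convex_hull)   # hull edges == hull vertices on a closed cycle
n_tris = len(tri.simplices)     # Euler: 2n - b - 2 triangles
```

    The same count holds for any triangulation of the point set, which makes it a cheap invariant to assert after grid generation.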

  20. Advanced algorithms for radiographic material discrimination and inspection system design

    NASA Astrophysics Data System (ADS)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Deinert, Mark R.

    2016-10-01

    X-ray and neutron radiography are powerful tools for non-invasively inspecting the interior of objects. However, current methods are limited in their ability to differentiate materials when multiple materials are present, especially within large and complex objects. Past work has demonstrated that the spectral shift that X-ray beams undergo in traversing an object can be used to detect and quantify nuclear materials. The technique uses a spectrally sensitive detector and an inverse algorithm that varies the composition of the object until the X-ray spectrum predicted by X-ray transport matches the one measured. Here we show that this approach can be adapted to multi-mode radiography, with energy integrating detectors, and that the Cramér-Rao lower bound can be used to choose an optimal set of inspection modes a priori. We consider multi-endpoint X-ray radiography alone, or in combination with neutron radiography using deuterium-deuterium (DD) or deuterium-tritium (DT) sources. We show that for an optimal mode choice, the algorithm can improve discrimination between high-Z materials, specifically between tungsten and plutonium, and estimate plutonium mass within a simulated nuclear material storage system to within 1%.
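    For a linear Gaussian measurement model the a-priori mode comparison is a short computation: with measurements y = Mx + noise, the Fisher information is F = Mᵀ diag(1/σ²) M and the Cramér-Rao lower bound is F⁻¹. The sensitivity matrix and noise levels below are hypothetical placeholders, not measured attenuation data.

```python
import numpy as np

# Cramer-Rao lower bound for estimating two material quantities x from
# noisy linear measurements y = M @ x + noise (illustrative numbers only).
def crlb(M, sigma):
    # Fisher information F = M^T diag(1/sigma^2) M; the CRLB is F^{-1}.
    F = M.T @ np.diag(1.0 / sigma**2) @ M
    return np.linalg.inv(F)

M = np.array([[0.20, 0.15],     # mode 1: hypothetical per-material sensitivity
              [0.12, 0.10],     # mode 2
              [0.05, 0.08]])    # mode 3, e.g. an added neutron mode
sigma = np.array([0.01, 0.01, 0.02])

bound_two = crlb(M[:2], sigma[:2])   # X-ray modes only
bound_all = crlb(M, sigma)           # all three modes
```

    Adding a mode can only add information, so the bound (e.g. its trace) never gets worse; ranking candidate mode sets by trace(CRLB) is the a-priori selection the abstract describes.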

  1. New Advances in the Study of the Proximal Point Algorithm

    NASA Astrophysics Data System (ADS)

    Moroşanu, Gheorghe

    2010-09-01

    Consider in a real Hilbert space H the inexact, Halpern-type, proximal point algorithm xn+1 = αnu + (1-αn)Jβnxn + en, n = 0,1,…, (H-PPA) where u, x0∈H are given points, Jβn = (I+βnA)⁻¹ is the resolvent of a given maximal monotone operator A, and (en) is the error sequence, under new assumptions on αn∈(0,1) and βn∈(0,1). Several strong convergence results for the H-PPA are presented under the general condition that the error sequence converges strongly to zero, thus improving Rockafellar's classical summability condition on (‖en‖) that has been used extensively so far for different versions of the proximal point algorithm. Our results extend and improve some recent ones. These results can be applied to approximate minimizers of convex functionals. Convergence rate estimates are established for a sequence approximating the minimum value of such a functional.
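    For a concrete instance, take A = ∇f for the quadratic f(x) = ½‖x − c‖², whose resolvent has the closed form Jβx = (x + βc)/(1 + β). The sketch below (with zero errors en and the illustrative choice αn = 1/(n+2)) then drives the Halpern-type iteration toward the unique minimizer c.

```python
import numpy as np

# Halpern-type proximal point iteration x_{n+1} = a_n*u + (1-a_n)*J_b(x_n)
# for A = grad f with f(x) = 0.5*||x - c||^2, whose resolvent is the
# closed form J_b(x) = (x + b*c)/(1 + b). Errors e_n are taken to be zero.
def halpern_ppa(u, x0, c, beta=1.0, n_iter=2000):
    x = x0.copy()
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)                # alpha_n -> 0, sum diverges
        Jx = (x + beta * c) / (1.0 + beta)   # resolvent of A at x
        x = alpha * u + (1.0 - alpha) * Jx
    return x

c = np.array([2.0, -1.0])                    # unique minimizer of f
x = halpern_ppa(u=np.zeros(2), x0=np.array([10.0, 10.0]), c=c)
```

    Because the fixed-point set of Jβ is the single point c here, the iterates approach c at the slow O(1/n) rate typical of Halpern averaging.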

  2. Chaotic algorithms: A numerical exploration of the dynamics of a stiff photoconductor model

    SciTech Connect

    Markus, A.S. de

    1997-04-01

    The photoconducting property of semiconductors leads, in general, to very complex kinetics for the charge carriers due to the non-equilibrium processes involved. In a semiconductor with one type of trap, the dynamics of the photoconducting process are described by a set of coupled non-linear ordinary differential equations, where n and p are the free electron and hole densities and m is the trapped electron density at time t. So far, there is no known closed solution for this set of non-linear differential equations, and therefore numerical integration techniques have to be employed, as, for example, the standard procedure of the Runge-Kutta (RK) method. Now, each one of the mechanisms of generation, recombination, and trapping has its own lifetime, which means that different time constants are to be expected in the time-dependent behavior of the photocurrent. Thus, depending on the parameters of the model, the system may become stiff if the time scales of n, m, and p separate considerably. This situation may impose considerable stress upon a fixed-step numerical algorithm such as RK, which may then produce unreliable results, and other methods have to be considered. Therefore, the purpose of this note is to examine, for a critical range of parameters, the results of the numerical integration of the stiff system obtained by standard numerical schemes, such as the single-step fourth-order Runge-Kutta method and the multistep Gear method, the latter being appropriate for a stiff system of equations. 7 refs., 2 figs.
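    The stiffness effect is easy to reproduce with a stand-in system. The sketch below (a linear two-variable model with time constants separated by four orders of magnitude, not the photoconductor equations themselves) shows an explicit Runge-Kutta solver burning vastly more steps than a BDF/Gear-type solver at the same tolerance.

```python
from scipy.integrate import solve_ivp

# Linear stand-in with widely separated time constants (1e-4 vs 1),
# mimicking the carrier-lifetime separation that makes the model stiff.
def rhs(t, y):
    fast, slow = y
    return [-1.0e4 * (fast - slow),    # fast variable tracks the slow one
            -slow]                     # slow variable decays gently

y0 = [0.0, 1.0]
rk = solve_ivp(rhs, (0.0, 10.0), y0, method="RK45", rtol=1e-6, atol=1e-9)
bdf = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1e-6, atol=1e-9)
```

    The explicit method's step size is capped by the fast time constant for the entire integration, long after the fast transient has died away; the implicit Gear/BDF family takes steps sized to the slow dynamics.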

  3. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    NASA Astrophysics Data System (ADS)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well-known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object-oriented programming language such as C++ in order to better comprehend the programming environment of Diffpack. Certainly, prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models, and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well-known numerical methods for solving the basic types of PDEs. Further, programming techniques for the serial as well as the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. 
Examples solving in parallel simple PDEs using

  4. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is larger than the viscous relaxation time. The small time step required for stability (< 2 kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  5. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated so as to efficiently compute the feedback gains directly rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
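    The Newton-Kleinman half of such a hybrid is compact to sketch: starting from a stabilizing gain K₀, each step solves one Lyapunov equation (A − BK)ᵀP + P(A − BK) = −(Q + KᵀRK) and updates K = R⁻¹BᵀP. The sketch below uses SciPy's dense Lyapunov solver in place of the Smith-scheme inner iterations the abstract describes, and a toy system where K₀ = 0 is stabilizing.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Newton-Kleinman iteration for the LQR gain K = R^{-1} B^T P.
def newton_kleinman(A, B, Q, R, K0, n_iter=20):
    K = K0
    for _ in range(n_iter):
        Ac = A - B @ K                       # closed-loop matrix
        # Solve Ac^T P + P Ac = -(Q + K^T R K)  (one Lyapunov equation per step)
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)      # gain update
    return K, P

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # open-loop stable, so K0 = 0 works
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = newton_kleinman(A, B, Q, R, K0=np.zeros((1, 2)))
```

    The iteration converges quadratically to the stabilizing Riccati solution, which is why it pairs well with a cheap Chandrasekhar warm start on large systems.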

  6. Advances in contact algorithms and their application to tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.

    1988-01-01

    Currently used techniques for tire contact analysis are reviewed. Discussion focuses on the different techniques used in modeling frictional forces and the treatment of contact conditions. A status report is presented on a new computational strategy for the modeling and analysis of tires, including the solution of the contact problem. The key elements of the proposed strategy are: (1) use of semianalytic mixed finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) use of perturbed Lagrangian formulation for the determination of the contact area and pressure; and (3) application of multilevel iterative procedures and reduction techniques to generate the response of the tire. Numerical results are presented to demonstrate the effectiveness of a proposed procedure for generating the tire response associated with different Fourier harmonics.

  7. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    NASA Astrophysics Data System (ADS)

    Dong, S.

    2015-02-01

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N ⩾ 2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N - 1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N - 1) strongly-coupled phase field equations for general order parameters into 2 (N - 1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir-de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
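    Each of the decoupled Helmholtz-type problems mentioned above is a standard scalar solve. A 1D finite-difference stand-in (not the paper's discretization) for α·u − u″ = f with homogeneous Dirichlet data, verified against a manufactured solution:

```python
import numpy as np

# Finite-difference solve of alpha*u - u'' = f on (0,1), u(0) = u(1) = 0:
# a 1D stand-in for the decoupled Helmholtz-type equations produced by the
# N-phase reformulation (the paper itself uses a different discretization).
def helmholtz_1d(alpha, f, h):
    n = len(f)                                   # interior unknowns
    main = (alpha + 2.0 / h**2) * np.ones(n)
    off = (-1.0 / h**2) * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

# Manufactured solution u = sin(pi x), alpha = 1  =>  f = (1 + pi^2) sin(pi x)
n = 99
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
u = helmholtz_1d(1.0, (1.0 + np.pi**2) * np.sin(np.pi * x), h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

    Because the (N − 1) reformulated equations are fully decoupled, solves of exactly this shape can be dispatched independently, which is the source of the method's favorable computational complexity.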

  8. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  9. Advanced Numerical Imaging Procedure Accounting for Non-Ideal Effects in GPR Scenarios

    NASA Astrophysics Data System (ADS)

    Comite, Davide; Galli, Alessandro; Catapano, Ilaria; Soldovieri, Francesco

    2015-04-01

advanced implementation have also been tested by introducing 'errors' on the knowledge of the background medium permittivity, by simulating the presence of one or more layers, and by choosing different models of the surface roughness. The impact of these issues on the performance of both the conventional procedure and the advanced one will be extensively highlighted and discussed at the conference. [1] G. Valerio et al., "GPR detectability of rocks in a Martian-like shallow subsoil: A numerical approach," Plan. Sp. Sci., vol. 62, pp. 31-40, 2012. [2] A. Galli et al., "3D imaging of buried dielectric targets with a tomographic microwave approach applied to GPR synthetic data," Int. J. Antennas Propag., art. ID 610389, 10 pp., 2013. [3] F. Soldovieri et al., "A linear inverse scattering algorithm for realistic GPR applications," Near Surface Geophysics, 5 (1), pp. 29-42, 2007.

  10. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  11. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  12. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  13. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  14. Numerical Investigation of a Cascaded Longitudinal Space-Charge Amplifier at the Fermilab's Advanced Superconducting Test Accelerator

    SciTech Connect

    Halavanau, A.; Piot, P.

    2015-06-01

    In a cascaded longitudinal space-charge amplifier (LSCA), initial density noise in a relativistic e-beam is amplified via the interplay of longitudinal space charge forces and properly located dispersive sections. This type of amplification process was shown to potentially result in large final density modulations [1] compatible with the production of broadband electromagnetic radiation. The technique was recently demonstrated in the optical domain [2]. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at Fermilab’s Advanced Superconducting Test Accelerator (ASTA). We especially explore the properties of the produced broadband radiation. Our studies have been conducted with a grid-less three-dimensional space-charge algorithm.

  15. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    SciTech Connect

    Becker, R; Stolken, J; Jannetti, C; Bassani, J

    2003-10-16

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003), to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.
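The stability advantage of implicit integration can be illustrated with a generic backward-Euler update solved by Newton iteration at each step. This is a hedged, scalar sketch, not the authors' UMAT or their SMA constitutive equations:

```python
def backward_euler_step(g, dg, y_old, dt, tol=1e-12, max_iter=50):
    """One implicit step of y' = g(y): solve y - y_old - dt*g(y) = 0 by Newton."""
    y = y_old  # initial guess
    for _ in range(max_iter):
        r = y - y_old - dt * g(y)        # residual of the implicit update
        if abs(r) < tol:
            break
        y -= r / (1.0 - dt * dg(y))      # Newton correction
    return y

# Stiff linear test problem y' = -1000*y: backward Euler stays stable
# even with a step size far beyond the explicit stability limit.
k = 1000.0
y, dt = 1.0, 0.01
for _ in range(10):
    y = backward_euler_step(lambda v: -k * v, lambda v: -k, y, dt)
```

For this stiff test an explicit forward-Euler step of the same size would blow up (the amplification factor is 1 − k·dt = −9), while the implicit update damps the solution monotonically, which is the qualitative reason implicit schemes pay off for stiff constitutive equations.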

  16. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

    The problem of environmental monitoring networks optimization (MNO) belongs to one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as a design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (family of kriging models, conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion, which has some advantages and drawbacks. In the present research we study an application of advanced techniques following from statistical learning theory (SLT) - support vector machines (SVM) - and consider the optimization of monitoring networks for a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high dimensional spaces. The SVM solution maximizes the margin between classes and has a good generalization property for noisy data. The sparse solution of SVM is based on support vectors - data which contribute to the solution with nonzero weights. Fundamentally, the MNO for classification problems can be considered as a task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (error on new independent measurements). In SLT this is a typical problem of active learning - a selection of the new unlabelled points which efficiently reduce the testing error. A classical approach (margin sampling) to active learning is to sample the points closest to the classification boundary. This solution is suboptimal when points (or generally the dataset) are redundant for the same class. 
In the present research we propose and study two new advanced methods of active learning adapted to the solution of
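Margin sampling, the classical active-learning baseline mentioned in the abstract, fits in a few lines: rank unlabelled candidates by how close their decision value is to the boundary f(x) = 0. The linear scorer below is a hypothetical stand-in for a trained SVM decision function, used only to make the sketch runnable:

```python
import numpy as np

def margin_sampling(candidates, decision_fn, n_pick):
    """Return indices of the n_pick candidates closest to the decision boundary."""
    scores = np.abs(decision_fn(candidates))  # distance proxy: |f(x)|
    return np.argsort(scores)[:n_pick]

# Hypothetical linear decision function f(x) = w.x + b standing in for an SVM
w, b = np.array([1.0, -1.0]), 0.0
f = lambda X: X @ w + b

X = np.array([[0.1, 0.0],    # near the boundary
              [2.0, 0.0],    # confidently classified
              [0.0, 0.05],   # nearest the boundary
              [-3.0, 0.0]])  # confidently classified
picked = margin_sampling(X, f, 2)
```

The two selected points are the ones the classifier is least certain about; as the abstract notes, this baseline degrades when many near-boundary points are redundant within one class, which motivates the improved strategies the authors propose.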

  17. A rescaling algorithm for the numerical solution to the porous medium equation in a two-component domain

    NASA Astrophysics Data System (ADS)

    Filo, Ján; Hundertmark-Zaušková, Anna

    2016-10-01

    The aim of this paper is to design a rescaling algorithm for the numerical solution to the system of two porous medium equations defined on two different components of the real line, that are connected by the nonlinear contact condition. The algorithm is based on the self-similarity of solutions on different scales and it presents a space-time adaptable method producing a more exact numerical solution in the area of the interface between the components, whereas the number of grid points stays fixed.

  18. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics), in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and constraints of the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions in multiple porosity materials. We also determine a relation between local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of the existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm to the reaction transport simulator
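The harmonic-average statement in the abstract can be checked with a tiny numerical example. The volume-fraction weighting used here is an assumption of this sketch (the abstract states the harmonic-average result without spelling out the weights):

```python
def harmonic_bulk_modulus(moduli, fractions):
    """Volume-fraction weighted harmonic average of drained bulk moduli."""
    assert abs(sum(fractions) - 1.0) < 1e-12, "fractions must sum to 1"
    return 1.0 / sum(phi / K for phi, K in zip(fractions, moduli))

# Two continua (e.g. fracture and matrix) with equal volume fractions:
K_up = harmonic_bulk_modulus([10.0, 40.0], [0.5, 0.5])
```

The harmonic mean (here 16.0, well below the arithmetic mean of 25.0) is dominated by the softer continuum, consistent with compliant fractures controlling the upscaled drained response.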

  19. Left Ventricular Flow Analysis: Recent Advances in Numerical Methods and Applications in Cardiac Ultrasound

    PubMed Central

    Borazjani, Iman; Westerdale, John; McMahon, Eileen M.; Rajaraman, Prathish K.; Heys, Jeffrey J.

    2013-01-01

    The left ventricle (LV) pumps oxygenated blood from the lungs to the rest of the body through systemic circulation. The efficiency of such a pumping function is dependent on blood flow within the LV chamber. It is therefore crucial to accurately characterize LV hemodynamics. Improved understanding of LV hemodynamics is expected to provide important clinical diagnostic and prognostic information. We review the recent advances in numerical and experimental methods for characterizing LV flows and focus on analysis of intraventricular flow fields by echocardiographic particle image velocimetry (echo-PIV), due to its potential for broad and practical utility. Future research directions to advance patient-specific LV simulations include development of methods capable of resolving heart valves, higher temporal resolution, automated generation of three-dimensional (3D) geometry, and incorporating actual flow measurements into the numerical solution of the 3D cardiovascular fluid dynamics. PMID:23690874

  20. Ductile damage prediction in metal forming processes: Advanced modeling and numerical simulation

    NASA Astrophysics Data System (ADS)

    Saanouni, K.

    2013-05-01

    This paper describes the needs required in modern virtual metal forming including both sheet and bulk metal forming of mechanical components. These concern the advanced modeling of thermo-mechanical behavior including the multiphysical phenomena and their interaction or strong coupling, as well as the associated numerical aspects using fully adaptive simulation strategies. First, a survey of advanced constitutive equations accounting for the main thermomechanical phenomena such as the thermo-elasto-plastic finite strains with isotropic and kinematic hardenings fully coupled with ductile damage will be presented. Only the macroscopic phenomenological approach with state variables (monoscale approach) will be discussed in the general framework of the rational thermodynamics for generalized micromorphic continua. The micro-macro (multi-scale) approach in the framework of polycrystalline inelasticity is not presented here for the sake of brevity but will be presented during the oral presentation. The main numerical aspects related to the resolution of the associated initial and boundary value problem will be outlined. A fully adaptive numerical methodology will be briefly described and some numerical examples will be given in order to show the high predictive capabilities of this adaptive methodology for virtual metal forming simulations.

  1. Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.

    PubMed

    Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A

    1989-01-01

    Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be HA = 0.90 MPa, vs = 0.39 and k = 0.44 x 10(-15) m4/Ns respectively, and those for patellar groove cartilage to be HA = 0.47 MPa, vs = 0.24, k = 1.42 x 10(-15) m4/Ns. One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much less than those determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and on methods used to determine the elastic moduli of cartilage using indentation experiments.

  2. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
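A minimal sketch of the grouping idea described above: given a pairwise compatibility matrix between service areas, grow groups whose members are all mutually compatible. This naive greedy pass is only illustrative; NASARC itself exhaustively enumerates compatible groups and then selects among them heuristically:

```python
def greedy_compatible_groups(compat):
    """Partition indices 0..n-1 into groups whose members are pairwise compatible.

    compat is a symmetric 0/1 matrix; compat[i][j] == 1 means i and j can be
    closely located in orbit. A simple greedy stand-in for NASARC's enumeration.
    """
    n = len(compat)
    unassigned = set(range(n))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = [seed]
        for j in sorted(unassigned - {seed}):
            if all(compat[j][k] for k in group):  # must be compatible with everyone
                group.append(j)
        groups.append(group)
        unassigned -= set(group)
    return groups

# Four hypothetical administrations: 0-1 compatible, 2-3 compatible
compat = [[1, 1, 0, 1],
          [1, 1, 0, 0],
          [0, 0, 1, 1],
          [1, 0, 1, 1]]
groups = greedy_compatible_groups(compat)
```

Each resulting group would then be assigned an arc segment over which all of its members' satellites can coexist; the hard part NASARC addresses is choosing among the combinatorially many valid groupings so that every administration ends up with a predetermined arc.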

  3. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  4. Advances in Analytical and Numerical Dispersion Modeling of Pollutants Releasing from an Area-source

    NASA Astrophysics Data System (ADS)

    Nimmatoori, Praneeth

    The air quality near agricultural activities such as tilling, plowing, harvesting, and manure application is of main concern because they release fine particulate matter into the atmosphere. These releases are modeled as area-sources in the air quality modeling research. None of the currently available dispersion models relate and incorporate physical characteristics and meteorological conditions for modeling the dispersion and deposition of particulates emitting from such area-sources. This knowledge gap was addressed by developing the advanced analytical and numerical methods for modeling the dispersion of particulate matter. The development, application, and evaluation of new dispersion modeling methods are discussed in detail in this dissertation. In the analytical modeling, a ground-level area source analytical dispersion model known as particulate matter deposition -- PMD was developed for predicting the concentrations of different particle sizes. Both the particle dynamics (particle physical characteristics) and meteorological conditions which have significant effect on the dispersion of particulates were related and incorporated in the PMD model using the formulations of particle gravitational settling and dry deposition velocities. The modeled particle size concentrations of the PMD model were evaluated statistically after applying it to particulates released from a biosolid applied agricultural field. The evaluation of the PMD model using the statistical criteria concluded effective and successful inclusion of dry deposition theory for modeling particulate matter concentrations. A comprehensive review of analytical area-source dispersion models, which do not account for dry deposition and treat pollutants as gases, was conducted and determined three models -- the Shear, the Parker, and the Smith. 
A statistical evaluation of these dispersion models was conducted after applying them to two different field data sets and the statistical results concluded that

  5. Energy balance in advanced audio coding encoder bit-distortion loop algorithm

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz; Pastuszak, Grzegorz

    2013-10-01

    The paper presents two techniques of balancing energy in ScaleFactor bands for Advanced Audio Coding. The techniques allow the AAC encoder to achieve better audio quality. The first one modifies Scale Factors assigned to each band after the quantization, whereas the second finds and changes offsets in the quantization - just before rounding down. The implementations of the algorithms have been tested and the results discussed. Results show that these techniques significantly improve the quality. Finally, hardware implementation possibilities are discussed.

  6. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
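The higher order moments referred to above (singles, doubles, triples, quads, pents) are reduced factorial moments of the measured multiplicity distribution. A hedged sketch of their computation from gated counts follows; the dead time correction that is the report's actual subject is not shown, and the toy gate counts are invented for illustration:

```python
from collections import Counter
from math import comb

def factorial_moments(counts_per_gate, max_order=5):
    """Reduced factorial moments m_r = <C(n, r)> of a multiplicity distribution.

    counts_per_gate: list of neutron counts observed in successive gates.
    Returns [m_1, ..., m_max_order] (m_1 = singles rate proxy, m_2 = doubles, ...).
    """
    hist = Counter(counts_per_gate)
    total = sum(hist.values())
    return [sum(comb(n, r) * mult for n, mult in hist.items()) / total
            for r in range(1, max_order + 1)]

# Hypothetical gate counts, purely for illustration
m = factorial_moments([0, 1, 2, 2, 3, 1, 0, 4])
```

In practice each measured moment must be dead-time corrected before being fed into the point-model (or extended) equations, which is precisely where the self-consistent correction algorithms described in the report come in.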

  7. Numerical Roll Reversal Predictor Corrector Aerocapture and Precision Landing Guidance Algorithms for the Mars Surveyor Program 2001 Missions

    NASA Technical Reports Server (NTRS)

    Powell, Richard W.

    1998-01-01

    This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.

  8. Numerical study of Alfvén eigenmodes in the Experimental Advanced Superconducting Tokamak

    SciTech Connect

    Hu, Youjun; Li, Guoqiang; Yang, Wenjun; Zhou, Deng; Ren, Qilong; Gorelenkov, N. N.; Cai, Huishan

    2014-05-15

    Alfvén eigenmodes in up-down asymmetric tokamak equilibria are studied by a new magnetohydrodynamic eigenvalue code. The code is verified with the NOVA code for the Solovév equilibrium and then is used to study Alfvén eigenmodes in an up-down asymmetric equilibrium of the Experimental Advanced Superconducting Tokamak. The frequency and mode structure of toroidicity-induced Alfvén eigenmodes are calculated. It is demonstrated numerically that up-down asymmetry induces phase variation in the eigenfunction across the major radius on the midplane.

  9. Simulation studies of the impact of advanced observing systems on numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Kalnay, E.; Susskind, J.; Reuter, D.; Baker, W. E.; Halem, M.

    1984-01-01

    To study the potential impact of advanced passive sounders and lidar temperature, pressure, humidity, and wind observing systems on large-scale numerical weather prediction, a series of realistic simulation studies is conducted jointly by the European Centre for Medium-Range Weather Forecasts, the National Meteorological Center, and the Goddard Laboratory for Atmospheric Sciences. The project attempts to avoid the unrealistic character of earlier simulation studies. The previous simulation studies and real-data impact tests are reviewed and the design of the current simulation system is described. Consideration is given to the simulation of observations of space-based sounding systems.

  10. A numerical algorithm for stress integration of a fiber-fiber kinetics model with Coulomb friction for connective tissue

    NASA Astrophysics Data System (ADS)

    Kojic, M.; Mijailovic, S.; Zdravkovic, N.

    Complex behaviour of connective tissue can be modeled by a fiber-fiber kinetics material model introduced in Mijailovic (1991), Mijailovic et al. (1993). The model is based on the hypothesis of sliding of elastic fibers with Coulomb and viscous friction. The main characteristics of the model were verified experimentally in Mijailovic (1991), and a numerical procedure for one-dimensional tension was developed considering sliding as a contact problem between bodies. In this paper we propose a new and general numerical procedure for calculation of the stress-strain law of the fiber-fiber kinetics model in the case of Coulomb friction. Instead of using a contact algorithm (Mijailovic 1991), which is numerically inefficient and never sufficiently reliable, here the history of sliding along the sliding length is traced numerically through a number of segments along the fiber. The algorithm is simple, efficient and reliable and provides solutions for arbitrary cyclic loading, including tension, shear, and tension and shear simultaneously, giving hysteresis loops typical for soft tissue response. The model is built into the finite element technique, providing the possibility of its application to general and real problems. Solved examples illustrate the main characteristics of the model and of the developed numerical method, as well as its applicability to practical problems. Accuracy of some results, for the simple case of uniaxial loading, is verified by comparison with analytical solutions.
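The stick-slip hysteresis that such fiber sliding produces can be illustrated with a single elastic element limited by Coulomb friction. This is a schematic one-element sketch under assumed parameters, not the authors' segment-tracing algorithm:

```python
def coulomb_element(strain_history, k, f_slip):
    """Stress response of one elastic element (stiffness k) that slips
    whenever the frictional limit f_slip is exceeded (Coulomb friction)."""
    stress, slip = [], 0.0
    for e in strain_history:
        s = k * (e - slip)               # elastic trial stress
        if abs(s) > f_slip:              # slide until stress sits at the limit
            slip = e - (f_slip if s > 0 else -f_slip) / k
            s = k * (e - slip)
        stress.append(s)
    return stress

# Load to strain 2, then unload back to 0 (assumed parameters k=1, f_slip=1.5)
hist = coulomb_element([0.0, 1.0, 2.0, 1.0, 0.0], k=1.0, f_slip=1.5)
```

On loading the stress saturates at the friction limit; on unloading it follows a shifted elastic branch and ends at a residual stress, tracing the open hysteresis loop characteristic of Coulomb-friction sliding in soft tissue response.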

  11. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code which models sooting laminar coflow diffusion flames is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated. Catalogue identifier: AFAU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 94964 No. of bytes in distributed program, including test data, etc.: 6242986 Distribution format: tar.gz Programming language: Fortran 90, MPI. (Requires an Intel compiler). Computer: Workstations

  12. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5, which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  13. Advanced Numerical-Algebraic Thinking: Constructing the Concept of Covariation as a Prelude to the Concept of Function

    ERIC Educational Resources Information Center

    Hitt, Fernando; Morasse, Christian

    2009-01-01

    Introduction: In this document we stress the importance of developing in children a structure for advanced numerical-algebraic thinking that can provide an element of control when solving mathematical situations. We analyze pupils' conceptions that induce errors in algebra due to a lack of control in connection with their numerical thinking. We…

  14. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine which does have cross platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
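Procedural cloud generation of the kind described above is commonly built on fractal value noise: pseudo-random values on a lattice are smoothly interpolated, and several octaves are summed. The sketch below is an illustrative stand-in under that assumption, not the IGOAL/AGEA implementation; all function names are hypothetical.

```python
import math

# Sketch: fractal value-noise cloud density in [0, 1], a common basis for
# procedurally generated clouds. Illustrative only.

def _hash2(ix, iy):
    """Deterministic pseudo-random value in [0, 1) for a lattice point."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def _value_noise(x, y):
    """Bilinearly interpolated lattice noise with smoothstep weights."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = _hash2(ix, iy) * (1 - sx) + _hash2(ix + 1, iy) * sx
    bot = _hash2(ix, iy + 1) * (1 - sx) + _hash2(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bot * sy

def cloud_density(x, y, octaves=4):
    """Sum of noise octaves with halving amplitude, normalized to [0, 1]."""
    total, amp, norm = 0.0, 1.0, 0.0
    for o in range(octaves):
        total += amp * _value_noise(x * 2**o, y * 2**o)
        norm += amp
        amp *= 0.5
    return total / norm
```

Because the density is a pure function of position, the same clouds can be regenerated on demand in both an animation plug-in and a game-engine script without storing a texture map.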

  15. Advanced MHD Algorithm for Solar and Space Science: lst Year Semi Annual Progress Report

    NASA Technical Reports Server (NTRS)

    Schnack, Dalton D.; Lionello, Roberto

    2003-01-01

    We report progress on the development of MH4D for the first and second quarters of FY2004, December 29, 2002 - June 6, 2003. The present version of MH4D can now solve the full viscous and resistive MHD equations using either an explicit or a semi-implicit time advancement algorithm. In this report we describe progress in the following areas. During the last two quarters we presented a poster at the EGS-AGU-EUG Joint Assembly in Nice, France, April 6-11, 2003, and a poster at the 2003 International Sherwood Theory Conference in Corpus Christi, Texas, April 28-30, 2003. In the area of code development, we have implemented the MHD equations and the semi-implicit algorithm. The new features have been tested.
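The explicit versus semi-implicit distinction above is easiest to see on a stiff scalar decay term, du/dt = -k u. A minimal sketch (not MH4D's semi-implicit MHD operator, just the one-term analogue):

```python
# Sketch: explicit (forward Euler) vs. implicit (backward Euler) time
# advancement for du/dt = -k*u. The explicit update is unstable when
# k*dt > 2; the implicit update is unconditionally stable.

def advance_explicit(u, k, dt, steps):
    for _ in range(steps):
        u = u + dt * (-k * u)       # forward Euler
    return u

def advance_implicit(u, k, dt, steps):
    for _ in range(steps):
        u = u / (1.0 + k * dt)      # backward Euler: (u_new - u)/dt = -k*u_new
    return u
```

With k = 100 and dt = 0.1 the explicit amplification factor is 1 - k*dt = -9, so the solution blows up, while the implicit factor 1/(1 + k*dt) = 1/11 decays; this stability gain is what lets semi-implicit codes take time steps far larger than the fastest wave or diffusion scale allows explicitly.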

  16. Advances in methods and algorithms in a modern quantum chemistry program package.

    PubMed

    Shao, Yihan; Molnar, Laszlo Fusti; Jung, Yousung; Kussmann, Jörg; Ochsenfeld, Christian; Brown, Shawn T; Gilbert, Andrew T B; Slipchenko, Lyudmila V; Levchenko, Sergey V; O'Neill, Darragh P; DiStasio, Robert A; Lochan, Rohini C; Wang, Tao; Beran, Gregory J O; Besley, Nicholas A; Herbert, John M; Lin, Ching Yeh; Van Voorhis, Troy; Chien, Siu Hung; Sodt, Alex; Steele, Ryan P; Rassolov, Vitaly A; Maslen, Paul E; Korambath, Prakashan P; Adamson, Ross D; Austin, Brian; Baker, Jon; Byrd, Edward F C; Dachsel, Holger; Doerksen, Robert J; Dreuw, Andreas; Dunietz, Barry D; Dutoi, Anthony D; Furlani, Thomas R; Gwaltney, Steven R; Heyden, Andreas; Hirata, So; Hsu, Chao-Ping; Kedziora, Gary; Khalliulin, Rustam Z; Klunzinger, Phil; Lee, Aaron M; Lee, Michael S; Liang, Wanzhen; Lotan, Itay; Nair, Nikhil; Peters, Baron; Proynov, Emil I; Pieniazek, Piotr A; Rhee, Young Min; Ritchie, Jim; Rosta, Edina; Sherrill, C David; Simmonett, Andrew C; Subotnik, Joseph E; Woodcock, H Lee; Zhang, Weimin; Bell, Alexis T; Chakraborty, Arup K; Chipman, Daniel M; Keil, Frerich J; Warshel, Arieh; Hehre, Warren J; Schaefer, Henry F; Kong, Jing; Krylov, Anna I; Gill, Peter M W; Head-Gordon, Martin

    2006-07-21

    Advances in theory and algorithms for electronic structure calculations must be incorporated into program packages to enable them to become routinely used by the broader chemical community. This work reviews advances made over the past five years or so that constitute the major improvements contained in a new release of the Q-Chem quantum chemistry package, together with illustrative timings and applications. Specific developments discussed include fast methods for density functional theory calculations, linear scaling evaluation of energies, NMR chemical shifts and electric properties, fast auxiliary basis function methods for correlated energies and gradients, equation-of-motion coupled cluster methods for ground and excited states, geminal wavefunctions, embedding methods and techniques for exploring potential energy surfaces. PMID:16902710

  17. Computationally Efficient Algorithms for Parameter Estimation and Uncertainty Propagation in Numerical Models of Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Townley, Lloyd R.; Wilson, John L.

    1985-12-01

    Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads, are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
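The first-order second-moment idea can be stated compactly: linearize the model about the parameter estimates, then propagate the parameter covariance P through the sensitivity (Jacobian) matrix J to get the covariance of predicted heads, Cov_h = J P J^T. A pure-Python sketch of that propagation (a real groundwater model would exploit sparse linear algebra, as the abstract notes):

```python
# Sketch: first-order second-moment covariance propagation, Cov_h = J P J^T.
# Plain nested lists stand in for sparse matrices; illustrative only.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def head_covariance(J, P):
    """Covariance of predicted heads from parameter covariance P."""
    return matmul(matmul(J, P), transpose(J))
```

The diagonal of the result gives the variance of each predicted head, which is usually the quantity reported.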

  18. Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.

    2003-01-01

    A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equation which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent, supersonic, MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.

  19. Numerical approach for the voloxidation process of an advanced spent fuel conditioning process (ACP)

    SciTech Connect

    Park, Byung Heung; Jeong, Sang Mun; Seo, Chung-Seok

    2007-07-01

    A voloxidation process is adopted as the first step of an advanced spent fuel conditioning process in order to prepare the SF oxide to be reduced in the subsequent electrolytic reduction process. A semi-batch type voloxidizer was devised to transform a SF pellet into powder. In this work, a simple reactor model was developed as a numerical approach to correlating the gas-phase flow rate with the operation time. Under the assumption that the solid phase and the gas phase are homogeneous in the reactor, a reaction rate for the oxidation was introduced into a mass balance equation. The resulting equation can describe the change in the outlet oxygen concentration, including cases in which the gas flow is insufficient to sustain the reaction at its maximum rate. (authors)
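The supply-limited behavior in that mass balance can be sketched with a single well-mixed step: oxygen consumption is the lesser of the kinetic demand and what the inlet flow supplies, so the outlet concentration drops to zero once the flow is too small to sustain the maximum reaction rate. All names and numbers below are illustrative, not taken from the ACP model.

```python
# Sketch: one well-mixed step of a semi-batch oxidation mass balance.
# Consumption is capped by the oxygen actually supplied by the gas flow.

def outlet_o2(flow_rate, inlet_conc, max_reaction_rate):
    """Outlet O2 concentration (mol/L) for one reactor step.

    flow_rate         -- gas volumetric flow (L/s)
    inlet_conc        -- inlet O2 concentration (mol/L)
    max_reaction_rate -- kinetic O2 demand (mol/s)
    """
    supplied = flow_rate * inlet_conc            # mol/s of O2 entering
    consumed = min(max_reaction_rate, supplied)  # reaction cannot exceed supply
    return (supplied - consumed) / flow_rate     # mol/L leaving
```

Integrating such a step over time until the solid is fully converted yields the flow-rate-versus-operation-time correlation the abstract describes.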

  20. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    NASA Astrophysics Data System (ADS)

    Bor, E.; Turduev, M.; Kurt, H.

    2016-08-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
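Differential evolution, the optimizer class used above, is simple to state: each candidate is perturbed by a scaled difference of two other population members, crossed over with its parent, and kept only if it scores at least as well. A minimal DE/rand/1/bin sketch, minimizing a sphere function rather than an FDTD-evaluated focusing objective:

```python
import random

# Sketch: differential evolution (DE/rand/1/bin). Illustrative parameters;
# the study's actual cost function is a multiobjective FDTD evaluation.

def differential_evolution(cost, bounds, pop_size=20, F=0.6, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct partners, none equal to the target index i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)          # force at least one mutation
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)      # clip mutant to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = cost(trial)
            if tc <= costs[i]:                   # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

In the paper's setting the decision variables are the scatterer positions and the cost combines focal spot size and side-lobe intensity; here the same loop simply drives a toy objective to its minimum.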

  2. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light.

    PubMed

    Bor, E; Turduev, M; Kurt, H

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  3. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    PubMed Central

    Bor, E.; Turduev, M.; Kurt, H.

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  4. The role of numerical simulation for the development of an advanced HIFU system

    NASA Astrophysics Data System (ADS)

    Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro

    2014-10-01

    High-intensity focused ultrasound (HIFU) has been used clinically and is under clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment instead of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the HIFU beam and localized motion imaging for validating the treatment of tissue are briefly introduced as real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring as well as on the improvement of the safety and efficacy of treatment in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking the elasticity of tissue into account, and was validated by comparison with in vitro experiments in which the ultrasound emitted from a phased-array transducer propagates through an acrylic plate acting as a bone phantom. As a result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with the experimental results. Therefore, the HIFU simulator accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it is experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can thus contribute to transducer design and treatment planning.

  5. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    SciTech Connect

    Balman, Mehmet; Kosar, Tevfik

    2010-05-20

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service, where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
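The core admission decision in advance reservation can be sketched as an interval search: accept a transfer only if some window of time slots ending by its deadline has enough residual bandwidth, then commit that bandwidth. This is a toy model of the scheduler/reservation-manager interaction under stated assumptions (unit time slots, fixed rate per job), not the paper's system.

```python
# Sketch: advance reservation over discretized time slots. A job asking to
# move `volume` at `rate` is placed in the earliest feasible window that
# finishes by `deadline`, and the link's residual bandwidth is committed.

def try_reserve(free_bw, volume, rate, deadline):
    """Return (start, finish) slots, or None if the deadline cannot be met.

    free_bw -- list of residual bandwidth per time slot (mutated on success)
    """
    slots_needed = -(-volume // rate)          # ceil division
    for start in range(0, deadline - slots_needed + 1):
        window = range(start, start + slots_needed)
        if all(free_bw[t] >= rate for t in window):
            for t in window:                   # commit the reservation
                free_bw[t] -= rate
            return start, start + slots_needed
    return None
```

Accepted jobs are thus guaranteed to complete within their confirmed windows, and a rejection is known at submission time, which is exactly what lets users and meta-schedulers plan ahead.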

  6. Artificial algae algorithm with multi-light source for numerical optimization and applications.

    PubMed

    Uymaz, Sait Ali; Tezel, Gulay; Yel, Esra

    2015-12-01

    Artificial algae algorithm (AAA), one of the recently developed bio-inspired optimization algorithms, was introduced by inspiration from the living behaviors of microalgae. In AAA, the modification of the algal colonies, i.e. exploration and exploitation, is provided by a helical movement. In this study, AAA was modified by implementing multi-light-source movement, establishing the artificial algae algorithm with multi-light source (AAAML). In this new version, we propose the selection of a different light source for each dimension that is modified with the helical movement, for a stronger balance between exploration and exploitation. These light sources are selected by the tournament method and differ from one another, which gives different solutions in the search space. The best of these three light sources provides orientation toward the better region of the search space; diversity in the search space is obtained from the worst light source; and the other light source improves the balance. To assess the performance of AAA with the newly proposed operators (AAAML), experiments were performed on two different sets. First, the performance of AAA and AAAML was evaluated on the IEEE-CEC'13 benchmark set. The second set comprised the real-world optimization problems used in the IEEE-CEC'11. To verify the effectiveness and efficiency of the proposed algorithm, the results were compared with other state-of-the-art hybrid and modified algorithms. Experimental results showed that the multi-light source movement (MLS) increases the success of the AAA.
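The per-dimension light-source selection can be sketched independently of the rest of AAA: for each problem dimension, a tournament draws a few colony members at random and keeps the fittest as that dimension's guide. This is an illustrative sketch of the selection step only, with hypothetical function names, not the AAAML reference implementation.

```python
import random

# Sketch: tournament selection of one guiding "light source" per dimension,
# so different dimensions of the helical movement can be pulled toward
# different good solutions. Illustrative only; minimization assumed.

def tournament(fitnesses, k, rng):
    """Index of the best (lowest-cost) of k randomly drawn colony members."""
    contenders = rng.sample(range(len(fitnesses)), k)
    return min(contenders, key=fitnesses.__getitem__)

def pick_light_sources(fitnesses, dim, k=3, seed=0):
    """One independent tournament winner per problem dimension."""
    rng = random.Random(seed)
    return [tournament(fitnesses, k, rng) for _ in range(dim)]
```

Because each dimension runs its own tournament, a single colony member rarely dominates every coordinate of the movement, which is the mechanism the abstract credits for the improved exploration/exploitation balance.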

  7. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well the primary mirror segments can be phased. One process for phasing primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS), and DFS technology can be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation, and an elegant method of coarse phasing segmented mirrors. DFS accuracy depends upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. In essence, ADFS applies an angular dithering procedure to the extraction line and combines it with an error function that minimizes the phase term of the fitted signal.

  9. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709
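The clustering objective behind the NDSM approach can be made concrete: with a symmetric matrix of four-point-scale link weights between Kansei adjectives, a candidate clustering (such as one produced by the genetic algorithm) is scored by the total link weight it places inside clusters. A minimal sketch of that scoring function, with illustrative weights:

```python
# Sketch: score a cluster assignment against a numerical design structure
# matrix (NDSM) by summing the link weights of adjective pairs that share
# a cluster. The GA's job is to maximize this score.

def intra_cluster_weight(ndsm, assignment):
    """Sum of NDSM cell values for adjective pairs in the same cluster."""
    n = len(ndsm)
    return sum(ndsm[i][j]
               for i in range(n) for j in range(i + 1, n)
               if assignment[i] == assignment[j])
```

A grouping that keeps strongly linked adjectives together scores high, while one that splits them scores low, which is what lets the genetic algorithm discriminate between candidate Kansei clusterings.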

  10. Numerical Studies of the Robustness of the SRPF and DRPF Algorithms for the Control of Chaos when System Parameters Drift

    NASA Astrophysics Data System (ADS)

    Schroder, Kjell; Olsen, Thomas; Wiener, Richard

    2006-11-01

    Recursive Proportional Feedback (RPF) is an algorithm of great utility and ease of use for the control of chaotic systems. Control coefficients are determined from pre-control sampling of the system dynamics. We have adapted this method, in the spirit of the Extended Time-Delay Autosynchronization (ETDAS) method, to seek minimal change from each previous value. The two methods so derived, Simple Recursive Proportional Feedback (SRPF) and Doubly Recursive Proportional Feedback (DRPF), have been studied in numerical simulations to determine their robustness when system parameters, other than that used for feedback, drift over time. We present evidence of the range over which each algorithm displays robustness against drift. Rollins et al., Phys. Rev. E 47, R780 (1993). Socolar et al., Phys. Rev. E 50, 3245 (1994).
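The proportional-feedback family that RPF belongs to is easy to demonstrate on the logistic map x -> r x (1 - x): the control parameter r is nudged in proportion to the orbit's distance from the unstable fixed point x* = 1 - 1/r, with the gain chosen so the linearized closed loop has zero multiplier. This is a sketch of simple proportional feedback, not the SRPF/DRPF algorithms themselves.

```python
# Sketch: proportional-feedback control of chaos on the logistic map.
# The gain K zeroes the linearized closed-loop multiplier at the fixed
# point: f_x + K*f_r = 0, with f_x = 2 - r0 and f_r = x*(1 - x*).
# Control is applied only inside a small window around x*, as is usual
# for this class of methods.

def controlled_logistic(r0, x0, steps, window=0.05):
    x_star = 1.0 - 1.0 / r0
    K = (r0 - 2.0) / (x_star * (1.0 - x_star))
    x = x0
    for _ in range(steps):
        dr = K * (x - x_star) if abs(x - x_star) < window else 0.0
        x = (r0 + dr) * x * (1.0 - x)
    return x, x_star
```

With r0 = 3.9 the uncontrolled map is chaotic, yet an orbit started near x* is pinned to it within a few iterations; SRPF and DRPF refine this basic scheme by recursing on previous feedback values.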

  12. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  13. ORDMET: A General Algorithm for Constructing All Numerical Solutions to Ordered Metric Data

    ERIC Educational Resources Information Center

    McClelland, Gary; Coombs, Clyde H.

    1975-01-01

    ORDMET is applicable to structures obtained from additive conjoint measurement designs, unfolding theory, general Fechnerian scaling, types of multidimensional scaling, and ordinal multiple regression. A description is obtained of the space containing all possible numerical representations which can satisfy the structure, size, and shape of which…

  14. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  15. Data assimilation into a numerical equatorial ocean model. I. The model and the assimilation algorithm

    NASA Astrophysics Data System (ADS)

    Long, Robert Bryan; Thacker, William Carlisle

    1989-06-01

    Numerical modeling provides a powerful tool for the study of the dynamics of oceans and atmospheres. However, the relevance of modeling results can only be established by reference to observations of the system being modeled. Typical oceanic observation sets are sparse, asynoptic, of mixed type and limited reliability, generally inadequate in some respects, and redundant and inconsistent in others. An optimal procedure for interfacing such data sets with a numerical model is the so-called adjoint method. This procedure effectively assimilates the observations into a run of the numerical model by finding that solution to the model equations that best fits all observations made within some specified space-time interval. The method requires the construction of the adjoint of the numerical model, a process made practical for models of realistic complexity by the work of Thacker and Long. In the present paper, the first of two parts, we illustrate the application of Thacker and Long's approach by constructing a data-assimilating version of an equatorial ocean model incorporating the adjoint method. The model is subsequently run for 5 years to near-steady-state, and exhibits many of the features known to be characteristic of equatorial oceanic flows. Using the last 54 days of the run as a control, a set of simulated sea-level and subsurface-density observations are collected, then successfully assimilated to demonstrate that the procedure can recover the control run, given a generous amount of data. In part II we conduct a sequence of numerical experiments to explore the ability of more limited sets of observations to fix the state of the modeled ocean; in the process, we examine the potential value of sea-level data obtained via satellite altimetry.
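The adjoint method described above can be illustrated on a scalar linear model x_{n+1} = a x_n: a forward sweep stores the trajectory, a backward (adjoint) sweep accumulates the gradient of the data misfit with respect to the initial state, and gradient descent recovers the initial condition that best fits the observations. This is a toy analogue under those assumptions, not the equatorial ocean model itself.

```python
# Sketch: adjoint-method data assimilation for x_{n+1} = a * x_n.
# Cost J = 0.5 * sum_n (x_n - y_n)^2 over observations y_n.

def forward(x0, a, n):
    """Forward sweep: trajectory of n steps starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1])
    return xs

def misfit_and_gradient(x0, a, obs):
    """Cost and dJ/dx0 via one forward and one backward (adjoint) sweep."""
    xs = forward(x0, a, len(obs) - 1)
    J = 0.5 * sum((x - y) ** 2 for x, y in zip(xs, obs))
    lam = 0.0
    for x, y in zip(reversed(xs), reversed(obs)):
        lam = a * lam + (x - y)   # adjoint: dJ/dx_n = (x_n - y_n) + a*dJ/dx_{n+1}
    return J, lam

def assimilate(obs, a, x0_guess=0.0, lr=0.05, iters=500):
    """Gradient descent on the initial condition."""
    x0 = x0_guess
    for _ in range(iters):
        _, g = misfit_and_gradient(x0, a, obs)
        x0 -= lr * g
    return x0
```

The key property carries over to realistic models: one forward and one adjoint integration yield the full gradient regardless of how many control variables there are, which is what makes fitting a complex ocean model to asynoptic, mixed-type observations tractable.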

  16. On the modeling of equilibrium twin interfaces in a single-crystalline magnetic shape memory alloy sample. II: numerical algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Steinmann, Paul

    2016-05-01

    This is part II of this series of papers. The aim of the current paper is to solve the governing PDE system derived in part I numerically, so that the process of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To capture the demagnetization effect on the sample's response, the space surrounding the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of the configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole process of magnetic field- or stress-induced variant reorientation in the MSMA sample can be properly simulated.

  17. A novel feedback algorithm for simulating controlled dynamics and confinement in the advanced reversed-field pinch

    SciTech Connect

    Dahlin, J.-E.; Scheffel, J.

    2005-06-15

    In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities. Thus the scaling of energy confinement time with plasma current and density is improved substantially as compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing-mode-stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a structure related to persisting resistive g modes forms close to the reversal surface.

  18. Numerical Evaluation of Fluid Mixing Phenomena in Boiling Water Reactor Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

    Thermal-hydraulic design of the current boiling water reactor (BWR) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the core of the Innovative Water Reactor for Flexible Fuel Cycle (FLWR), an actual-size test of an embodiment of its design would therefore be required to confirm or modify such correlations. A method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is thus desirable, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code based on an advanced interface tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. In this paper, we first verify the TPFIT code by comparing it with existing two-channel air-water mixing experimental results. Second, the TPFIT code was applied to the simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing was evaluated using the detailed numerical simulation data. The data indicate that the pressure difference between fluid channels is responsible for the fluid mixing, and thus the effects of both the time-averaged pressure difference and its fluctuations must be incorporated in the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is found to be relatively low.

  19. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
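The robustness claimed above can be illustrated with a minimal trust-region loop driven by deliberately inexact gradients. This is a hedged sketch on a simple quadratic test problem, not the paper's algorithm or test set: the point is that the ratio test between predicted and actual decrease keeps the iteration convergent even with 40% relative gradient error.

```python
import numpy as np

# Trust-region iteration with a 40% relative gradient error.
# Test function and all constants are illustrative.
rng = np.random.default_rng(1)
A = np.diag([1.0, 10.0])               # simple convex test problem

def f(x):
    return 0.5 * x @ A @ x

def inexact_grad(x, rel_err=0.4):
    g = A @ x
    e = rng.standard_normal(2)
    e *= rel_err * np.linalg.norm(g) / np.linalg.norm(e)
    return g + e                       # ||error|| = 0.4 * ||true gradient||

x = np.array([3.0, 2.0])
radius = 1.0
for _ in range(2000):
    if f(x) < 1e-12:                   # converged; stop before underflow
        break
    g = inexact_grad(x)
    step = -radius * g / np.linalg.norm(g)   # steepest-descent trial step
    pred = -g @ step                          # predicted (linear) decrease
    actual = f(x) - f(x + step)
    rho = actual / pred
    if rho > 0.1:                             # enough agreement: accept
        x = x + step
        if rho > 0.75:
            radius = min(2 * radius, 1.0)     # very successful: expand
    else:
        radius *= 0.5                         # poor agreement: shrink
```

Since the gradient error is bounded at 40% of the gradient norm, the perturbed gradient is always a descent direction, and the adaptive radius absorbs the inaccuracy, consistent with the convergence behavior the abstract reports.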

  20. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.
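The block elimination underlying substructuring can be sketched for a 2x2 block system via the Schur complement. This plain textbook version assumes the pivot block is invertible; the paper's contribution is precisely an elimination procedure that avoids interchanges when it is not.

```python
import numpy as np

# Block Gauss elimination for [[A, B], [C, D]] [x; y] = [f; g]
# via the Schur complement S = D - C A^{-1} B. Matrices are random
# illustrations with a diagonal boost to keep the pivot well conditioned.
rng = np.random.default_rng(2)
n1, n2 = 3, 2
A = rng.standard_normal((n1, n1)) + 3 * np.eye(n1)   # pivot block
B = rng.standard_normal((n1, n2))
C = rng.standard_normal((n2, n1))
D = rng.standard_normal((n2, n2)) + 3 * np.eye(n2)
f = rng.standard_normal(n1)
g = rng.standard_normal(n2)

# Eliminate x, solve the reduced system for y, then back-substitute.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = D - C @ Ainv_B                      # Schur complement
y = np.linalg.solve(S, g - C @ Ainv_f)
x = Ainv_f - Ainv_B @ y

# Residual against the monolithic system.
K = np.block([[A, B], [C, D]])
residual = K @ np.concatenate([x, y]) - np.concatenate([f, g])
```

In a substructured finite element solve, A and D would be the subdomain stiffness blocks and S the interface operator; the indefinite Stokes and Helmholtz cases in the abstract are exactly where the no-interchange variant matters.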

  1. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced without noticeably sacrificing matrix sparsity or memory consumption. The unified fast scattering algorithm for dielectric random rough surfaces reduces asymptotically to the PEC case as the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed.
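The sparsification step can be illustrated in miniature. The sketch below uses a one-level Haar transform as a stand-in for the Coiflets of the abstract, applied to a smooth kernel that stands in for an impedance matrix: transforming to the wavelet basis concentrates energy, after which small entries can be thresholded with little loss of accuracy.

```python
import numpy as np

# Wavelet sparsification of a dense, smooth "impedance-like" matrix.
# Haar is used here only for simplicity; the kernel is illustrative.
def haar_matrix(n):
    # Orthonormal one-level Haar transform (n must be even).
    W = np.zeros((n, n))
    for i in range(n // 2):
        W[i, 2 * i] = W[i, 2 * i + 1] = 1 / np.sqrt(2)          # averages
        W[n // 2 + i, 2 * i] = 1 / np.sqrt(2)                   # details
        W[n // 2 + i, 2 * i + 1] = -1 / np.sqrt(2)
    return W

n = 64
x = np.linspace(0, 1, n)
Z = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth kernel

W = haar_matrix(n)
Zw = W @ Z @ W.T                                    # transform both sides
thresh = 1e-3 * np.abs(Zw).max()
Zs = np.where(np.abs(Zw) > thresh, Zw, 0.0)         # drop small entries

sparsity = 1 - np.count_nonzero(Zs) / n**2
err = np.linalg.norm(W.T @ Zs @ W - Z) / np.linalg.norm(Z)
```

A sizable fraction of the entries drop out while the reconstruction error stays far below the threshold level, which is the mechanism the SFWT exploits at scale.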

  2. Challenges in Mapping Graph Exploration Algorithms on Advanced Multi-core Processors

    SciTech Connect

    Villa, Oreste; Scarpazza, Daniele P.; Petrini, Fabrizio; Fernandez-Peinador, Juan

    2007-07-17

    Numerous applications require the exploration of large graphs. The problem has been tackled in the past through a variety of solutions, based either on commodity processors or on dedicated hardware. Processors based on multiple cores, like the Cell Broadband Engine (CBE), are gaining popularity as basic building blocks for high performance clusters. Nevertheless, no studies have yet investigated how effectively the CBE architecture can explore large graphs, or how its performance compares with other architectural solutions. In this paper, we describe the challenges and design choices involved in mapping a breadth-first search (BFS) algorithm onto the CBE. Our implementation has been driven by an accurate performance model that has allowed seamless coordination between on-chip communication, off-chip memory access, and computation. Preliminary results obtained on a pre-production prototype running at 2.4 GHz show almost linear speedups when using multiple synergistic processing units, and impressive levels of performance compared to other processors.
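The algorithmic core being mapped is level-synchronous BFS. A serial sketch makes the per-level structure explicit: each frontier is expanded in bulk, which is the part a CBE implementation distributes across synergistic processing units (the tiny graph here is illustrative).

```python
# Level-synchronous breadth-first search: the frontier of the current
# level is expanded wholesale to produce the next frontier.
def bfs_levels(adj, source):
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:              # expand the whole level in bulk
            for v in adj[u]:
                if v not in dist:       # first visit fixes the distance
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The bulk frontier expansion is what makes the algorithm amenable to the coordinated on-chip communication and off-chip memory access the abstract describes.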

  3. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    During recent years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched in 2007 the CleanSeaNet (CSN) project, a pan-European satellite-based oil monitoring service. EDISOFT, a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors, and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on automating the oil spill detection processing chain, integrating auxiliary data, like wind information, together with image and geometry analysis techniques. 
The synergy between these different objectives (R&D versus operational) allowed EDISOFT to develop oil spill detection software that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST

  4. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    ERIC Educational Resources Information Center

    Sella, Francesco; Sader, Elie; Lolliot, Simon; Cohen Kadosh, Roi

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depends specifically on basic numerical representations. In this study mathematicians and nonmathematicians performed a basic…

  5. Numerical simulation of steady and unsteady viscous flow in turbomachinery using pressure based algorithm

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Ho, Y.; Basson, A.

    1993-01-01

    The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation, and is applicable to both viscous and inviscid flows. The value of this artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, and the upstream rotor wake is specified from the experimental data. 
The results show that the stator potential effects have appreciable influence on the upstream rotor wake

  6. An efficient algorithm for numerical computations of continuous densities of states

    NASA Astrophysics Data System (ADS)

    Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.

    2016-06-01

    In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the dependence of the results on the algorithm parameters is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling of the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results agree perfectly with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, so far out of reach even on supercomputers with importance sampling approaches due to the strong metastabilities that develop at the pseudo-critical coupling of the system, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first order phase transitions. 
Other situations where the method is expected to be superior to importance sampling techniques are pointed
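For intuition, the classic discrete Wang-Landau recipe, of which the LLR approach above is a histogram-free relative for continuous systems, can be sketched on a toy with a known answer: N independent spins whose energy is the number of up spins, so the exact density of states is the binomial coefficient C(N, E). Everything below is illustrative, not the paper's algorithm.

```python
import math
import random

# Wang-Landau estimate of ln g(E) for N independent spins, E = #up spins.
# A fixed halving schedule replaces the usual flatness check, loosely
# mirroring the deterministic refinement used in LLR-style methods.
random.seed(3)
N = 10
lng = [0.0] * (N + 1)          # running estimate of ln g(E)
state = [0] * N                # all spins down
E = 0
f = 1.0                        # refinement increment in ln g

while f > 1e-5:
    for _ in range(20000):
        i = random.randrange(N)
        dE = 1 if state[i] == 0 else -1
        # Flat-histogram rule: accept with min(1, g(E) / g(E + dE)).
        if math.log(random.random()) < lng[E] - lng[E + dE]:
            state[i] ^= 1
            E += dE
        lng[E] += f            # penalize the (possibly new) current level
    f /= 2.0                   # refine the increment

lng = [v - lng[0] for v in lng]                    # normalize: g(0) = 1
exact = [math.log(math.comb(N, k)) for k in range(N + 1)]
```

Because the estimate is penalized wherever the walker lingers, the random walk is driven toward a flat visitation of energy levels, and ln g converges toward the exact binomial values across all levels simultaneously.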

  7. Advanced Hydraulic Tomography Analysis Strategies--A Numerical Study based on Field Observations

    NASA Astrophysics Data System (ADS)

    Tso, C. M.; Yeh, T. J.

    2013-12-01

    This report presents a discussion of some unexplored issues pertaining to the application of hydraulic tomography to interpret pumping test data collected in the field. Using numerical experiments, we probe several new strategies for analyzing pumping test results in multi-layer aquifers. First, we study the averaging of heads over packer intervals of a wellbore: how does the length of the packers reduce the resolution of the estimated hydraulic conductivity (K) field? Next, we investigate the effect of conditioning the estimated K field on hard data (a.k.a. primary information, or K measurements). Does the conditioning constrain the solution better and, if so, by how much? Then we examine the effect of the initial guess of the K field on the inversion results. Currently, our hydraulic tomography approaches (SSLE, Yeh and Liu (2000), and SimSLE, Xiang et al. (2009)) assume a homogeneous K field as the default initial guess. What if we use a random field as the initial guess? What about assigning different zones in the domain and designating a different homogeneous initial guess value for each of them? Finally, updating and storing the covariance matrix heavily consumes computation time during the inversion process and can sometimes be prohibitive when solving large problems. In fact, it is often the most time-consuming part of the hydraulic tomography analysis. We study the effects on the hydraulic tomography results of (1) whether the covariance matrix is updated after each iteration and (2) whether the full matrix or only its diagonal terms are stored. The investigation outlined above will shed light on the development of more effective and reliable hydraulic tomography analysis practices and algorithms.

  8. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The simultaneous solution of velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.

  9. A block matching-based registration algorithm for localization of locally advanced lung tumors

    SciTech Connect

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0
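The core loop of block matching can be sketched in miniature: small blocks from a "planning" image are matched by exhaustive search in an "on-treatment" image, and a robust consensus of the per-block displacements yields the global shift. The median consensus below is a simple stand-in for the paper's consistency selection and Procrustes step; the data and sizes are synthetic.

```python
import numpy as np

# Block matching between a reference image and a shifted copy of it.
rng = np.random.default_rng(5)
fixed = rng.standard_normal((64, 64))
true_shift = (3, -2)
moving = np.roll(fixed, true_shift, axis=(0, 1))    # shifted copy

def match_block(fixed, moving, r, c, b=8, search=5):
    # Best integer displacement of the b-by-b block at (r, c), by SSD.
    block = fixed[r:r + b, c:c + b]
    best, best_d = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = moving[r + dr:r + dr + b, c + dc:c + dc + b]
            ssd = np.sum((cand - block) ** 2)
            if ssd < best:
                best, best_d = ssd, (dr, dc)
    return best_d

# Blocks on a coarse grid well inside the image (away from roll wraparound).
disps = [match_block(fixed, moving, r, c)
         for r in range(16, 48, 8) for c in range(16, 48, 8)]
shift = tuple(int(np.median(d)) for d in zip(*disps))   # robust consensus
```

In the clinical algorithm the blocks cover the tumor surface, matching is multiresolution, and the consensus is a full rigid transform rather than a pure translation, but the block-wise search and robust aggregation are the same ingredients.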

  10. A block matching-based registration algorithm for localization of locally advanced lung tumors

    PubMed Central

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm3), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left

  11. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was

  12. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

    NASA Technical Reports Server (NTRS)

    Weir, Kent A.; Wells, Eugene M.

    1990-01-01

    The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
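The two propagation methods compared above can be shown in miniature: an error covariance P pushed through a linear mapping x' = F x + w either exactly, via P' = F P Fᵀ + Q, or by Monte Carlo sampling. The matrices below are arbitrary illustrations, not an IMU model.

```python
import numpy as np

# Mapping-matrix vs. Monte Carlo covariance propagation through one step.
rng = np.random.default_rng(6)
n = 3
F = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # state transition
Q = 0.01 * np.eye(n)                                # process noise
P = np.diag([1.0, 0.5, 0.2])                        # initial covariance

P_map = F @ P @ F.T + Q                             # mapping-matrix method

# Monte Carlo: sample errors, push them through, estimate the covariance.
m = 200_000
x = rng.multivariate_normal(np.zeros(n), P, size=m)
w = rng.multivariate_normal(np.zeros(n), Q, size=m)
x_new = x @ F.T + w
P_mc = np.cov(x_new, rowvar=False)
```

The two estimates agree to within sampling error of order 1/sqrt(m), which is the kind of "good general agreement" the abstract reports between the methods.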

  13. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficiency of computational resources, it is very common to find, in the realm of numerical modeling in hydro-engineering, that regular linearization techniques have been applied to the nonlinear partial differential equations that arise in environmental flow studies. Sometimes this simplification is also accompanied by the omission of nonlinear terms in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, a traditional and widely used water quality model such as QUAL2K preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and computer power enhancement. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets. The first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of a non-linear equation system proved to be a flexible tool for handling the intrinsic non-linearity that emerges from the interactions occurring between the multiple variables involved in water quality studies. However because there is a strong data limitation in
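The inner task of such a nested scheme, solving a non-linear system by minimizing its squared residual with a genetic algorithm, can be sketched as follows. The system, the GA settings, and the real-coded operators are illustrative; in the paper's scheme an outer GA calibrating the stream parameters would wrap a loop like this one.

```python
import numpy as np

# Real-coded genetic algorithm minimizing the squared residual of a
# small non-linear system (circle intersected with a line).
rng = np.random.default_rng(7)

def residual(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0,     # circle of radius 2
                     x - y])                # line y = x

def fitness(v):
    r = residual(v)
    return -np.sum(r * r)                   # higher is better

pop = rng.uniform(-3, 3, size=(60, 2))
for _ in range(200):
    scores = np.array([fitness(v) for v in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:20]]                       # truncation selection
    kids = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        w = rng.random()
        child = w * a + (1 - w) * b                 # blend crossover
        child += 0.05 * rng.standard_normal(2)      # Gaussian mutation
        kids.append(child)
    pop = np.vstack([parents, kids])                # elitist replacement

best = pop[np.argmax([fitness(v) for v in pop])]
# best approaches a root, (sqrt(2), sqrt(2)) or (-sqrt(2), -sqrt(2)).
```

Keeping the parents in the next generation makes the scheme elitist, so the best residual is non-increasing, a property that matters when such a loop sits inside an outer calibration GA.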

  14. Recent advances in theoretical and numerical studies of wire array Z-pinch in the IAPCM

    SciTech Connect

    Ding, Ning; Zhang, Yang; Xiao, Delong; Wu, Jiming; Huang, Jun; Yin, Li; Sun, Shunkai; Xue, Chuang; Dai, Zihuan; Ning, Cheng; Shu, Xiaojian; Wang, Jianguo; Li, Hua

    2014-12-15

    Fast Z-pinch has produced the most powerful X-ray radiation source in the laboratory and also shows the possibility to drive inertial confinement fusion (ICF). Recent advances in wire-array Z-pinch research at the Institute of Applied Physics and Computational Mathematics are presented in this paper. A typical wire array Z-pinch process has three phases: wire plasma formation and ablation, implosion and the MRT instability development, stagnation and radiation. A mass injection model with azimuthal modulation coefficient is used to describe the wire initiation, and the dynamics of ablated plasmas of wire-array Z-pinches in (r, θ) geometry is numerically studied. In the implosion phase, a two-dimensional (r, z) three-temperature radiation MHD code MARED has been developed to investigate the development of the Magneto-Rayleigh-Taylor (MRT) instability. We also analyze the implosion modes of nested wire-arrays and find that the inner wire-array is hardly affected before the impact of the outer wire-array. When the plasma, accelerated to high speed during the implosion stage, stagnates on the axis, abundant x-ray radiation is produced. The energy spectrum of the radiation and the production mechanism are investigated. The computational x-ray pulse shows a reasonable agreement with the experimental result. We also suggest that using alloyed wire-arrays can increase multi-keV K-shell yield by decreasing the opacity of K-shell lines. In addition, we use a detailed circuit model to study the energy coupling between the generator and the Z-pinch implosion. Recently, we are concentrating on the problems of Z-pinch driven ICF, such as dynamic hohlraum and capsule implosions. Our numerical investigations on the interaction of wire-array Z-pinches on foam convertors show qualitative agreements with experimental results on the “Qiangguang I” facility. An integrated two-dimensional simulation of dynamic hohlraum driven capsule implosion provides us the physical insights of wire

  15. Recent advances in theoretical and numerical studies of wire array Z-pinch in the IAPCM

    NASA Astrophysics Data System (ADS)

    Ding, Ning; Zhang, Yang; Xiao, Delong; Wu, Jiming; Huang, Jun; Yin, Li; Sun, Shunkai; Xue, Chuang; Dai, Zihuan; Ning, Cheng; Shu, Xiaojian; Wang, Jianguo; Li, Hua

    2014-12-01

    Fast Z-pinches have produced the most powerful X-ray radiation sources in the laboratory and also show promise for driving inertial confinement fusion (ICF). Recent advances in wire-array Z-pinch research at the Institute of Applied Physics and Computational Mathematics are presented in this paper. A typical wire-array Z-pinch proceeds in three phases: wire plasma formation and ablation; implosion and the development of the magneto-Rayleigh-Taylor (MRT) instability; and stagnation and radiation. A mass-injection model with an azimuthal modulation coefficient is used to describe wire initiation, and the dynamics of the ablated plasmas of wire-array Z-pinches is studied numerically in (r, θ) geometry. For the implosion phase, a two-dimensional (r, z) three-temperature radiation MHD code, MARED, has been developed to investigate the development of the MRT instability. We also analyze the implosion modes of nested wire-arrays and find that the inner array is hardly affected before the impact of the outer array. When the plasma, accelerated to high speed during implosion, stagnates on the axis, abundant X-ray radiation is produced. The energy spectrum of this radiation and its production mechanism are investigated, and the computed X-ray pulse shows reasonable agreement with the experimental result. We also suggest that alloyed wire-arrays can increase the multi-keV K-shell yield by decreasing the opacity of K-shell lines. In addition, we use a detailed circuit model to study the energy coupling between the generator and the Z-pinch implosion. Recently, we have concentrated on problems of Z-pinch-driven ICF, such as dynamic hohlraums and capsule implosions. Our numerical investigations of the interaction of wire-array Z-pinches with foam converters show qualitative agreement with experimental results from the "Qiangguang I" facility. An integrated two-dimensional simulation of dynamic hohlraum driven capsule implosion provides us the physical insights of wire

  16. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-10-01

    In this paper, we focus on how various parameters of the niching genetic algorithm inversion procedure influence the results, including the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber (F-K) integration method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that a zeroth-lag cross-correlation objective function yields faster convergence and higher precision than the other objective functions. The number of models in each subpopulation strongly influences the convergence rate and computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly controls how multiple extrema are separated in the inversion. We also compare results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.
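    The zeroth-lag cross-correlation objective the authors favor can be sketched as follows; this is a minimal illustration with synthetic stand-in waveforms, not the paper's implementation:

```python
import numpy as np

def zero_lag_cc(obs, syn):
    """Normalized zero-lag cross-correlation between an observed and a
    synthetic waveform; 1.0 indicates a perfect waveform match."""
    obs = np.asarray(obs, dtype=float)
    syn = np.asarray(syn, dtype=float)
    den = np.linalg.norm(obs) * np.linalg.norm(syn)
    return float(obs @ syn) / den if den > 0 else 0.0

# A model that nearly reproduces the data scores close to 1:
t = np.linspace(0.0, 4.0 * np.pi, 200)
observed = np.sin(t)
synthetic = np.sin(t + 0.05)   # waveform from a slightly wrong model
score = zero_lag_cc(observed, synthetic)
```

    In an inversion, the GA would maximize this score (or minimize 1 − score) over candidate crustal models.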

  17. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-09-01

    In this paper, we focus on how various parameters of the niching genetic algorithm inversion procedure influence the results, including the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber (F-K) integration method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that a zeroth-lag cross-correlation objective function yields faster convergence and higher precision than the other objective functions. The number of models in each subpopulation strongly influences the convergence rate and computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly controls how multiple extrema are separated in the inversion. We also compare results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.

  18. SOLA-DM: A numerical solution algorithm for transient three-dimensional flows

    SciTech Connect

    Wilson, T.L.; Nichols, B.D.; Hirt, C.W.; Stein, L.R.

    1988-02-01

    SOLA-DM is a three-dimensional time-explicit, finite-difference, Eulerian, fluid-dynamics computer code for solving the time-dependent incompressible Navier-Stokes equations. The solution algorithm (SOLA) evolved from the marker-and-cell (MAC) method, and the code is highly vectorized for efficient performance on a Cray computer. The computational domain is discretized by a mesh of parallelepiped cells in either Cartesian or cylindrical geometry. The primary hydrodynamic variables for approximating the solution of the momentum equations are cell-face-centered velocity components and cell-centered pressures. Spatial accuracy is selected by the user to be first or second order; the time differencing is first-order accurate. The incompressibility condition results in an elliptic equation for pressure that is solved by a conjugate gradient method. Boundary conditions of five general types may be chosen: free-slip, no-slip, continuative, periodic, and specified pressure. In addition, internal mesh specifications to model obstacles and walls are provided. SOLA-DM also solves the equations for discrete particle dynamics, permitting the transport of marker particles or other solid particles through the fluid to be modeled. 7 refs., 7 figs.
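    The elliptic pressure equation that SOLA-DM solves by conjugate gradient can be illustrated in one dimension with a minimal matrix-free CG solve. This is only a sketch of the linear-algebra step; the actual code works on a 3-D staggered mesh:

```python
import numpy as np

def cg_poisson_1d(rhs, h, tol=1e-10, max_iter=500):
    """Matrix-free conjugate gradient for -p'' = rhs with homogeneous
    Dirichlet boundaries: a 1-D stand-in for the elliptic pressure
    equation arising from the incompressibility condition."""
    def apply_A(p):
        # Second-order finite-difference Laplacian (negated); boundary p = 0.
        Ap = 2.0 * p
        Ap[:-1] -= p[1:]
        Ap[1:] -= p[:-1]
        return Ap / h**2
    p = np.zeros_like(rhs)
    r = rhs - apply_A(p)
    d = r.copy()
    rs = float(r @ r)
    for _ in range(max_iter):
        Ad = apply_A(d)
        alpha = rs / float(d @ Ad)
        p += alpha * d
        r -= alpha * Ad
        rs_new = float(r @ r)
        if rs_new**0.5 < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

# Manufactured solution p(x) = sin(pi x) on a 63-point interior grid:
x = np.arange(1, 64) / 64.0
p = cg_poisson_1d(np.pi**2 * np.sin(np.pi * x), h=1.0 / 64.0)
```

    Because the discrete Laplacian is symmetric positive definite, CG converges in at most as many iterations as there are unknowns.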

  19. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease overall mass, which in turn leads to the optimization of every constitutive part of the plane. The task is even more delicate when the material used is a composite: a compromise must then be found between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on coupling the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). A genetic algorithm is used to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. Within the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  20. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease overall mass, which in turn leads to the optimization of every constitutive part of the plane. The task is even more delicate when the material used is a composite: a compromise must then be found between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on coupling the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). A genetic algorithm is used to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. Within the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  1. A Numerical Algorithm for Determining the Contact Stress of Circular Crowned Roller Compressed between Two Flat Plates

    NASA Astrophysics Data System (ADS)

    Horng, Thin-Lin

    The main purpose of this paper is to develop a numerical algorithm for determining the contact stress when a circular crowned roller is compressed between two plates. First, the deformation curve on a plate surface is derived using a contact-mechanics model. The contact stress distribution along the roller on the plate surface is then divided into three parts: from the center of contact to the edge, at the edge, and away from the contact line. The first part is calculated from elastic contact theory for the nominally stressed contact between the non-crowned portion of the roller and the plates; the second is obtained from the classical Hertzian solution for the contact between the crowned portion of the roller and the plates; and the third is modeled as an exponential decay. To overcome the limitation of half-space theory, which assumes a plate of infinite thickness, a weighting method is introduced to find the contact stress of a plate with finite thickness. Comparisons with various finite element results indicate that the algorithm derived in this paper can be reasonably accurate when a heavy displacement load is applied: under a heavy load the contact area is large, so the effect of stress concentration is smaller than under a light load.
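    The three-part partition of the stress profile can be caricatured as below. Every parameter value here is hypothetical, and a faithful implementation would also enforce continuity between the parts and the finite-thickness weighting described in the abstract:

```python
import numpy as np

def contact_stress(x, a_flat, b_edge, p_flat, p_peak, lam):
    """Toy piecewise profile in the spirit of the three-part algorithm
    (all parameters hypothetical): uniform nominal stress over the
    non-crowned flat region, a Hertzian semi-ellipse over the crowned
    edge, and an exponential tail beyond the contact line."""
    x = np.abs(np.asarray(x, dtype=float))
    p = np.zeros_like(x)
    flat = x <= a_flat
    edge = (x > a_flat) & (x <= a_flat + b_edge)
    tail = x > a_flat + b_edge
    p[flat] = p_flat
    xi = (x[edge] - a_flat) / b_edge
    p[edge] = p_peak * np.sqrt(np.clip(1.0 - xi**2, 0.0, None))
    p[tail] = 0.05 * p_peak * np.exp(-(x[tail] - a_flat - b_edge) / lam)
    return p

x = np.linspace(0.0, 3.0, 301)
p = contact_stress(x, a_flat=1.0, b_edge=0.5, p_flat=1.0, p_peak=1.3, lam=0.2)
```

    The edge region carries the stress concentration (peak above the nominal level), and the tail decays monotonically away from the contact line.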

  2. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization.

    PubMed

    Islam, Sk Minhazul; Das, Swagatam; Ghosh, Saurav; Roy, Subhrajit; Suganthan, Ponnuthurai Nagaratnam

    2012-04-01

    Differential evolution (DE) is one of the most powerful stochastic real-parameter optimizers of current interest. In this paper, we propose a new mutation strategy, a fitness-induced parent selection scheme for the binomial crossover of DE, and a simple but effective scheme for adapting two of its most important control parameters, with the objective of achieving improved performance. The new mutation operator, which we call DE/current-to-gr_best/1, is a variant of the classical DE/current-to-best/1 scheme. It uses the best of a group (whose size is q% of the population size) of randomly selected solutions from the current generation to perturb the parent (target) vector, unlike DE/current-to-best/1, which always picks the best vector of the entire population to perturb the target vector. In our modified framework of recombination, a biased parent selection scheme is incorporated by letting each mutant undergo the usual binomial crossover with one of the p top-ranked individuals from the current population, rather than with the target vector of the same index as in all other variants of DE. A DE variant obtained by integrating the proposed mutation, crossover, and parameter adaptation strategies with the classical DE framework (developed in 1995) is compared with two classical and four state-of-the-art adaptive DE variants over 25 standard numerical benchmarks taken from the IEEE Congress on Evolutionary Computation 2005 competition and special session on real-parameter optimization. Our comparative study indicates that the proposed schemes improve the performance of DE by a large margin, such that it attains statistical superiority over the state-of-the-art DE variants for a wide variety of test problems. Finally, we experimentally demonstrate that if one or more of our proposed strategies are integrated with existing powerful DE variants such as jDE and JADE, their performance can also be enhanced.
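    A minimal sketch of the DE/current-to-gr_best/1 mutation described above; the values of F and q are illustrative settings, not the paper's tuned parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def current_to_gr_best_1(pop, fit, i, F=0.8, q=15):
    """Perturb target vector i toward the best member of a random group
    whose size is q% of the population, plus a scaled difference of two
    other distinct random vectors."""
    n = len(pop)
    gsize = max(1, int(round(q / 100.0 * n)))
    group = rng.choice(n, size=gsize, replace=False)
    gr_best = group[np.argmin(fit[group])]       # best of the random group
    r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    return pop[i] + F * (pop[gr_best] - pop[i]) + F * (pop[r1] - pop[r2])

# Example on a sphere objective:
pop = rng.uniform(-5.0, 5.0, size=(20, 3))
fit = np.array([x @ x for x in pop])
mutant = current_to_gr_best_1(pop, fit, i=0)
```

    Unlike DE/current-to-best/1, the attractor is the best of a small random group rather than the population-wide best, which reduces the greediness of the search.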

  3. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed in which efficient generation of the neighbor list and maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel; the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data-loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computer with an nVidia GPGPU of compute capability 3.0. Operating system: Linux. Has the code been
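    The cell-list idea underlying the neighbor-list construction can be sketched on the CPU as follows. The paper's GPU version additionally achieves deterministic, atomics-free parallel construction, which this serial illustration does not attempt:

```python
import numpy as np

def cell_list_neighbors(pos, box, rcut):
    """Serial cell-list neighbor search in a cubic periodic box: bin
    particles into cells no smaller than rcut, then test only the 27
    adjacent cells, with minimum-image wrapping."""
    n = len(pos)
    ncell = max(1, int(box // rcut))             # cells at least rcut wide
    idx = (pos / (box / ncell)).astype(int) % ncell
    cells = {}
    for i, c in enumerate(map(tuple, idx)):
        cells.setdefault(c, []).append(i)
    neigh = []
    for i in range(n):
        found = set()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    c = ((idx[i, 0] + dx) % ncell,
                         (idx[i, 1] + dy) % ncell,
                         (idx[i, 2] + dz) % ncell)
                    for j in cells.get(c, []):
                        if j == i:
                            continue
                        d = pos[j] - pos[i]
                        d -= box * np.round(d / box)   # minimum image
                        if d @ d < rcut * rcut:
                            found.add(j)
        neigh.append(sorted(found))              # ordered list, as in the paper
    return neigh
```

    Binning reduces the pair search from O(n^2) to roughly O(n) for a fixed density, which is what makes large particle counts tractable.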

  4. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better-performing candidate protocols is conducted in the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that the top-performing protocols converged on temporally synchronized dosing, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning.
Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means
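    The GA search over inter-fraction timings can be caricatured with a toy surrogate objective in place of the calibrated EMT6/Ro simulation. The 17.5 h "resonant" gap in the surrogate is an assumption for illustration only, loosely inspired by the 17-18 h periodicity reported above:

```python
import random

random.seed(7)

def toy_tumour_response(gaps):
    """Hypothetical surrogate standing in for the calibrated EMT6/Ro
    model: penalizes deviation from an assumed 17.5 h resonant gap."""
    return sum((g - 17.5) ** 2 for g in gaps)

def ga_search(n_gaps=5, pop_size=30, gens=60, lo=10.0, hi=23.0):
    """Minimal GA: keep the top half, breed children by uniform
    crossover of two elite parents plus one Gaussian gene mutation."""
    pop = [[random.uniform(lo, hi) for _ in range(n_gaps)]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=toy_tumour_response)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            k = random.randrange(n_gaps)
            child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, 0.5)))
            children.append(child)
        pop = elite + children
    return min(pop, key=toy_tumour_response)

best = ga_search()
```

    In the study itself, each candidate evaluation is a full tumor-growth simulation rather than a closed-form surrogate; the GA machinery is otherwise of this general shape.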

  5. Advanced material modelling in numerical simulation of primary acetabular press-fit cup stability.

    PubMed

    Souffrant, R; Zietz, C; Fritsche, A; Kluess, D; Mittelmeier, W; Bader, R

    2012-01-01

    Primary stability of the artificial acetabular cups used in total hip arthroplasty is required for subsequent osteointegration and good long-term clinical results of the implant. Although closed-cell polymer foams represent an adequate bone substitute in experimental studies of primary stability, correct numerical modelling of this material depends on the parameter selection. The material parameters needed for crushable-foam plasticity behaviour were derived from numerical simulations matched to experimental tests of the polymethacrylimide raw material. Experimental primary-stability tests of acetabular press-fit cups, consisting of static shell assembly followed by pull-out and lever-out testing, were subsequently simulated using finite element analysis. The identified and optimised parameters allowed accurate numerical reproduction of the raw-material tests. The correlation between the experimental tests and the numerical simulation of primary implant stability depended on the value of the interference fit; nevertheless, the validated material model provides the opportunity for subsequent parametric numerical studies.

  6. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Mahmoud, Imbaby I.; El Tokhy, Mohamed S.

    2015-02-01

    X-ray spectroscopy is widely used for in-situ sample analysis, so accurate spectrum drawing and assessment are the main scope of this paper. A lithium-drifted silicon (Si(Li)) detector cooled with liquid nitrogen is used for signal extraction; the ADC has 12-bit resolution and a 5 MHz sampling rate. Several algorithms are implemented and run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz: signal preprocessing, signal separation and recovery, and spectrum drawing, with statistical measures used for their evaluation. Signal preprocessing comprises DC-offset correction and de-noising. DC-offset correction uses the minimum value of the radiation signal, while de-noising is implemented with a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT) and a Kalman filter. We observe that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas the CWT takes a much longer execution time. Moreover, three algorithms that correct for overlapping X-ray signals are presented: a 1D non-derivative peak search, a second-derivative peak search and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured and the algorithms are compared. The results confirm that the second-derivative peak search and the extrema algorithm both have very small errors compared with the 1D non-derivative peak search; however, the second-derivative peak search takes a much longer execution time. Therefore, the extrema algorithm gives the best overall results.
It has the advantage of recovering and
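    The second-derivative peak search exploits the fact that partially overlapping lines remain separated as distinct minima of the second derivative. A minimal sketch on a noise-free synthetic trace (not the paper's implementation, which would smooth real, noisy data first):

```python
import numpy as np

def second_derivative_peaks(y, threshold):
    """Flag local minima of the second derivative that dip below
    -threshold: overlapping peaks that blend together in y still
    produce distinct second-derivative minima."""
    d2 = np.gradient(np.gradient(y))
    return [i for i in range(1, len(y) - 1)
            if d2[i] < -threshold and d2[i] <= d2[i - 1] and d2[i] < d2[i + 1]]

# Two strongly overlapping Gaussian lines:
x = np.linspace(0.0, 10.0, 500)
y = np.exp(-(x - 4.5) ** 2 / 0.5) + 0.8 * np.exp(-(x - 5.8) ** 2 / 0.5)
found = second_derivative_peaks(y, threshold=1e-4)
```

    The threshold rejects shallow wiggles so that only genuine line cores are reported.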

  7. Thin film subsurface environments; Advanced X-ray spectroscopies and a novel Bayesian inference modeling algorithm

    NASA Astrophysics Data System (ADS)

    Church, Jonathan R.

    New condensed-matter metrologies are being used to probe ever smaller length scales. In support of the diverse field of materials research, synchrotron-based spectroscopies provide sub-micron spatial resolution and a breadth of photon wavelengths for scientific studies. In electronic materials, the thinnest layers in a complementary metal-oxide-semiconductor (CMOS) device have been reduced to just a few nanometers, raising concerns about layer uniformity, complete surface coverage, and interfacial quality. Deposition processes such as chemical vapor deposition (CVD) and atomic layer deposition (ALD) have been shown to deposit high-quality films at the requisite thicknesses. However, new materials beget new chemistries and, unfortunately, unwanted side-reactions and by-products. CVD/ALD tools and chemical precursors provided by our collaborators at Air Liquide employed these new chemistries, and films were deposited for which novel spectroscopic characterization methods were used. The second portion of the thesis focuses on fading and decomposing paint pigments in iconic artworks, with efforts directed towards understanding the micro-environments causing degradation. Hard X-ray photoelectron spectroscopy (HAXPES) and variable-kinetic-energy X-ray photoelectron spectroscopy (VKE-XPS) are advanced XPS techniques capable of elucidating both chemical environments and electronic band structure in sub-surface regions of electronic materials. HAXPES has been used to study the electronic band structure of a typical CMOS stack; it is shown that unexpected band alignments are associated with the presence of electronic charge near a buried interface. Additionally, a computational modeling algorithm, Bayes-Sim, was developed to reconstruct compositional depth profiles (CDPs) from VKE-XPS data sets; a subset algorithm also reconstructs CDPs from angle-resolved XPS data. Reconstructed CDPs produced by Bayes-Sim were most strongly correlated to the real

  8. Free Radical Addition Polymerization Kinetics without Steady-State Approximations: A Numerical Analysis for the Polymer, Physical, or Advanced Organic Chemistry Course

    ERIC Educational Resources Information Center

    Iler, H. Darrell; Brown, Amber; Landis, Amanda; Schimke, Greg; Peters, George

    2014-01-01

    A numerical analysis of the free radical addition polymerization system is described that provides those teaching polymer, physical, or advanced organic chemistry courses the opportunity to introduce students to numerical methods in the context of a simple but mathematically stiff chemical kinetic system. Numerical analysis can lead students to an…
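    The kind of exercise described, integrating the stiff kinetic equations directly rather than invoking the steady-state approximation, can be sketched with backward Euler and a Newton inner iteration. The rate constants below are hypothetical but of typical magnitude for free-radical polymerization:

```python
import numpy as np

# Hypothetical rate constants of typical magnitude (s^-1, L mol^-1 s^-1):
kd, f, kp, kt = 1e-5, 0.5, 1e3, 1e7

def rhs(y):
    I, M, R = y  # initiator, monomer, radical concentrations
    return np.array([-kd * I,                          # initiator decomposition
                     -kp * M * R,                      # propagation
                     2.0 * f * kd * I - 2.0 * kt * R**2])  # radical balance

def jac(y):
    I, M, R = y
    return np.array([[-kd, 0.0, 0.0],
                     [0.0, -kp * R, -kp * M],
                     [2.0 * f * kd, 0.0, -4.0 * kt * R]])

def backward_euler(y0, dt, n_steps):
    """Backward Euler with a Newton inner loop: stays stable at step
    sizes where explicit Euler oscillates or diverges on the fast
    radical equation."""
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        z = y.copy()
        for _ in range(20):  # Newton on g(z) = z - y - dt*rhs(z) = 0
            dz = np.linalg.solve(np.eye(3) - dt * jac(z), z - y - dt * rhs(z))
            z -= dz
            if np.max(np.abs(dz)) < 1e-12:
                break
        y = z
    return y

# Radical concentration settles onto sqrt(f*kd*[I]/kt) without assuming it:
I0, M0, R0 = 1e-2, 1.0, 0.0
I, M, R = backward_euler([I0, M0, R0], dt=1.0, n_steps=100)
```

    The stiffness is visible in the timescales: radicals relax in well under a second while the initiator decays over days, which is exactly why an implicit method (or a dedicated stiff solver) is the natural tool.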

  9. Advanced numerical methods for three dimensional two-phase flow calculations

    SciTech Connect

    Toumi, I.; Caruge, D.

    1997-07-01

    This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These are finite-volume numerical methods based on approximate Riemann solver concepts to define the convective fluxes in terms of mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. The solution scheme uses the Riemann problem solution to define backward and forward differencing for approximating spatial derivatives, and the approximate Riemann solver is constructed by an extension of Roe's method, which has been used successfully to solve the gas dynamics equations. As long as the two-fluid model is hyperbolic, this numerical method is very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock-tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss improvements made to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Results are presented for Pressurised Water Reactors, concerning upper-plenum calculations and a steady-state flow in the core with evaluation of the rod-bow effect. In practice, these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate, non-oscillating solutions for two-phase flow calculations.
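    The flavor of a Roe-type approximate Riemann solver can be shown on the 1-D inviscid Burgers equation, for which the Roe average reduces to the arithmetic mean of the interface states. This is a generic stand-in for the paper's two-fluid scheme, not its actual method:

```python
import numpy as np

def roe_flux(uL, uR):
    """Roe-type flux for Burgers' equation u_t + (u^2/2)_x = 0;
    the Roe-averaged wave speed is a = (uL + uR) / 2."""
    a = 0.5 * (uL + uR)
    return 0.25 * (uL**2 + uR**2) - 0.5 * np.abs(a) * (uR - uL)

def step(u, dt, dx):
    """One first-order finite-volume update on a periodic grid."""
    f = roe_flux(u, np.roll(u, -1))          # flux through face i+1/2
    return u - dt / dx * (f - np.roll(f, 1))

# A right-moving shock (u = 1 behind, 0 ahead) travels at speed 1/2:
n = 200
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)
dx, dt = 1.0 / n, 0.4 / n                    # CFL = 0.4
for _ in range(100):
    u = step(u, dt, dx)
```

    The Rankine-Hugoniot speed (uL + uR)/2 = 0.5 moves the front 20 cells in the simulated time, and the conservative finite-volume form preserves the total of u exactly.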

  10. Clinical accuracy of a continuous glucose monitoring system with an advanced algorithm.

    PubMed

    Bailey, Timothy S; Chang, Anna; Christiansen, Mark

    2015-03-01

    We assessed the performance of a modified Dexcom G4 Platinum system with an advanced algorithm, in comparison with frequent venous samples measured on a laboratory reference (YSI) during a clinic session and with self-monitored blood glucose (SMBG) during home use. Fifty-one subjects with diabetes were enrolled in a prospective multicenter study. Subjects wore one sensor for 7 days and participated in one 12-hour in-clinic session on day 1, 4, or 7, during which YSI reference venous glucose was collected every 15 minutes and capillary SMBG tests every 30 minutes. Carbohydrate consumption and insulin dosing and timing were manipulated to obtain data in low and high glucose ranges. In comparison with the laboratory reference method (n = 2,263), the system gave mean and median absolute relative differences (ARDs) of 9.0% and 7.0%, respectively. The mean absolute difference for CGM was 6.4 mg/dL when the YSI values were in the hypoglycemic range (≤ 70 mg/dL). The percentage of readings in the clinically accurate Clarke error grid zone A was 92.4%, and in the benign-error zone B, 7.1%. The majority of the sensors (73%) had an aggregated MARD against YSI of ≤ 10%. The MARD of CGM versus SMBG during home use was 11.3%. The study showed that the point and rate accuracy, clinical accuracy, reliability, and consistency over the duration of wear and across glycemic ranges were superior to those of current commercial real-time CGM systems. The performance of this CGM approaches that of a self-monitoring blood glucose meter in a real-use environment. PMID:25370149
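    The reported mean/median ARD and the hypoglycemic-range absolute-difference metrics are straightforward to compute; a sketch with toy paired readings (values illustrative only, not study data):

```python
import numpy as np

def ard_stats(cgm, ref):
    """Mean and median absolute relative difference (ARD, %) between
    paired CGM and reference glucose readings."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    ard = 100.0 * np.abs(cgm - ref) / ref
    return float(ard.mean()), float(np.median(ard))

def hypo_mad(cgm, ref, cutoff=70.0):
    """Mean absolute difference (mg/dL) restricted to reference values
    in the hypoglycemic range, where absolute rather than relative
    error is conventionally reported."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    mask = ref <= cutoff
    return float(np.abs(cgm - ref)[mask].mean())

# Toy paired readings in mg/dL (illustrative only):
mean_ard, median_ard = ard_stats([110, 95, 160, 250], [100, 100, 150, 240])
```

    Switching from relative to absolute error below ~70 mg/dL avoids inflating percentages when the denominator is small, which is the convention behind the 6.4 mg/dL figure quoted above.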

  11. Clinical Accuracy of a Continuous Glucose Monitoring System With an Advanced Algorithm

    PubMed Central

    Bailey, Timothy S.; Chang, Anna; Christiansen, Mark

    2014-01-01

    Background: We assessed the performance of a modified Dexcom G4 Platinum system with an advanced algorithm, in comparison with frequent venous samples measured on a laboratory reference (YSI) during a clinic session and with self-monitored blood glucose (SMBG) during home use. Methods: Fifty-one subjects with diabetes were enrolled in a prospective multicenter study. Subjects wore one sensor for 7 days and participated in one 12-hour in-clinic session on day 1, 4, or 7, during which YSI reference venous glucose was collected every 15 minutes and capillary SMBG tests every 30 minutes. Carbohydrate consumption and insulin dosing and timing were manipulated to obtain data in low and high glucose ranges. Results: In comparison with the laboratory reference method (n = 2,263), the system gave mean and median absolute relative differences (ARDs) of 9.0% and 7.0%, respectively. The mean absolute difference for CGM was 6.4 mg/dL when the YSI values were in the hypoglycemic range (≤ 70 mg/dL). The percentage of readings in the clinically accurate Clarke error grid zone A was 92.4%, and in the benign-error zone B, 7.1%. The majority of the sensors (73%) had an aggregated MARD against YSI of ≤ 10%. The MARD of CGM versus SMBG during home use was 11.3%. Conclusions: The study showed that the point and rate accuracy, clinical accuracy, reliability, and consistency over the duration of wear and across glycemic ranges were superior to those of current commercial real-time CGM systems. The performance of this CGM approaches that of a self-monitoring blood glucose meter in a real-use environment. PMID:25370149

  12. Numerous Numerals.

    ERIC Educational Resources Information Center

    Henle, James M.

    This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…
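    One of the pamphlet's topics, numeration in a negative base, is easy to demonstrate. The following conversion routine is an editor's illustration, not taken from the pamphlet:

```python
def to_negabase(n, b):
    """Convert integer n to its digit string in negative base b (b <= -2),
    e.g. base -2 ("negabinary"); every integer, positive or negative,
    gets a representation without a sign."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, b)
        if r < 0:          # force the remainder into 0 .. |b|-1
            r -= b
            n += 1
        digits.append(str(r))
    return "".join(reversed(digits))
```

    For example, 6 in base -2 is 11010, since 16 - 8 - 2 = 6; negative numbers need no minus sign, which is the charm of such systems.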

  13. Multi-objective optimization of combined Brayton and inverse Brayton cycles using advanced optimization algorithms

    NASA Astrophysics Data System (ADS)

    Venkata Rao, R.; Patel, Vivek

    2012-08-01

    This study explores the use of teaching-learning-based optimization (TLBO) and artificial bee colony (ABC) algorithms for determining the optimum operating conditions of combined Brayton and inverse Brayton cycles. Maximization of the thermal efficiency and of the specific work of the system are taken as objective functions and are treated simultaneously in a multi-objective optimization. The upper-cycle pressure ratio and the bottom-cycle expansion pressure of the system are the design variables. An application example is presented to demonstrate the effectiveness and accuracy of the proposed algorithms. The optimization results are validated against those obtained with a genetic algorithm (GA) and particle swarm optimization (PSO) on the same example, and the proposed algorithms yield improved results. The effects of varying the algorithm parameters on convergence and on the fitness values of the objective functions are also reported.
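    One TLBO iteration (teacher phase, then learner phase) can be sketched for a single-objective minimization. The paper's multi-objective treatment and its thermodynamic cycle model are beyond this illustration, which uses a generic test function:

```python
import numpy as np

rng = np.random.default_rng(3)

def tlbo_step(pop, f):
    """One TLBO iteration for minimization: teacher phase (move toward
    the best member, away from the class mean) then learner phase
    (pairwise learning), accepting only improvements."""
    n, d = pop.shape
    scores = np.array([f(x) for x in pop])
    teacher = pop[np.argmin(scores)].copy()
    Tf = rng.integers(1, 3)                     # teaching factor, 1 or 2
    cand = pop + rng.random((n, d)) * (teacher - Tf * pop.mean(axis=0))
    for i in range(n):
        if f(cand[i]) < scores[i]:
            pop[i] = cand[i]
    for i in range(n):                          # learner phase
        j = int(rng.integers(n))
        if j == i:
            continue
        if f(pop[j]) < f(pop[i]):
            c = pop[i] + rng.random(d) * (pop[j] - pop[i])
        else:
            c = pop[i] + rng.random(d) * (pop[i] - pop[j])
        if f(c) < f(pop[i]):
            pop[i] = c
    return pop

def sphere(x):
    return float(x @ x)

pop = rng.uniform(-5.0, 5.0, size=(20, 4))
for _ in range(50):
    pop = tlbo_step(pop, sphere)
best = min(sphere(x) for x in pop)
```

    A notable design feature of TLBO is that, apart from population size and iteration count, it has no algorithm-specific control parameters to tune.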

  14. A review of recent advances in numerical simulations of microscale fuel processor for hydrogen production

    NASA Astrophysics Data System (ADS)

    Holladay, J. D.; Wang, Y.

    2015-05-01

Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer's small size, numerical simulations are critical to understand heat and mass transfer phenomena occurring in the systems and to help guide further improvements. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilized methanol as the fuel due to methanol's low reforming temperature and high conversion, although there are several methane-fueled systems. Increased computational power and more complex codes have led to improved accuracy of numerical simulations. Initial models focused on the reformer, while more recently the simulations began including other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next generation of systems. The systems reviewed included plate reactors, microchannel reactors, and annulus reactors for both wash-coated and packed bed systems.

  15. Recent advances in numerical simulation and control of asymmetric flows around slender bodies

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Wong, Tin-Chee; Sharaf, Hazem H.; Liu, C. H.

    1992-01-01

The problems of asymmetric flow around slender bodies and its control are formulated using the unsteady, compressible, thin-layer or full Navier-Stokes equations, which are solved using an implicit, flux-difference splitting, finite-volume scheme. The problem is numerically simulated for both locally-conical and three-dimensional flows. The numerical applications include studies of the effects of relative incidence, Mach number and Reynolds number on the flow asymmetry. For the control of flow asymmetry, the numerical simulations cover passive and active control methods. For the passive control, the effectiveness of vertical fins placed in the leeward plane of geometric symmetry and side strakes with different orientations is studied. For the active control, the effectiveness of normal and tangential flow injection and surface heating, and a combination of these methods, is studied.

  16. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  17. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    SciTech Connect

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  18. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  19. Simulations of emissivity in passive microwave remote sensing with three-dimensional numerical solutions of Maxwell equations and fast algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lin

In the first part of the work, we developed code for large-scale computation to solve the 3-dimensional microwave scattering problem. Maxwell integral equations are solved using MoM with RWG basis functions in conjunction with fast computation algorithms. Cost-effective parallel and distributed simulation was implemented on a low-cost PC cluster consisting of 32 processors connected to a fast Ethernet switch. More than a million surface current unknowns were solved at unprecedented speeds. Accurate simulations of emissivities and bistatic coefficients from ocean and soil were achieved. An exponential correlation function and an ocean spectrum are implemented for generating soil and ocean surfaces, which have fine-scale features with large rms slope. The results were justified by comparison with numerical results from the original code, which is based on pulse basis functions, with analytic methods such as SPM, and with experiments. In the second part of the work, fully polarimetric microwave emissions from wind-generated foam-covered ocean surfaces were investigated. The foam is treated as densely packed air bubbles coated with a thin seawater coating. The absorption, scattering and extinction coefficients were calculated by Monte Carlo simulations of solutions of Maxwell equations for a collection of coated particles. The effects of boundary roughness of ocean surfaces were included by using the second-order small perturbation method (SPM) describing the reflection coefficients between foam and ocean. An empirical wave-number spectrum was used to represent the small-scale wind-generated sea surfaces. The theoretical results of four Stokes brightness temperatures with typical parameters of foam in passive remote sensing at 10.8 GHz, 19.0 GHz and 36.5 GHz were illustrated. The azimuth variations of polarimetric brightness temperature were calculated. Emission with various wind speeds and foam layer thicknesses was studied.
The results were also compared

  20. A review of recent advances of numerical simulations of microscale fuel processors for hydrogen production

    SciTech Connect

    Holladay, Jamelyn D.; Wang, Yong

    2015-05-01

Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer's small size, numerical simulations are critical to understand heat and mass transfer phenomena occurring in the systems. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilized methanol as the fuel due to methanol's low reforming temperature and high conversion, although there are several methane-fueled systems. As computational power has decreased in cost and increased in availability, the codes have increased in complexity and accuracy. Initial models focused on the reformer, while more recently the simulations began including other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next generation of systems. The systems reviewed included plate reactors, microchannel reactors, and annulus reactors for both wash-coated and packed bed systems.

  1. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    NASA Technical Reports Server (NTRS)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
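The sigma-based jump test flags a jump when the latest change in total flash rate exceeds a multiple of the standard deviation of recent rate changes. A hedged sketch of that idea; the window handling and the sample flash rates are illustrative, not the operational GLM configuration:

```python
# Hedged sketch of a 2-sigma-style lightning jump test: flag a jump when
# the most recent change in total flash rate exceeds twice the standard
# deviation of the preceding rate changes.
from statistics import pstdev

def is_lightning_jump(flash_rates, sigma_mult=2.0):
    """flash_rates: total flash rate per time step (e.g. flashes/min)."""
    changes = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    if len(changes) < 3:        # need some history to estimate sigma
        return False
    history, latest = changes[:-1], changes[-1]
    return latest > sigma_mult * pstdev(history)
```

For example, a storm ramping from ~10-12 flashes/min to 30 flashes/min in one step trips the test, while steady rates do not.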

  2. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
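The Dice similarity coefficient used to score each PET-AS method is straightforward to compute on binary masks. A minimal sketch; representing masks as sets of voxel indices is an illustrative choice:

```python
# Minimal sketch: Dice similarity coefficient (DSC) between a segmented
# region and the known true contour, as sets of voxel indices.
def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 = perfect overlap."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0              # two empty masks agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))
```

Two masks sharing half their voxels, e.g. `{1,2,3,4}` and `{3,4,5,6}`, give a DSC of 0.5.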

  3. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology. PMID:27273293

  4. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  5. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank-Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
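The subcycling idea, advancing fast-moving degrees of freedom with smaller substeps inside one global step, can be sketched minimally. Forward Euler stands in here for the paper's high-order explicit integrator, and the node/rate-function structure is an illustrative assumption:

```python
# Hedged sketch of time-step subcycling: nodes flagged as "fast" take
# nsub smaller substeps per global step; the rest take one full step.
def step_with_subcycling(y, f, dt, fast, nsub):
    """y: state per node id; f: rate function per node id; fast: set of ids."""
    out = {}
    for node, value in y.items():
        if node in fast:
            v, h = value, dt / nsub
            for _ in range(nsub):       # subcycle with the small step h
                v += h * f[node](v)
            out[node] = v
        else:
            out[node] = value + dt * f[node](value)
    return out

# Illustrative decaying system: node 'a' is subcycled, node 'b' is not.
state = step_with_subcycling(
    {"a": 1.0, "b": 1.0},
    {"a": lambda v: -v, "b": lambda v: -v},
    dt=0.5, fast={"a"}, nsub=5,
)
```

In a real dislocation dynamics code the "fast" set would be chosen adaptively, e.g. from nodes near collisions, which is where the paper's collision detection algorithm enters.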

  6. Advanced Techniques for Seismic Protection of Historical Buildings: Experimental and Numerical Approach

    SciTech Connect

    Mazzolani, Federico M.

    2008-07-08

The seismic protection of historical and monumental buildings, dating from the ancient age up to the 20th Century, is attracting greater and greater interest, above all in the Euro-Mediterranean area, whose cultural heritage is strongly susceptible to severe damage or even collapse due to earthquakes. The cultural importance of historical and monumental constructions limits, in many cases, the possibility of upgrading them from the seismic point of view, due to the fear of using intervention techniques which could have detrimental effects on their cultural value. Consequently, great interest is growing in the development of sustainable methodologies for the use of Reversible Mixed Technologies (RMTs) in the seismic protection of existing constructions. RMTs, in fact, are conceived to exploit the peculiarities of innovative materials and special devices, and they allow ease of removal when necessary. This paper deals with the experimental and numerical studies, framed within the EC PROHITECH research project, on the application of RMTs to historical and monumental constructions mainly belonging to the cultural heritage of the Euro-Mediterranean area. The experimental tests and the numerical analyses are carried out at five different levels, namely full-scale models, large-scale models, sub-systems, devices, and materials and elements.

  7. Numerical Study on Crossflow Printed Circuit Heat Exchanger for Advanced Small Modular Reactors

    SciTech Connect

    Yoon, Su-Jong; Sabharwall, Piyush; Kim, Eung-Soo

    2014-03-01

Various fluids such as water, gases (helium), molten salts (FLiNaK, FLiBe) and liquid metal (sodium) are used as coolants of advanced small modular reactors (SMRs). The printed circuit heat exchanger (PCHE) has been adopted as the intermediate and/or secondary heat exchanger of SMR systems because this heat exchanger is compact and effective. The size and cost of a PCHE depend on the coolant type of each SMR. In this study, a crossflow PCHE analysis code for advanced small modular reactors has been developed for the thermal design and cost estimation of the heat exchanger. The analytical solution of a single-pass, both-fluids-unmixed crossflow heat exchanger model was employed to calculate a two-dimensional temperature profile of a crossflow PCHE. The analytical solution of the crossflow heat exchanger was implemented using built-in functions of the MATLAB program. The effect of fluid property uncertainty on the calculation results was evaluated. In addition, the effect of heat transfer correlations on the calculated temperature profile was analyzed by taking into account possible combinations of primary and secondary coolants in the SMR systems. The size and cost of the heat exchanger were evaluated for the given temperature requirement of each SMR.

  8. Numerical modelling of the groundwater inflow to an advancing open pit mine: Kolahdarvazeh pit, Central Iran.

    PubMed

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Aslani, Soheyla; Baafi, Ernest

    2014-12-01

The groundwater inflow into a mine during its life and after ceasing operations is one of the most important concerns of the mining industry. This paper presents a hydrogeological assessment of the Irankuh Zn-Pb mine at 20 km south of Esfahan and 1 km northeast of Abnil in west-Central Iran. During mine excavation, the upper impervious bed of a confined aquifer was broken and water at high pressure flowed into an open pit mine associated with the Kolahdarvazeh deposit. The inflow rates were 6.7 and 1.4 m³/s at the maximum and minimum quantities, respectively. Permeability, storage coefficient, thickness and initial head of the fully saturated confined aquifer were 3.5 × 10⁻⁴ m/s, 0.2, 30 m and 60 m, respectively. The hydraulic heads as a function of time were monitored at four observation wells in the vicinity of the pit over 19 weeks and at an observation well near a test well over 21 h. In addition, by measuring the rate of pumping out from the pit sump at a constant head (usually equal to the height of the pit floor), the real inflow rates to the pit were monitored. The main innovations of this work were to compare numerical modelling, using finite element software called SEEP/W, with actual inflow data, and to extend the applicability of the numerical model. This model was further used to estimate the hydraulic heads at the observation wells around the pit over 19 weeks during mining operations. Data from a pump-out test and observation wells were used for model calibration and verification. In order to evaluate the model efficiency, the modelling results of inflow quantity and hydraulic heads were compared to those from analytical solutions, as well as the field data. The mean percent error in relation to field data for the inflow quantity was 0.108. It varied between 1.16 and 1.46 for hydraulic head predictions, which are much lower values than the mean percent errors resulting from the analytical solutions (from 1.8 to 5

  9. Numerical modelling of the groundwater inflow to an advancing open pit mine: Kolahdarvazeh pit, Central Iran.

    PubMed

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Aslani, Soheyla; Baafi, Ernest

    2014-12-01

The groundwater inflow into a mine during its life and after ceasing operations is one of the most important concerns of the mining industry. This paper presents a hydrogeological assessment of the Irankuh Zn-Pb mine at 20 km south of Esfahan and 1 km northeast of Abnil in west-Central Iran. During mine excavation, the upper impervious bed of a confined aquifer was broken and water at high pressure flowed into an open pit mine associated with the Kolahdarvazeh deposit. The inflow rates were 6.7 and 1.4 m³/s at the maximum and minimum quantities, respectively. Permeability, storage coefficient, thickness and initial head of the fully saturated confined aquifer were 3.5 × 10⁻⁴ m/s, 0.2, 30 m and 60 m, respectively. The hydraulic heads as a function of time were monitored at four observation wells in the vicinity of the pit over 19 weeks and at an observation well near a test well over 21 h. In addition, by measuring the rate of pumping out from the pit sump at a constant head (usually equal to the height of the pit floor), the real inflow rates to the pit were monitored. The main innovations of this work were to compare numerical modelling, using finite element software called SEEP/W, with actual inflow data, and to extend the applicability of the numerical model. This model was further used to estimate the hydraulic heads at the observation wells around the pit over 19 weeks during mining operations. Data from a pump-out test and observation wells were used for model calibration and verification. In order to evaluate the model efficiency, the modelling results of inflow quantity and hydraulic heads were compared to those from analytical solutions, as well as the field data. The mean percent error in relation to field data for the inflow quantity was 0.108. It varied between 1.16 and 1.46 for hydraulic head predictions, which are much lower values than the mean percent errors resulting from the analytical solutions (from 1.8 to 5

  10. Algorithm for quantifying advanced carotid artery atherosclerosis in humans using MRI and active contours

    NASA Astrophysics Data System (ADS)

    Adams, Gareth; Vick, G. W., III; Bordelon, Cassius; Insull, William; Morrisett, Joel

    2002-05-01

A new algorithm for measuring carotid artery volumes and estimating atherosclerotic plaque volumes from MRI images has been developed and validated using pressure-perfusion-fixed cadaveric carotid arteries. Our method uses an active contour algorithm with the generalized gradient vector field force as the external force to localize the boundaries of the artery on each MRI cross-section. Plaque volume is estimated by an automated algorithm based on estimating the normal wall thickness for each branch of the carotid. Triplicate volume measurements were performed by a single observer on thirty-eight pairs of cadaveric carotid arteries. The coefficient of variation (COV) was used to quantify measurement reproducibility. Aggregate volumes were computed for nine contiguous slices bounding the carotid bifurcation. The median (mean +/- SD) COV for the 76 aggregate arterial volumes was 0.93% (1.47% +/- 1.52%) for the lumen volume, 0.95% (1.06% +/- 0.67%) for the total artery volume, and 4.69% (5.39% +/- 3.97%) for the plaque volume. These results indicate that our algorithm provides repeatable measures of arterial volumes and a repeatable estimate of plaque volume of cadaveric carotid specimens through analysis of MRI images. The algorithm also significantly decreases the amount of time necessary to generate these measurements.

  11. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with piecewise-linear, least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.

  12. Evaluation of Temperature Gradient in Advanced Automated Directional Solidification Furnace (AADSF) by Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1996-01-01

A numerical model of heat transfer using combined conduction, radiation and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot and cold zone set point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reverse usage of hot and cold zones was simulated to aid the choice of proper orientation of the crystal/melt interface with respect to the residual acceleration vector without actually changing the furnace location on board the orbiter. It appears that an additional booster heater will be extremely helpful to ensure the desired temperature gradient when hot and cold zones are reversed. Further efforts are required to investigate the advantages/disadvantages of a symmetrical furnace design (i.e., with similar lengths of hot and cold zones).

  13. Advanced friction simulation of standardized friction tests: a numerical and experimental demonstrator

    NASA Astrophysics Data System (ADS)

    Hol, J.; Wiebenga, J. H.; Hörning, M.; Dietrich, F.; Dane, C.

    2016-08-01

    For the characterization of friction conditions under sheet metal forming process conditions, different friction test set-ups are being used in industry. However, different friction tests and test set-ups are known to result in scattering friction results. In this work, the TriboForm software is utilized to numerically model the frictional behavior. The simulated coefficients of friction are experimentally validated using friction results from a standardized strip drawing friction test set-up. The experimental and simulation results of the friction behavior show a good overall agreement. This demonstrates that the TriboForm software enables simulating friction conditions for varying tribology conditions, i.e. resulting in a generally applicable approach for friction characterization under industrial sheet metal forming process conditions.

  14. Numerical simulation of the reactive flow in advanced (HSR) combustors using KIVA-2

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1991-01-01

    Recent work has been done with the goal of establishing ultralow emission aircraft gas turbine combustors. A significant portion of the effort is the development of three dimensional computational combustor models. The KIVA-II computer code which is based on the Implicit Continuous Eulerian Difference mesh Arbitrary Lagrangian Eulerian (ICED-ALE) numerical scheme is one of the codes selected by NASA to achieve these goals. This report involves a simulation of jet injection through slanted slots within the Rich burn/Quick quench/Lean burn (RQL) baseline experimental rig. The RQL combustor distinguishes three regions of combustion. This work specifically focuses on modeling the quick quench mixer region in which secondary injection air is introduced radially through 12 equally spaced slots around the mixer circumference. Steady state solutions are achieved with modifications to the KIVA-II program. Work currently underway will evaluate thermal mixing as a function of injection air velocity and angle of inclination of the slots.

  15. A numerical investigation on the efficiency of range extending systems using Advanced Vehicle Simulator

    NASA Astrophysics Data System (ADS)

    Varnhagen, Scott; Same, Adam; Remillard, Jesse; Park, Jae Wan

    2011-03-01

Series plug-in hybrid electric vehicles of varying engine configuration and battery capacity are modeled using the Advanced Vehicle Simulator (ADVISOR). The performance of these vehicles is analyzed on the basis of energy consumption and greenhouse gas emissions on the tank-to-wheel and well-to-wheel paths. Both city and highway driving conditions are considered during the simulation. When simulated on the well-to-wheel path, it is shown that the range extender with a Wankel rotary engine consumes less energy and emits fewer greenhouse gases than the other systems with reciprocating engines during many driving cycles. The rotary engine has a higher power-to-weight ratio and lower noise, vibration and harshness than conventional reciprocating engines, although it operates less efficiently. The benefits of a Wankel engine make it an attractive option for use as a range extender in a plug-in hybrid electric vehicle.

  16. [Adequacy of clinical interventions in patients with advanced and complex disease. Proposal of a decision making algorithm].

    PubMed

    Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A

    2015-01-01

Decision making in the patient with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add any more damage to that of the disease itself. The adequacy of clinical interventions consists of offering only those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient, and of performing only those allowed by the patient or their representative. In this article, the use of an algorithm is proposed that should help health professionals in this decision-making process.

  17. Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chih; Hsiao, Shen-Fu

In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-products expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method has a significant area improvement compared with previous designs.
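Factoring shared terms out of bit-level XOR expressions is the core of this kind of area reduction: a pair of bits XORed in several outputs can be computed once and reused, saving one XOR gate per extra use. A hedged sketch of a generic greedy CSE pass, not the paper's specific algorithm; expression and signal names are illustrative:

```python
# Hedged sketch: greedy common-subexpression elimination for XOR networks.
# Each output is the XOR of a set of input bits; the most frequently
# shared pair is repeatedly factored into a shared intermediate signal.
from itertools import combinations
from collections import Counter

def greedy_xor_cse(outputs):
    """outputs: list of sets of signal names. Returns (rewritten, shared)."""
    outputs = [set(o) for o in outputs]
    shared, next_id = {}, 0
    while True:
        pairs = Counter()
        for o in outputs:
            for p in combinations(sorted(o), 2):
                pairs[p] += 1
        if not pairs or pairs.most_common(1)[0][1] < 2:
            break                       # no pair is reused: nothing to save
        (a, b), _ = pairs.most_common(1)[0]
        name = f"t{next_id}"
        next_id += 1
        shared[name] = (a, b)           # new intermediate: name = a XOR b
        for o in outputs:
            if a in o and b in o:
                o -= {a, b}
                o.add(name)
    return outputs, shared

outs, shared = greedy_xor_cse([
    {"x0", "x1", "x2"},   # y0 = x0 ^ x1 ^ x2
    {"x0", "x1", "x3"},   # y1 = x0 ^ x1 ^ x3
    {"x2", "x3"},         # y2 = x2 ^ x3
])
# t0 = x0 ^ x1 is computed once and reused by y0 and y1
```

On this toy example the network drops from five 2-input XOR gates to four, which is the kind of saving the paper pursues at much larger scale across the AES transformations.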

  18. Science-Based Approach for Advancing Marine and Hydrokinetic Energy: Integrating Numerical Simulations with Experiments

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.

    2011-12-01

    The field of marine and hydrokinetic (MHK) energy is still in its infancy, lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site-specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed, ultimately affecting the sediment transport and suspension processes in the wake of these structures. Such effects, however, remain largely unexplored today. In this work a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial-flow hydrokinetic turbine mounted on the bed of a rectangular open channel on a grid with nearly 200 million grid nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied

  19. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

    PubMed Central

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-01-01

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  20. A review on recent advances in numerical modelling of bone cutting.

    PubMed

    Marco, Miguel; Rodríguez-Millán, Marcos; Santiuste, Carlos; Giner, Eugenio; Henar Miguélez, María

    2015-04-01

    Common practice of surgical treatments in orthopaedics and traumatology involves cutting processes of bone. These operations introduce a risk of thermo-mechanical damage, since the threshold of critical temperature producing thermal osteonecrosis is very low. Therefore, it is important to develop predictive tools capable of accurately simulating the increase of temperature during bone cutting, and the modelling of these processes remains a challenge. In addition, the prediction of cutting forces and mechanical damage is also important during machining operations. As the accuracy of simulations depends greatly on the proper choice of the thermo-mechanical properties, an essential part of the numerical model is the constitutive behaviour of the bone tissue, which is treated in different ways in the literature. This paper reviews the main contributions in modelling of bone cutting, with special attention to the mechanical behaviour of bone. The aim is to give the reader a complete view of the approaches commonly presented in the literature, in order to help in the development of accurate models for bone cutting. PMID:25676359

  1. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations.

    PubMed

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-08-13

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs.

  3. Numerical Simulations of Optical Turbulence Using an Advanced Atmospheric Prediction Model: Implications for Adaptive Optics Design

    NASA Astrophysics Data System (ADS)

    Alliss, R.

    2014-09-01

    Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure parameter Cn2, which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn2 using empirical models, Cn2 can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecasting (WRF) NWP model to generate Cn2 climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed on a multi-node Linux cluster with Intel processors. The WRF model is configured to run at 1 km horizontal resolution and centered on the Mauna Loa Observatory (MLO) of the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification
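
    The final step from a modeled Cn2 profile to a seeing estimate is a single integral. Below is a minimal sketch, assuming the standard plane-wave Fried-length formula r0 = [0.423 k² sec(ζ) ∫ Cn²(z) dz]^(-3/5); the uniform profile is a made-up illustration, not WRF output.

```python
import numpy as np

def fried_r0(z, cn2, wavelength=500e-9, zenith_deg=0.0):
    """Fried coherence length r0 (m) from a vertical Cn^2 profile.

    z: heights (m); cn2: structure parameter values (m^-2/3) at those
    heights; k = 2*pi/lambda is the optical wavenumber.
    """
    k = 2.0 * np.pi / wavelength
    sec_zeta = 1.0 / np.cos(np.radians(zenith_deg))
    # trapezoidal integration of Cn^2 over height
    integral = float(np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(z)))
    return (0.423 * k**2 * sec_zeta * integral) ** (-0.6)

# illustrative profile: uniform 1e-16 m^(-2/3) over the first 10 km
z = np.linspace(0.0, 10e3, 101)
cn2 = np.full_like(z, 1e-16)
r0 = fried_r0(z, cn2)  # roughly 8 cm at 500 nm, i.e. moderate seeing
```

    The same integral, evaluated layer by layer along slant paths, yields the point-to-point and ground-to-space seeing estimates mentioned above.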

  4. Advancing Satellite-Based Flood Prediction in Complex Terrain Using High-Resolution Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Nikolopoulos, E. I.; Bartsotas, N. S.

    2015-12-01

    Floods constitute one of the most significant and frequent natural hazards in mountainous regions. Satellite-based precipitation products offer in many cases the only available source of quantitative precipitation estimates (QPE). However, satellite-based QPE over complex terrain suffers from significant bias that limits its applicability for hydrologic modeling. In this work we investigate the potential of a new correction procedure, which involves the use of high-resolution numerical weather prediction (NWP) model simulations to adjust satellite QPE. Adjustment is based on pdf matching of the satellite and NWP (used as reference) precipitation distributions. The impact of the correction procedure on the simulated hydrologic response is examined for 15 storm events that generated floods over the mountainous Upper Adige region of Northern Italy. Atmospheric simulations were performed at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS). The proposed error correction procedure was then applied to the widely used TRMM 3B42 satellite precipitation product, and the evaluation of the correction was based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2) available in the study area. Satellite QPE, before and after correction, are used to simulate flood response using ARFFS (Adige River Flood Forecasting System), a semi-distributed hydrologic model which is used for operational flood forecasting in the region. Results showed that the bias in satellite QPE before correction was significant and had a tremendous impact on the simulation of the flood peak; however, the correction procedure was able to reduce the bias in QPE and therefore considerably improve the simulated flood hydrograph.
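
    Pdf matching of this kind amounts to quantile mapping: each satellite value is replaced by the reference (NWP) value of equal non-exceedance probability. A minimal sketch under that assumption; the sample data are synthetic, not TRMM 3B42 or RAMS/ICLAMS output.

```python
import numpy as np

def cdf_match(sat, ref):
    """Bias-correct `sat` values by matching their empirical CDF to the
    distribution of `ref` (quantile mapping)."""
    sat = np.asarray(sat, dtype=float)
    sat_sorted = np.sort(sat)
    # empirical non-exceedance probability of each satellite value
    p = np.searchsorted(sat_sorted, sat, side="right") / sat_sorted.size
    # map each probability to the reference value at the same quantile
    return np.quantile(np.asarray(ref, dtype=float), p)

# synthetic example: satellite estimates biased high by a factor of 2
ref = np.arange(1.0, 101.0)        # stand-in for NWP precipitation
sat = 2.0 * np.arange(1.0, 101.0)  # stand-in for biased satellite QPE
corr = cdf_match(sat, ref)         # corrected values track ref's distribution
```

    After mapping, the corrected sample reproduces the reference distribution (here its mean and maximum), which is exactly the property the correction procedure exploits before driving the hydrologic model.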

  5. Implementation of advanced feedback control algorithms for controlled resonant magnetic perturbation physics studies on EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.

    2011-06-01

    The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of MHD instabilities in fusion plasmas. New feedback algorithms developed on the EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow a closed-loop identification of the machine plant and of the resistive wall mode growth rates. The plant identification is the starting point for the development of output-tracking algorithms which enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that a stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM, and that the RMP strongly affects the magnetic island position.

  6. Advancing Efficient All-Electron Electronic Structure Methods Based on Numeric Atom-Centered Orbitals for Energy Related Materials

    NASA Astrophysics Data System (ADS)

    Blum, Volker

    This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid functional based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.

  7. Numerical Viscous Flow Analysis of an Advanced Semispan Diamond-Wing Model at High-Lift Conditions

    NASA Technical Reports Server (NTRS)

    Ghaffari, F.; Biedron, R. T.; Luckring, J. M.

    2002-01-01

    Turbulent Navier-Stokes computational results are presented for an advanced diamond wing semispan model at low-speed, high-lift conditions. The numerical results are obtained in support of a wind-tunnel test that was conducted in the National Transonic Facility (NTF) at the NASA Langley Research Center. The model incorporated a generic fuselage and was mounted on the tunnel sidewall using a constant-width standoff. The analyses include: (1) the numerical simulation of the empty NTF tunnel flow characteristics; (2) the semispan high-lift model with the standoff in the tunnel environment; (3) the semispan high-lift model with the standoff and viscous sidewall in free air; and (4) the semispan high-lift model without the standoff in free air. The computations were performed at conditions that correspond to a nominal approach and landing configuration. The wing surface pressure distributions computed for the model in both the tunnel and in free air agreed well with the corresponding experimental data, and both indicated small increments due to the wall interference effects. However, the wall interference effects were found to be more pronounced in the total measured and computed lift, drag and pitching moment due to standoff-induced up-flow effects. Although the magnitudes of the computed forces and moment were slightly off compared to the measured data, the increments due to the wall interference effects were predicted well. Numerical predictions are also presented on the combined effects of the tunnel sidewall boundary layer and the standoff geometry on the fuselage fore-body pressure distributions and the resulting impact on the overall longitudinal aerodynamic characteristics of the configuration.

  8. Classical two-dimensional numerical algorithm for α-induced charge carrier advection-diffusion in Medipix-3 silicon pixel detectors

    NASA Astrophysics Data System (ADS)

    Biamonte, Mason; Idarraga, John

    2013-04-01

    A classical hybrid alternating-direction implicit difference scheme is used to simulate two-dimensional charge carrier advection-diffusion induced by alpha particles incident upon silicon pixel detectors at room temperature in vacuum. A mapping between the results of the simulation and a projection of the cluster size for each incident alpha is constructed. The error between the simulation and the experimental data diminishes with the increase in the applied voltage for the pixels in the central region of the cluster. Simulated peripheral pixel TOT values do not match the data for any value of applied voltage, suggesting possible modifications to the current algorithm from first principles. Coulomb repulsion between charge carriers is built into the algorithm using the Barnes-Hut tree algorithm. The plasma effect arising from the initial presence of holes in the silicon is incorporated into the simulation. The error between the simulation and the data helps identify physics not accounted for in standard literature simulation techniques.
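
    The alternating-direction implicit (ADI) machinery at the core of such a scheme can be sketched for the pure-diffusion part. The toy version below (Peaceman-Rachford splitting, zero-concentration boundaries, Thomas tridiagonal solver) is a generic illustration; it omits advection, Coulomb repulsion and the plasma effect discussed above.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main, c super-diagonal, d rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_diffusion_step(u, D, dt, h):
    """One Peaceman-Rachford ADI step for du/dt = D*(u_xx + u_yy).

    Each half step is implicit in one direction and explicit in the
    other, so only tridiagonal solves are needed per grid line.
    """
    r = D * dt / (2.0 * h * h)
    ny, nx = u.shape
    # half step 1: implicit in x, explicit in y
    half = np.zeros_like(u)
    for j in range(1, ny - 1):
        rhs = u[j, 1:-1] + r * (u[j + 1, 1:-1] - 2 * u[j, 1:-1] + u[j - 1, 1:-1])
        n = nx - 2
        half[j, 1:-1] = thomas(np.full(n, -r), np.full(n, 1 + 2 * r),
                               np.full(n, -r), rhs)
    # half step 2: implicit in y, explicit in x
    out = np.zeros_like(u)
    for i in range(1, nx - 1):
        rhs = half[1:-1, i] + r * (half[1:-1, i + 1] - 2 * half[1:-1, i]
                                   + half[1:-1, i - 1])
        n = ny - 2
        out[1:-1, i] = thomas(np.full(n, -r), np.full(n, 1 + 2 * r),
                              np.full(n, -r), rhs)
    return out
```

    Starting from a point charge-cloud in the grid center, one such step spreads the charge symmetrically while conserving its total, which is the behavior the carrier-diffusion part of the detector simulation relies on.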

  9. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    NASA Technical Reports Server (NTRS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which reduce an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19, MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much greater spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and additionally three channels in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made to similar data sets derived from other
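
    A simple threshold algorithm of this kind reduces to a brightness-temperature screen: large hail scatters upwelling radiation and depresses the brightness temperatures near the window and water-vapor channels. The sketch below follows that spirit only; the channel names and threshold values are illustrative placeholders, not the calibrated AMSU thresholds.

```python
def detect_hail(tb_89, tb_183, tb_89_max=220.0, tb_183_max=230.0):
    """Toy hail screen: flag a pixel when scattering by large ice has
    depressed both brightness temperatures (K) below their thresholds.

    tb_89 / tb_183 stand in for window- and water-vapor-band channels;
    the default thresholds are invented for illustration.
    """
    return tb_89 < tb_89_max and tb_183 < tb_183_max
```

    Applied pixel by pixel to twelve years of collocated overpasses, a screen like this yields the kind of monthly, annual and diurnal hail-frequency stratifications described above.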

  10. AMJoin: An Advanced Join Algorithm for Multiple Data Streams Using a Bit-Vector Hash Table

    NASA Astrophysics Data System (ADS)

    Kwon, Tae-Hyung; Kim, Hyeon-Gyu; Kim, Myoung-Ho; Son, Jin-Hyun

    A multiple stream join is one of the most important but high cost operations in ubiquitous streaming services. In this paper, we propose a newly improved and practical algorithm for joining multiple streams called AMJoin, which improves the multiple join performance by guaranteeing the detection of join failures in constant time. To achieve this goal, we first design a new data structure called BiHT (Bit-vector Hash Table) and present the overall behavior of AMJoin in detail. In addition, we show various experimental results and their analyses for clarifying its efficiency and practicability.
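
    The constant-time join-failure test can be sketched with per-key bit vectors: one bit per stream records whether that stream currently holds the key, and a single integer comparison decides joinability. This is a schematic reconstruction from the abstract (the class and method names are invented), not the published AMJoin implementation.

```python
class BiHT:
    """Sketch of a bit-vector hash table for an m-way stream join."""

    def __init__(self, num_streams):
        self.m = num_streams
        self.full = (1 << num_streams) - 1  # all streams present
        self.bits = {}    # key -> bit vector of streams holding the key
        self.tuples = {}  # (key, stream) -> buffered tuples

    def insert(self, stream, key, tup):
        """Buffer a tuple and set this stream's bit for its key."""
        self.bits[key] = self.bits.get(key, 0) | (1 << stream)
        self.tuples.setdefault((key, stream), []).append(tup)

    def probe(self, key):
        """Return matching tuples from every stream, or None on join
        failure -- detected with one O(1) bit-vector comparison."""
        if self.bits.get(key, 0) != self.full:
            return None  # at least one stream lacks this key
        return [self.tuples[(key, s)] for s in range(self.m)]

# usage: a 3-stream join succeeds only once all three streams saw the key
j = BiHT(3)
j.insert(0, "k", ("a",))
j.insert(1, "k", ("b",))
early = j.probe("k")       # None: stream 2 has not seen "k" yet
j.insert(2, "k", ("c",))
result = j.probe("k")      # all bits set, matching tuples returned
```

    The point of the bit vector is that a failed probe costs one dictionary lookup and one integer comparison, regardless of how many streams participate.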

  11. LDSS-P: an advanced algorithm to extract functional short motifs associated with coordinated gene expression

    PubMed Central

    Ichida, Hiroyuki; Long, Sharon R.

    2016-01-01

    Identifying functional elements in promoter sequences is a major goal in computational and experimental genome biology. Here, we describe an algorithm, Local Distribution of Short Sequences for Prokaryotes (LDSS-P), to identify conserved short motifs located at specific positions in the promoters of co-expressed prokaryotic genes. As a test case, we applied this algorithm to a symbiotic nitrogen-fixing bacterium, Sinorhizobium meliloti. The LDSS-P profiles that overlap with the 5′ section of the extracytoplasmic function RNA polymerase sigma factor RpoE2 consensus sequences displayed a sharp peak between positions -34 and -32 relative to the TSS. The corresponding genes overlap significantly with RpoE2 targets identified from previous experiments. We further identified several groups of genes that are co-regulated with characterized marker genes. Our data indicate that in S. meliloti, and possibly in other Rhizobiaceae species, the master cell cycle regulator CtrA may recognize an expanded motif (AACCAT), which is positionally shifted from the previously reported CtrA consensus sequence in Caulobacter crescentus. Bacterial one-hybrid experiments showed that base substitutions in the expanded motif either increased or decreased binding by CtrA. These results show the effectiveness of LDSS-P as a method to delineate functional promoter elements. PMID:27190233
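
    The core idea, counting where a short motif falls relative to the transcription start site (TSS) across many aligned promoters and looking for a sharp positional peak, can be sketched as follows. The function and example sequences are illustrative, not the published LDSS-P pipeline.

```python
from collections import Counter

def positional_profile(promoters, motif, window=100):
    """Count occurrences of `motif` at each position upstream of the TSS.

    Each promoter string is assumed to end at the TSS, so the last base
    is position -1. A sharp peak in the returned profile (e.g. near -35)
    suggests a positioned functional element rather than background.
    """
    counts = Counter()
    k = len(motif)
    for seq in promoters:
        region = seq[-window:]
        offset = len(region)  # distance of region start from the TSS
        for i in range(len(region) - k + 1):
            if region[i:i + k] == motif:
                counts[i - offset] += 1  # motif start relative to TSS
    return counts

# toy promoters, each with the motif starting 35 bases upstream of the TSS
proms = ["TTGACA" + "A" * 29, "C" * 10 + "TTGACA" + "G" * 29]
profile = positional_profile(proms, "TTGACA")
```

    Real LDSS-P profiles additionally normalize against background motif frequency across the gene set; the positional-counting step above is what produces the sharp -34 to -32 peak described in the abstract.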

  12. LDSS-P: an advanced algorithm to extract functional short motifs associated with coordinated gene expression.

    PubMed

    Ichida, Hiroyuki; Long, Sharon R

    2016-06-20

    Identifying functional elements in promoter sequences is a major goal in computational and experimental genome biology. Here, we describe an algorithm, Local Distribution of Short Sequences for Prokaryotes (LDSS-P), to identify conserved short motifs located at specific positions in the promoters of co-expressed prokaryotic genes. As a test case, we applied this algorithm to a symbiotic nitrogen-fixing bacterium, Sinorhizobium meliloti. The LDSS-P profiles that overlap with the 5' section of the extracytoplasmic function RNA polymerase sigma factor RpoE2 consensus sequences displayed a sharp peak between positions -34 and -32 relative to the TSS. The corresponding genes overlap significantly with RpoE2 targets identified from previous experiments. We further identified several groups of genes that are co-regulated with characterized marker genes. Our data indicate that in S. meliloti, and possibly in other Rhizobiaceae species, the master cell cycle regulator CtrA may recognize an expanded motif (AACCAT), which is positionally shifted from the previously reported CtrA consensus sequence in Caulobacter crescentus. Bacterial one-hybrid experiments showed that base substitutions in the expanded motif either increased or decreased binding by CtrA. These results show the effectiveness of LDSS-P as a method to delineate functional promoter elements. PMID:27190233

  13. Advancing algorithms, pipelines, and end-user control for analyzing billions of microbial sequences

    PubMed Central

    Gonzalez, Antonio; Knight, Rob

    2012-01-01

    The vast number of microbial sequences resulting from sequencing efforts using new technologies require us to re-assess currently available analysis methodologies and tools. Here we describe trends in the development and distribution of software for analyzing microbial sequence data. We then focus on one widely used set of methods, dimensionality reduction techniques, which allow users to summarize and compare these vast datasets. We conclude by emphasizing the utility of formal software engineering methods for development of computational biology tools, and the need for new algorithms for comparing microbial communities. Such large-scale comparisons will allow us to fulfill the dream of rapid integration and comparison of microbial sequence data sets, in a replicable analytical environment, in order to describe the microbial world we inhabit. PMID:22172529

  14. Facilitation of Third-party Development of Advanced Algorithms for Explosive Detection Using Workshops and Grand Challenges

    SciTech Connect

    Martz, H E; Crawford, C R; Beaty, J S; Castanon, D

    2011-02-15

    The US Department of Homeland Security (DHS) has requirements for future explosive detection scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. The purposes of this presentation are to review DHS's objectives for involving third parties in the development of advanced algorithms and then to discuss how these objectives are achieved using workshops and grand challenges. Terrorists are still trying and they are getting more sophisticated. There is a need to increase the number of smart people working on homeland security. Augmenting capabilities and capacities of system vendors with third-parties is one tactic. Third parties can be accessed via workshops and grand challenges. Successes have been achieved to date. There are issues that need to be resolved to further increase third party involvement.

  15. Numerically efficient angle, width, offset, and discontinuity determination of straight lines by the discrete Fourier-bilinear transformation algorithm.

    PubMed

    Lou, X M; Hassebrook, L G; Lhamon, M E; Li, J

    1997-01-01

    We introduce a new method for determining the number of straight lines, line angles, offsets, widths, and discontinuities in complicated images. In this method, line angles are obtained by searching the peaks of a hybrid discrete Fourier and bilinear transformed line angle spectrum. Numerical advantages and performance are demonstrated.

  16. Algorithmic implementations of domain decomposition methods for the diffraction simulation of advanced photomasks

    NASA Astrophysics Data System (ADS)

    Adam, Konstantinos; Neureuther, Andrew R.

    2002-07-01

    The domain decomposition method developed in [1] is examined in more detail. This method enables rapid computer simulation of advanced photomask (alt. PSM, masks with OPC) scattering and transmission properties. Compared to 3D computer simulation, speed-up factors of approximately 400, and up to approximately 200,000 when using the look-up table approach, are possible. Combined with the spatial frequency properties of projection printing systems, it facilitates accurate computer simulation of the projected image (normalized mean square error of a typical image is only a fraction of 1%). Some esoteric accuracy issues of the method are addressed and the way to handle arbitrary, Manhattan-type mask layouts is presented. The method is shown to be valid for off-axis incidence. The cross-talk model developed in [1] is used in 3D mask simulations (2D layouts).

  17. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and a low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
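
    The predictive-equation step is an ordinary power-law regression of mass on fitted-shape volume, done by least squares in log-log space. A sketch with synthetic data; the α-shape fitting itself is not reproduced, and the volumes and density used below are invented for illustration.

```python
import numpy as np

def fit_mass_volume(volumes, masses):
    """Fit mass = a * volume^b by linear least squares in log10 space.

    Returns (a, b); prediction is then a * v**b for a new volume v.
    """
    logv = np.log10(np.asarray(volumes, dtype=float))
    logm = np.log10(np.asarray(masses, dtype=float))
    b, log_a = np.polyfit(logv, logm, 1)  # slope = exponent, intercept = log a
    return 10.0 ** log_a, b

def predict_mass(v, a, b):
    """Apply the fitted power law to a fossil's reconstructed volume."""
    return a * v ** b

# synthetic calibration set: volumes (m^3) with an assumed uniform
# density of 1000 kg/m^3, so the true law is mass = 1000 * v
v = np.array([0.1, 1.0, 2.0, 5.0])
m = 1000.0 * v
a, b = fit_mass_volume(v, m)
```

    With real data the exponent b and coefficient a come out of the regression over the extant-mammal sample; the fossil mass is then read off the fitted line from the α-shape volume.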

  18. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1981-01-01

    A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling and heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.

  19. A Numerical Algorithm to Calculate the Pressure Distribution of the TPS Front End Due to Desorption Induced by Synchrotron Radiation

    SciTech Connect

    Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.

    2010-06-23

    The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) foresees outgassing induced by photons from both bending magnets and insertion devices. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.

  20. Numerical analysis of second harmonic generation for THz-wave in a photonic crystal waveguide using a nonlinear FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Saito, Kyosuke; Tanabe, Tadao; Oyama, Yutaka

    2016-04-01

    We present a numerical analysis describing the behavior of second harmonic generation (SHG) in the THz regime, taking into account both linear and nonlinear optical susceptibility. We employed a nonlinear finite-difference time-domain (nonlinear FDTD) method to simulate SHG output characteristics in a THz photonic crystal (PC) waveguide based on a semi-insulating gallium phosphide crystal. Unique phase-matching conditions, originating from photonic band dispersions with low group velocity, appear and shape the SHG output characteristics. This numerical study provides spectral information on the SHG output in the THz PC waveguide. THz PC waveguides are among the active nonlinear optical devices in the THz regime, and the nonlinear FDTD method is a powerful tool for designing photonic nonlinear THz devices.

  1. Solutions of the Two Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    Leblanc, James

    In this talk we present numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit we employ numerous methods including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.

  2. A block iterative finite element algorithm for numerical solution of the steady-state, compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1976-01-01

    An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
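
    The block Gauss-Seidel idea at the core of this abstract -- solving each diagonal block exactly while lagging the off-block unknowns -- can be sketched for a linear system (a hedged illustration; the paper applies the principle to the nonlinear Galerkin equations):

```python
import numpy as np

def block_gauss_seidel(A, b, blocks, iters=200):
    """Solve A x = b by block Gauss-Seidel: each sweep solves every
    diagonal block exactly while off-block unknowns are lagged at their
    most recent values."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for idx in blocks:
            Aii = A[np.ix_(idx, idx)]
            # b_i - sum_{j != i} A_ij x_j, written as a full row product
            # with the diagonal-block contribution added back
            rhs = b[idx] - A[idx] @ x + Aii @ x[idx]
            x[idx] = np.linalg.solve(Aii, rhs)
    return x
```

    For the nonlinear case, each block solve becomes a small Newton iteration, but the sweep structure is unchanged.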

  3. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

    NASA Astrophysics Data System (ADS)

    Motheau, E.; Abraham, J.

    2016-05-01

    A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the attainment of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth-order to sixth-order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
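
    The constant-coefficient spectral Poisson solve mentioned above can be illustrated in one dimension with periodic boundaries (a minimal sketch, not the paper's variable-coefficient, compact-scheme implementation):

```python
import numpy as np

def poisson_fft_1d(f, L):
    """Spectral solve of u'' = f on a periodic domain of length L:
    transform, divide each mode by -k^2, transform back. The k = 0 mode
    is set to zero (the solution is defined up to a constant)."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nz = k != 0
    uh[nz] = -fh[nz] / k[nz] ** 2
    return np.fft.ifft(uh).real
```

    Each Fourier mode of u'' = f satisfies -k² û = f̂, so the whole solve is one transform, a pointwise division, and one inverse transform, which is what makes FFT-based Poisson solvers attractive inside pressure-correction loops.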

  4. Advanced order management in ERM systems: the tic-tac-toe algorithm

    NASA Astrophysics Data System (ADS)

    Badell, Mariana; Fernandez, Elena; Puigjaner, Luis

    2000-10-01

    The concept behind improved enterprise resource planning (ERP) systems is the overall integration of the whole enterprise functionality into the management systems through financial links. Converting current software into real management decision tools requires crucial changes in the current approach to ERP systems. This evolution must be able to incorporate technological achievements both properly and in time. The exploitation phase of plants needs an open web-based environment for collaborative business-engineering with on-line schedulers. Today's short lifecycles of products and processes require sharp and finely tuned management actions that must be guided by scheduling tools. Additionally, such actions must be able to keep track of money movements related to supply chain events. Thus, the necessary outputs require financial-production integration at the scheduling level, as proposed in the new approach of enterprise resource management (ERM) systems. Within this framework, the economic analysis of the due-date policy and its optimization become essential to dynamically manage realistic and optimal delivery dates with price-time trade-offs during marketing activities. In this work we propose a scheduling tool with a web-based interface conducted by autonomous agents, provided with precise economic information on plant and business actions and their effects. It aims to attain a better arrangement of marketing and production events in order to face the bid/bargain process during e-commerce. Additionally, management systems require real-time execution and an efficient transaction-oriented approach capable of dynamically adopting realistic and optimal actions to support marketing management. To this end, the TicTacToe algorithm provides sequence optimization with acceptable tolerances in realistic time.

  5. Numerical and structural aberrations in advanced neuroblastoma tumours by CGH analysis; survival correlates with chromosome 17 status

    PubMed Central

    Cunsolo, C Lo; Bicocchi, M P; Petti, A R; Tonini, G P

    2000-01-01

    Rapid tumour progression in neuroblastoma is associated with MYCN amplification, deletion of the short arm of chromosome 1 and gain of 17q. However, patients with advanced disease without MYCN amplification and/or 1p deletion also have a very poor outcome, which suggests that other genetic defects may predict an unfavourable prognosis. We employed CGH to study 22 tumours from patients over one year of age at stages 3 and 4 (6 and 16 cases, respectively). Patients were divided into two groups: (A) long-term survivors and (B) short-term survivors. CGH showed a total of 226 chromosome imbalances (110 in group A and 116 in group B). The neuroblastoma cells of long-term survivors showed a preponderance of numerical aberrations (54% vs 43%), particularly gains of entire chromosomes 1 (P < 0.03), 7 (P < 0.04) and 19 (P < 0.05). An extra copy of chromosome 17 was detected in 6/8 (75%) samples of group A and only 1/14 (7%) samples of group B (P < 0.002). Conversely, tumours of patients who died from disease progression displayed a higher frequency of structural abnormalities (43% vs 35%), including loss of 1p, 9p, 11q, 15q and 18q and gain of 12q, although the difference was not significant (P = 0.24). The distinct genetic differences observed in the tumours of long- and short-term survivors may have prognostic relevance. © 2000 Cancer Research Campaign PMID:11044353

  6. Numerical dispersion, stability, and phase-speed for 3D time-domain finite-difference seismic wave propagation algorithms

    NASA Astrophysics Data System (ADS)

    Haney, M. M.; Aldridge, D. F.; Symons, N. P.

    2005-12-01

    Numerical solution of partial differential equations by explicit, time-domain, finite-difference (FD) methods entails approximating temporal and spatial derivatives by discrete function differences. Thus, the solution of the difference equation will not be identical to the solution of the underlying differential equation. Solution accuracy degrades if temporal and spatial gridding intervals are too large. Overly coarse spatial gridding leads to spurious artifacts in the calculated results referred to as numerical dispersion, whereas coarse temporal sampling may produce numerical instability (manifest as unbounded growth in the calculations as FD timestepping proceeds). Quantitative conditions for minimizing dispersion and avoiding instability are developed by deriving the dispersion relation appropriate for the discrete difference equation (or coupled system of difference equations) under examination. A dispersion relation appropriate for FD solution of the 3D velocity-stress system of isotropic elastodynamics, on staggered temporal and spatial grids, is developed. The relation applies to either compressional or shear wave propagation, and reduces to the proper form for acoustic propagation in the limit of vanishing shear modulus. A stability condition and a plane-wave phase-speed formula follow as consequences of the dispersion relation. The mathematical procedure utilized for the derivation is a modern variant of classical von Neumann analysis, and involves a 4D discrete space/time Fourier transform of the nine, coupled, FD updating formulae for particle velocity vector and stress tensor components. The method is generalized to seismic wave propagation within anelastic and poroelastic media, as well as sound wave propagation within a uniformly-moving atmosphere. 
A significant extension of the approach yields a stability condition for wave propagation across an interface between dissimilar media with strong material contrast (e.g., the earth's surface, the seabed).
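
    The flavor of such a von Neumann analysis can be shown on the simplest relative of the 3D velocity-stress system: the 1D second-order leapfrog scheme for the scalar wave equation, whose dispersion relation is sin(ωΔt/2) = S sin(kΔx/2) with Courant number S = cΔt/Δx ≤ 1 for stability. This is a toy analogue of the abstract's derivation, not the 3D staggered-grid result itself:

```python
import numpy as np

def phase_speed_ratio(ppw, courant):
    """Numerical-to-exact phase-speed ratio for the 1D second-order
    leapfrog scheme, from its von Neumann dispersion relation
    sin(w*dt/2) = S*sin(k*dx/2), with S = c*dt/dx the Courant number
    and ppw the number of grid points per wavelength."""
    kdx = 2.0 * np.pi / ppw           # k * dx for the chosen sampling
    wdt = 2.0 * np.arcsin(courant * np.sin(kdx / 2.0))
    return wdt / (courant * kdx)      # (w/k) / c
```

    At S = 1 the 1D scheme is dispersion-free; for S < 1 the numerical wave lags the true wave, and the lag shrinks as the spatial sampling is refined -- exactly the behavior that quantitative gridding guidelines are designed to control.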

  7. Direct Numerical Simulation of Boiling Multiphase Flows: State-of-the-Art, Modeling, Algorithmic and Computer Needs

    SciTech Connect

    Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.

    2007-04-01

    The state-of-the-art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focussing on the potential of available computational techniques and the level of current success in applying them to model several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWRs, requiring high-fidelity treatment of interfacial dynamics, phase-change, hydrodynamics, compressibility, heat transfer, and non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.

  8. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
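
    The smallest instance of the benchmarking idea -- checking a numerical diagonalization against a known exact result -- is the two-site Hubbard model at half filling, whose Sz = 0 sector is a 4x4 matrix with exact ground-state energy (U − √(U² + 16t²))/2. This is an independent textbook illustration, not part of the paper's benchmark suite:

```python
import numpy as np

def two_site_hubbard_ground_energy(t, U):
    """Exact diagonalization of the two-site Hubbard model at half
    filling, Sz = 0 sector, basis {|ud,0>, |0,ud>, |u,d>, |d,u>}:
    hopping t couples doubly and singly occupied states, while U
    penalizes double occupancy."""
    H = np.array([[U, 0.0, -t, t],
                  [0.0, U, -t, t],
                  [-t, -t, 0.0, 0.0],
                  [t, t, 0.0, 0.0]])
    return np.linalg.eigvalsh(H).min()
```

    The same check at U = 0 recovers the tight-binding result E0 = -2t; agreement of this kind, scaled up to large lattices and many methods, is what the benchmark program above systematizes.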

  9. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of the subject DOE sponsored research concerns parallel methods, algorithms, and software for complex applications such as those in coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration and related issues. In addition to the fundamental questions related to developing new methods and mapping these to parallel computers, there are important software issues. The group has played a significant role in the development of software both for iterative solvers and also for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large scale applications (not computational kernels alone). The main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE sponsored research significant progress has been made. A report of the progression of the research is given and at the end of the report is a list of related publications and presentations over the entire grant period.

  10. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  11. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  12. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  13. Keeping it Together: Advanced algorithms and software for magma dynamics (and other coupled multi-physics problems)

    NASA Astrophysics Data System (ADS)

    Spiegelman, M.; Wilson, C. R.

    2011-12-01

    A quantitative theory of magma production and transport is essential for understanding the dynamics of magmatic plate boundaries, intra-plate volcanism and the geochemical evolution of the planet. It also provides one of the most challenging computational problems in solid Earth science, as it requires consistent coupling of fluid and solid mechanics together with the thermodynamics of melting and reactive flows. Considerable work on these problems over the past two decades shows that small changes in assumptions about coupling (e.g. the relationship between melt fraction and solid rheology) can have profound effects on the behavior of these systems, which in turn affects critical computational choices such as discretizations, solvers and preconditioners. To make progress in exploring and understanding this physically rich system requires a computational framework that allows more flexible, high-level description of multi-physics problems as well as increased flexibility in composing efficient algorithms for solution of the full non-linear coupled system. Fortunately, recent advances in available computational libraries and algorithms provide a platform for implementing such a framework. We present results from a new model building system that leverages functionality from both the FEniCS project (www.fenicsproject.org) and PETSc libraries (www.mcs.anl.gov/petsc) along with a model-independent options system and GUI, Spud (amcg.ese.ic.ac.uk/Spud). Key features from FEniCS include fully unstructured FEM with a wide range of elements; a high-level language (UFL) and code generation compiler (FFC) for describing the weak forms of residuals; and automatic differentiation for calculation of exact and approximate Jacobians. The overall strategy is to monitor/calculate residuals and Jacobians for the entire non-linear system of equations within a global non-linear solve based on PETSc's SNES routines. PETSc already provides a wide range of solvers and preconditioners, from

  14. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  15. An advanced algorithm for construction of Integral Transport Matrix Method operators using accumulation of single cell coupling factors

    SciTech Connect

    Powell, B. P.; Azmy, Y. Y.

    2013-07-01

    The Integral Transport Matrix Method (ITMM) has been shown to be an effective method for solving the neutron transport equation in large domains on massively parallel architectures. In the limit of very large number of processors, the speed of the algorithm, and its suitability for unstructured meshes, i.e. other than an ordered Cartesian grid, is limited by the construction of four matrix operators required for obtaining the solution in each sub-domain. The existing algorithm used for construction of these matrix operators, termed the differential mesh sweep, is computationally expensive and was developed for a structured grid. This work proposes the use of a new algorithm for construction of these operators based on the construction of a single, fundamental matrix representing the transport of a particle along every possible path throughout the sub-domain mesh. Each of the operators is constructed by multiplying an element of this fundamental matrix by two factors dependent only upon the operator being constructed and on properties of the emitting and incident cells. The ITMM matrix operator construction time for the new algorithm is demonstrated to be shorter than the existing algorithm in all tested cases with both isotropic and anisotropic scattering considered. While also being a more efficient algorithm on a structured Cartesian grid, the new algorithm is promising in its geometric robustness and potential for being applied to an unstructured mesh, with the ultimate goal of application to an unstructured tetrahedral mesh on a massively parallel architecture. (authors)
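
    A 1D caricature of the accumulation idea (an assumption-laden sketch, not the authors' implementation): build a single fundamental matrix of cell-to-cell transmissions, extending each path by one multiplication per cell instead of re-sweeping every path from scratch.

```python
import numpy as np

def fundamental_matrix(tau):
    """F[i, j] = transmission from the left face of cell j to the left
    face of cell i (i >= j) in a 1D slab with cell optical thicknesses
    tau. Each entry extends the previous path by a single
    multiplication rather than recomputing the whole product."""
    n = len(tau)
    F = np.zeros((n, n))
    for j in range(n):
        acc = 1.0                      # running product of transmissions
        for i in range(j, n):
            F[i, j] = acc
            acc *= np.exp(-tau[i])     # extend the path through cell i
    return F
```

    In the spirit of the abstract, the four ITMM operators would then be obtained by scaling entries of one such fundamental matrix by factors that depend only on the emitting and incident cells.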

  16. Numerical Modeling for Hole-Edge Cracking of Advanced High-Strength Steels (AHSS) Components in the Static Bend Test

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Mohr, William; Yang, Yu-Ping; Zelenak, Paul; Kimchi, Menachem

    2011-08-01

    Numerical modeling of local formability, such as hole-edge cracking and shear fracture in bending of AHSS, is one of the challenging issues for simulation engineers for prediction and evaluation of stamping and crash performance of materials. This is because continuum-mechanics-based finite element method (FEM) modeling requires additional input data, "failure criteria" to predict the local formability limit of materials, in addition to the material flow stress data input for simulation. This paper presents a numerical modeling approach for predicting hole-edge failures during static bend tests of AHSS structures. A local-strain-based failure criterion and a stress-triaxiality-based failure criterion were developed and implemented in LS-DYNA simulation code to predict hole-edge failures in component bend tests. The holes were prepared using two different methods: mechanical punching and water-jet cutting. In the component bend tests, the water-jet trimmed hole showed delayed fracture at the hole-edges, while the mechanical punched hole showed early fracture as the bending angle increased. In comparing the numerical modeling and test results, the load-displacement curve, the displacement at the onset of cracking, and the final crack shape/length were used. Both failure criteria also enable the numerical model to differentiate between the local formability limit of mechanical-punched and water-jet-trimmed holes. The failure criteria and static bend test developed here are useful to evaluate the local formability limit at a structural component level for automotive crash tests.
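
    The stress-triaxiality quantity underlying the second failure criterion has a standard definition, η = σ_mean/σ_vM, computable directly from a Cauchy stress tensor (a generic sketch of the definition, not the calibrated criterion implemented in LS-DYNA):

```python
import numpy as np

def triaxiality(sigma):
    """Stress triaxiality eta = sigma_mean / sigma_vonMises for a 3x3
    Cauchy stress tensor."""
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)                 # deviatoric part
    von_mises = np.sqrt(1.5 * np.sum(dev * dev))   # equivalent stress
    return mean / von_mises
```

    Uniaxial tension gives η = 1/3 and pure shear gives η = 0, which is why a triaxiality-based criterion can separate stress states that a strain measure alone cannot.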

  17. Advanced numerical methods for the simulation of flows in heterogeneous porous media and their application to parallel computing

    SciTech Connect

    Rame, M.

    1990-01-01

    Flows in highly heterogeneous porous media arise in a variety of processes including enhanced oil recovery, in situ bioremediation of underground contaminants, transport in underground aquifers and transport through biological membranes. The common denominator of these processes is the transport (and possibly reaction) of a multi-component fluid in several phases. A new numerical methodology for the analysis of flows in heterogeneous porous media is presented. Cases of miscible and immiscible displacement are simulated to investigate the influence of the local heterogeneities on the flow paths. This numerical scheme allows for a fine description of the flowing medium and the concentration and saturation distributions thus generated show low numerical dispersion. If the size of the area of interest is a square of a thousand feet per side, geological information on the porous medium can be incorporated to a length scale of about one to two feet. The technique here introduced, Operator Splitting on Multiple Grids, solves the elliptic operators by a higher-order finite-element technique on a coarse grid that proves efficient and accurate in incorporating different scales of heterogeneities. This coarse solution is interpolated to a fine grid by a splines-under-tension technique. The equations for the conservation of species are solved on this fine grid (of approximately half a million cells) by a finite-difference technique yielding numerical dispersions of less than ten feet. Cases presented herein involve a single phase miscible flow, and liquid-phase immiscible displacements. Cases are presented for model distributions of physical properties and for porosity and permeability data taken from a real reservoir. Techniques for the extension of the methods to compressible flow situations and compositional simulations are discussed.

  18. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparison with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the Linear Boltzmann Transport Equation, has been installed in the Eclipse treatment planning system (Varian Medical Oncology System, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc-MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction for the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck were compared between the AXB and AAA algorithms and MC by means of dose volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions with the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region, where the values were 0.700 MeV for MC, 0.711 MeV for AXB11, and 1.011 MeV for AXB10. Since the AAA algorithm is based on the dose kernel of water, the doses in the air, bone, and aluminum regions became considerably higher than those of AXB and MC. The pass rates of the gamma analysis for IMRT dose distributions in head and neck were similar to those of MC in order of AXB11

  19. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.
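
    The simplest member of the family LOCA implements is natural-parameter continuation: step a parameter, then re-solve with Newton's method starting from the previous solution. A self-contained scalar toy (not LOCA's C++ API):

```python
def continue_branch(f, dfdx, x0, lams, tol=1e-12):
    """Natural-parameter continuation: for each parameter value, solve
    f(x, lam) = 0 with Newton's method, warm-started from the solution
    at the previous parameter value."""
    x, branch = x0, []
    for lam in lams:
        for _ in range(50):                    # Newton iteration
            step = f(x, lam) / dfdx(x, lam)
            x -= step
            if abs(step) < tol:
                break
        branch.append(x)
    return branch

# Toy branch: x^2 + lam - 1 = 0, i.e. x = sqrt(1 - lam) on the upper branch.
f = lambda x, lam: x * x + lam - 1.0
dfdx = lambda x, lam: 2.0 * x
branch = continue_branch(f, dfdx, 1.0, [0.0, 0.19, 0.36, 0.51, 0.64, 0.75])
```

    Turning points, where dfdx vanishes, defeat this naive scheme; handling them (e.g., via pseudo-arclength continuation) and tracking bifurcations are precisely the capabilities a library like LOCA adds.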

  20. Evaluation of the Advanced-Canopy-Atmosphere-Surface Algorithm (ACASA Model) Using Eddy Covariance Technique Over Sparse Canopy

    NASA Astrophysics Data System (ADS)

    Marras, S.; Spano, D.; Sirca, C.; Duce, P.; Snyder, R.; Pyles, R. D.; Paw U, K. T.

    2008-12-01

    Land surface models are usually used to quantify energy and mass fluxes between terrestrial ecosystems and the atmosphere on micro- and regional scales. One of the most elaborate land surface models for flux modelling is the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) model, which provides micro-scale as well as regional-scale fluxes when embedded in a meso-scale meteorological model (e.g., MM5 or WRF). The model predicts vegetation conditions and their changes with time due to plant responses to environmental variables. In particular, fluxes and profiles of heat, water vapor, carbon and momentum within and above the canopy are estimated using third-order equations. It also estimates turbulent profiles of velocity, temperature and humidity within and above the canopy, and CO2 fluxes are estimated using a combination of the Ball-Berry and Farquhar equations. The ACASA model is also able to include the effects of water stress on stomata, transpiration and CO2 assimilation. The ACASA model is unique because it separates the canopy domain into twenty atmospheric layers (ten within the canopy and ten above it), and the soil is partitioned into fifteen layers of variable thickness. Because the model was mainly used over dense canopies in the past, the aim of this work was to test the ACASA model over a sparse canopy, the Mediterranean maquis. The vegetation is composed of sclerophyllous shrub species that are evergreen, with leathery leaves, small height and a moderately sparse canopy, and that are tolerant of water stress. The Eddy Covariance (EC) technique was used to collect continuous data over a period of more than 3 years. Field measurements were taken in a natural maquis site located near Alghero, Sardinia, Italy, and were used to parameterize and validate the model. The input values were selected by running the model several times, varying one parameter at a time. A second step in the parameterization process was the simultaneous variation of several parameters

  1. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  2. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improving the efficiency of photovoltaic systems through new maximum power point tracking (MPPT) algorithms is a promising approach because of its low cost and easy implementation without equipment upgrades. Many MPPT methods with a fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms degrades. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional Incremental Conductance method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both the simulation and the experimental design are presented in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. PMID:26337741
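
    The abstract does not give the update rule itself, so the following is only a minimal sketch of a generic variable step size Incremental Conductance update; the gain, the duty-cycle limits, and the sign convention (decreasing duty raises the PV operating voltage) are all assumptions for illustration, not the paper's design:

```python
def ic_mppt_step(v, i, v_prev, i_prev, d_prev, gain=0.05, d_min=0.1, d_max=0.9):
    """One variable step size Incremental Conductance update.

    Returns the new converter duty cycle.  `gain` and the duty limits are
    illustrative tuning values; the convention assumed here is that
    decreasing the duty cycle raises the PV operating voltage.
    """
    dv, di = v - v_prev, i - i_prev
    dp = v * i - v_prev * i_prev
    # Step size scales with |dP/dV|: large far from the MPP, small near it.
    step = gain * abs(dp / dv) if dv != 0 else gain * abs(di)
    if dv == 0:
        if di == 0:
            d = d_prev                  # operating point unchanged
        else:
            d = d_prev - step if di > 0 else d_prev + step
    else:
        g = di / dv                     # incremental conductance dI/dV
        if abs(g + i / v) < 1e-6:       # dI/dV == -I/V  ->  at the MPP
            d = d_prev
        elif g > -i / v:                # left of the MPP: raise voltage
            d = d_prev - step
        else:                           # right of the MPP: lower voltage
            d = d_prev + step
    return min(max(d, d_min), d_max)
```

    The |dP/dV|-proportional step is what gives the variable-step variant its advantage: large steps during rapid irradiance changes, small steps near the MPP, which is precisely the speed/accuracy trade-off the paper targets.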

  3. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  4. Advanced Tsunami Numerical Simulations and Energy Considerations by use of 3D-2D Coupled Models: The October 11, 1918, Mona Passage Tsunami

    NASA Astrophysics Data System (ADS)

    López-Venegas, Alberto M.; Horrillo, Juan; Pampell-Manis, Alyssa; Huérfano, Victor; Mercado, Aurelio

    2015-06-01

    The most recent tsunami observed along the coast of the island of Puerto Rico occurred on October 11, 1918, after a magnitude 7.2 earthquake in the Mona Passage. The earthquake was responsible for initiating a tsunami that mostly affected the northwestern coast of the island. Runup values from a post-tsunami survey indicated the waves reached up to 6 m. A controversy regarding the source of the tsunami has resulted in several numerical simulations involving either fault rupture or a submarine landslide as the most probable cause of the tsunami. Here we follow up on previous simulations of the tsunami from a submarine landslide source off the western coast of Puerto Rico as initiated by the earthquake. Improvements on our previous study include: (1) higher-resolution bathymetry; (2) a 3D-2D coupled numerical model specifically developed for the tsunami; (3) use of the non-hydrostatic numerical model NEOWAVE (non-hydrostatic evolution of ocean WAVE) featuring two-way nesting capabilities; and (4) a comprehensive energy analysis to determine the time of full tsunami wave development. The three-dimensional tsunami solution using the Navier-Stokes algorithm with multiple interfaces for two fluids (water and landslide) was used to determine the initial wave characteristics generated by the submarine landslide. Use of NEOWAVE enabled us to solve for coastal inundation, wave propagation, and detailed runup. Our results are in agreement with previous work in which a submarine landslide is favored as the most probable source of the tsunami, and the improved resolution of the bathymetry yielded inundation of the coastal areas that compares well with values from a post-tsunami survey. Our unique energy analysis indicates that most of the wave energy is isolated in the wave generation region, particularly at depths near the landslide, and once the initial wave propagates from the generation region its energy begins to stabilize.

  5. Numerical Modeling for Springback Predictions by Considering the Variations of Elastic Modulus in Stamping Advanced High-Strength Steels (AHSS)

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Kimchi, Menachem

    2011-08-01

    This paper presents a numerical modeling approach for predicting springback that considers the variation of elastic modulus in stamping AHSS. Various stamping tests and finite-element method (FEM) simulation codes were used in this study. Cyclic loading-unloading tensile tests were conducted to determine the variation of elastic modulus for dual-phase (DP) 780 sheet steel. The biaxial bulge test was used to obtain plastic flow stress data. The nonlinear reduction of elastic modulus with increasing plastic strain was formulated using the Yoshida model and implemented in FEM simulations of springback. To understand the effects of material properties on springback, experiments were conducted with a simple geometry, U-shape bending, and more complex geometries, curved flanging and S-rail stamping. Different measurement methods were used to confirm the final part geometry. Two commercial FEM codes, LS-DYNA and DEFORM, were used for comparison with the experiments. The variable elastic modulus improved springback predictions in the U-shape bending and curved flanging tests compared to FEM with a constant elastic modulus. However, in the S-rail stamping tests, both FEM models with the isotropic hardening model showed limitations in predicting the sidewall curl of the S-rail part after springback. To capture the kinematic hardening and Bauschinger effects that result from material bending-unbending in S-rail stamping, the Yoshida model was used in the FEM simulation of S-rail stamping and springback. The FEM predictions then showed much better correlation with the experiments.
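
    The abstract does not reproduce the modulus-degradation law; a common form used with Yoshida-type models is a saturating exponential decay of the chord (unloading) modulus with accumulated plastic strain. The DP780-like parameter values below are illustrative assumptions, not the paper's fitted data:

```python
import math

def chord_modulus(eps_p, E0=207e3, Ea=165e3, xi=55.0):
    """Unloading (chord) modulus in MPa as a function of equivalent
    plastic strain eps_p.  E0 is the virgin elastic modulus, Ea the
    saturated modulus at large strain, and xi controls the decay rate.
    All three numeric values here are assumed for illustration.
    """
    return E0 - (E0 - Ea) * (1.0 - math.exp(-xi * eps_p))
```

    In an FEM springback run, a modulus of this form replaces the constant one during unloading; the reduced stiffness at large strains is what lowers the predicted elastic recovery toward the measured values.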

  6. GOAL: an inverse toxicity-related algorithm for daily clinical practice decision making in advanced kidney cancer.

    PubMed

    Bracarda, Sergio; Sisani, Michele; Marrocolo, Francesca; Hamzaj, Alketa; del Buono, Sabrina; De Simone, Valeria

    2014-03-01

    Metastatic renal cell carcinoma (mRCC), considered almost an orphan disease only six years ago, is today a very dynamic pathology. The recent switch to the current crowded scenario, defined by seven active drugs, has left physicians in a state of uncertainty owing to the difficulty of defining the best possible treatment strategy. This situation is mainly related to the absence of predictive biomarkers for any available or new therapy. This issue, together with the near absence of published face-to-face studies, draws a complex picture. To solve this dilemma, decisional algorithms tailored to drug efficacy data and patient profile are recognized as very useful tools. These approaches try to select the best therapy for every patient profile. By contrast, the present review has the "goal" of suggesting a reverse approach: based on the pivotal studies, post-marketing surveillance reports and our own experience, we defined the polarizing toxicity (the most frequent toxicity in the light of clinical experience) for each therapy, creating a new algorithm able to identify the patient profile, mainly comorbidities, unquestionably unsuitable for each agent presently available for either first- or second-line therapy. The GOAL inverse decision-making algorithm, proposed at the end of this review, allows the best therapy for mRCC to be selected while reducing the risk of limiting toxicities. PMID:24309065

  7. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% for both M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faulting. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  8. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team comprising the US Army Corps of Engineers Omaha District and Engineering and Support Center, Huntsville, the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, using SRI's airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization to identify contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing, in order to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application to the detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  9. Advanced algorithms and high-performance testbed for large-scale site characterization and subsurface target detection using airborne ground-penetrating SAR

    NASA Astrophysics Data System (ADS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1999-08-01

    A team comprising the US Army Corps of Engineers Omaha District and Engineering and Support Center, Huntsville, JPL, Stanford Research Institute (SRI), and Montgomery Watson is currently planning and conducting the largest ever survey at the Former Buckley Field, in Colorado, using SRI's airborne, ground-penetrating SAR. The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization to identify contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing, in order to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data. In this paper, we discuss these algorithms and their successful application to the detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  10. Numerical evaluation of longitudinal motions of Wigley hulls advancing in waves by using Bessho form translating-pulsating source Green's function

    NASA Astrophysics Data System (ADS)

    Xiao, Wenbin; Dong, Wencai

    2016-06-01

    In the framework of 3D potential flow theory, the Bessho form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for a ship advancing in regular waves. The numerical characteristics of the Green's function show that the contribution of the local-flow components to the velocity potential is concentrated near the source point, while the wave component dominates the magnitude of the velocity potential in the far field. Two mathematical models, with and without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method to various ship forms. In addition, a mesh analysis of the discretized surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and that the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is appropriate only for qualitative analysis of the motion response in waves if the ship's geometry fails to satisfy the slender-body assumption.

  11. Towards Direct Numerical Simulation of mass and energy fluxes at the soil-atmospheric interface with advanced Lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Krafczyk, Manfred; Geier, Martin; Schönherr, Martin

    2014-05-01

    The quantification of soil evaporation and of soil water content dynamics near the soil surface is critical in the physics of land-surface processes on many scales and is dominated by multi-component and multi-phase mass and energy fluxes between the ground and the atmosphere. Although it is widely recognized that both liquid and gaseous water movement are fundamental factors in the quantification of soil heat flux and surface evaporation, their computation has so far been taken into account only by simplified macroscopic models. As the flow field over the soil can safely be considered turbulent, it would be natural to study the detailed transient flow dynamics by means of Large Eddy Simulation (LES [1]), where the three-dimensional flow field is resolved down to the laminar sub-layer. Yet this requires very finely resolved meshes, with a grid resolution at least one order of magnitude below the typical grain diameter of the soil under consideration. In order to gain reliable turbulence statistics, up to several hundred eddy turnover times have to be simulated, which adds up to several seconds of real time. Yet the time scale of the receding saturated water front dynamics in the soil is on the order of hours. Thus we are faced with the task of solving a transient turbulent flow problem including the advection-diffusion of water vapour over the soil-atmospheric interface, represented by a realistic tomographic reconstruction of a real porous medium taken from laboratory samples. Our flow solver is based on the Lattice Boltzmann method (LBM) [2], which has been extended by a Cumulant approach similar to the one described in [3,4] to minimize the spurious coupling between the degrees of freedom in previous LBM approaches, and which can be used as an implicit LES turbulence model due to its low numerical dissipation and increased stability at high Reynolds numbers.
The kernel has been integrated into the research code Virtualfluids [5] and delivers up to 30% of the

  12. Design of an advanced positron emission tomography detector system and algorithms for imaging small animal models of human disease

    NASA Astrophysics Data System (ADS)

    Foudray, Angela Marie Klohs

    Detecting, quantifying and visualizing biochemical mechanisms in a living system without perturbing function is the goal of the instrument and algorithms designed in this thesis. The biochemical mechanisms of cells have long been known to depend on the signals they receive from their environment. Studying the biological processes of cells in vitro can vastly distort their function, since the cells are removed from their natural chemical signaling environment. Mice have become the biological system of choice for various areas of biomedical research due to their genetic and physiological similarities with humans, the relatively low cost of their care, and their quick breeding cycle. Drug development and efficacy assessment, along with disease detection, management, and mechanism research, have all benefited from the use of small animal models of human disease. A high-resolution, high-sensitivity, three-dimensional (3D) positioning positron emission tomography (PET) detector system was designed through device characterization and Monte Carlo simulation. Position-sensitive avalanche photodiodes (PSAPDs) were characterized in various packaging configurations, coupled to various configurations of lutetium oxyorthosilicate (LSO) scintillation crystals. Forty final-design devices with novel packaging were constructed and characterized, each providing characteristics superior to commercially available scintillation detectors used in small animal imaging systems: ~1 mm crystal identification, 14-15% energy resolution at 511 keV, and coincidence time resolutions averaging 1.9 to 5.6 ns. A closed-cornered box-shaped detector configuration was found to provide optimal photon sensitivity (~10.5% in the central plane) using dual LSO-PSAPD scintillation detector modules and Monte Carlo simulation. Standard figures of merit were used to determine optimal system acquisition parameters. A realistic model for the constituent devices was developed for understanding the signals reported by the

  13. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order accuracy in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension and demonstrated in two space dimensions.
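
    The simplest instance of the single-step Taylor (Cauchy-Kowalevskaya) construction described above is the classical second-order Lax-Wendroff scheme for the 1D convection equation. The sketch below is a low-order illustration of the idea, not one of the paper's eighth-order algorithms:

```python
import numpy as np

def lax_wendroff_step(u, a, dt, dx):
    """One explicit time step for u_t + a u_x = 0 on a periodic grid.

    Built from the Taylor expansion in time,
        u(t+dt) = u + dt*u_t + (dt^2/2)*u_tt,
    with time derivatives exchanged for space derivatives via
    u_t = -a u_x and u_tt = a^2 u_xx (the Cauchy-Kowalevskaya step),
    then central-differenced.  Stable for Courant number |a*dt/dx| <= 1.
    """
    c = a * dt / dx
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
```

    Advecting a smooth profile once around a periodic domain returns it nearly unchanged, with phase and amplitude errors of second order in dx and dt; the paper's methods push the same one-step construction to much higher order.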

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. Advancements of in-flight mass moment of inertia and structural deflection algorithms for satellite attitude simulators

    NASA Astrophysics Data System (ADS)

    Wright, Jonathan W.

    Experimental satellite attitude simulators have long been used to test and analyze control algorithms in order to drive down risk before implementation on an operational satellite. Ideally, the dynamic response of a terrestrial-based experimental satellite attitude simulator would be similar to that of an on-orbit satellite. Unfortunately, gravitational disturbance torques and poorly characterized moments of inertia introduce uncertainty into the system dynamics, leading to questionable attitude control algorithm experimental results. This research consists of three distinct but related contributions to the field of developing robust satellite attitude simulators. In the first part of this research, existing approaches to estimating mass moments and products of inertia are evaluated, followed by the proposition and evaluation of a new approach that increases both the accuracy and precision of these estimates using typical on-board satellite sensors. Next, in order to better simulate the micro-torque environment of space, a new approach to mass balancing a satellite attitude simulator is presented, experimentally evaluated, and verified. Finally, in the third area of research, we capitalize on the platform improvements to analyze a control moment gyroscope (CMG) singularity avoidance steering law. Several successful experiments were conducted with the CMG array at near-singular configurations. An evaluation process was implemented to verify that the platform remained near the desired test momentum, showing that the first two components of this research were effective in allowing singularity avoidance experiments to be conducted in a representative space-like test environment.

  16. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role in optimizing the considered system, and quite often also in its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often, additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk discusses two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano-optics illustrate these advanced optimization approaches.
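
    The talk itself is not reproduced here, but the core mechanism of forward-mode AD can be shown in a few lines: each value carries its derivative, and the chain rule is applied operation by operation, giving derivatives exact to working precision (unlike finite differences). A minimal sketch:

```python
class Dual:
    """Forward-mode AD value: carries (value, derivative) through each
    arithmetic operation via the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)

    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = self._wrap(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """df/dx at x, exact to working precision: seed the derivative with 1."""
    return f(Dual(x, 1.0)).dot

# f(x) = 3x^2 + 2x  ->  f'(2) = 6*2 + 2 = 14
print(derivative(lambda x: 3 * x * x + 2 * x, 2.0))  # -> 14.0
```

    Production AD tools add reverse mode, higher-order derivatives, and the structure exploitation the abstract mentions, but the chain-rule propagation above is the common core.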

  17. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    The table look-up operation plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses and therefore high table power consumption. To solve the problem of the large table memory access of current methods, and thus reduce their high power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing index search technology to reduce the memory accesses of table look-up and thereby reduce table power consumption. Specifically, in our scheme, index search technology reduces memory access by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with a sequential-search table look-up scheme, thereby saving considerable power consumption for CAVLD in H.264/AVC.
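
    The exact CAVLC tables and index structure are not given in the abstract, so the contrast between sequential and index search can only be sketched with a toy prefix-code table; the codes and symbols below are made up, and real tables come from the H.264/AVC specification:

```python
def count_leading_zeros(bits):
    """Number of leading '0' bits before the first '1' in a bit string."""
    n = 0
    for b in bits:
        if b == '1':
            break
        n += 1
    return n

# Toy variable-length code table: codeword -> symbol (made-up values).
TABLE = {'1': 0, '010': 1, '011': 2, '00100': 3, '00101': 4}

def decode_sequential(bits):
    """Sequential search: try every entry until one matches."""
    for code, sym in TABLE.items():
        if bits.startswith(code):
            return sym, len(code)
    raise ValueError('no matching codeword')

# Index search: the leading-zero count of the prefix selects one small
# bucket directly, so only codewords of the right prefix length are tried.
INDEX = {}
for code, sym in TABLE.items():
    INDEX.setdefault(count_leading_zeros(code), {})[code] = sym

def decode_indexed(bits):
    lz = count_leading_zeros(bits)
    for code, sym in INDEX.get(lz, {}).items():
        if bits.startswith(code):
            return sym, len(code)
    raise ValueError('no matching codeword')
```

    Both decoders return the same (symbol, length) pair, but the indexed one touches only the bucket for the observed zero-run length, which is the kind of reduction in matching operations (and hence memory accesses) the paper exploits.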

  18. Advanced Control Algorithms for Compensating the Phase Distortion Due to Transport Delay in Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delays and causes smaller gain errors at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in the prediction. Although the Sobiski/Cardullo predictor, a state space filter, can theoretically compensate for the longest delay with the least gain distortion of the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. In this manner the predictor can accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline
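
    The report's own compensator designs are not reproduced here, but the baseline lead/lag trade-off it describes, phase lead bought at the price of gain distortion, can be sketched with a first-order lead filter discretized by the bilinear (Tustin) transform; the parameter values are illustrative assumptions:

```python
import numpy as np

def lead_filter_coeffs(a, T, dt):
    """Tustin discretization of the lead compensator
        H(s) = (1 + a*T*s) / (1 + T*s),  a > 1,
    which supplies phase lead around 1/T rad/s.  Returns ((b0, b1), a1)
    for the difference equation y[k] = b0*u[k] + b1*u[k-1] - a1*y[k-1].
    """
    k = 2.0 / dt                      # bilinear transform: s -> k*(z-1)/(z+1)
    a0 = 1.0 + T * k
    return ((1.0 + a * T * k) / a0, (1.0 - a * T * k) / a0), (1.0 - T * k) / a0

def apply_filter(u, coeffs):
    """Run the difference equation over an input sequence."""
    (b0, b1), a1 = coeffs
    y = np.zeros(len(u))
    for k in range(len(u)):
        y[k] = b0 * u[k]
        if k > 0:
            y[k] += b1 * u[k - 1] - a1 * y[k - 1]
    return y
```

    With a > 1 the filter advances phase near 1/T rad/s, but its high-frequency gain rises toward a while the DC gain stays at unity; that rising gain is exactly the distortion the report attributes to the lead/lag approach.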

  19. Final Progress Report: Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

    SciTech Connect

    Fox-Rabinovitz, M; Cote, J

    2009-06-05

    The joint U.S.-Canadian project has been devoted to: (a) decadal climate studies using state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; and (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched-grid approach is that it preserves the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment of the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phases 1 and 2), based on these research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for the regional climate modeling and broader modeling communities. The U.S. SGMIP simulations have been produced using SciDAC ORNL supercomputers. Collaborations with the international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. This project reflects a trend in the modeling and broader communities to move towards regional and sub-regional assessments and applications important for U.S. and Canadian public, business and policy decision makers, as well as for international collaborations on regional, and especially climate-related, issues.

  20. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. The SPECT technique is not, however, limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical, and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image depends largely on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained with the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization Algorithm.
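
    The expectation-maximization side of the comparison is, in its standard form, the multiplicative MLEM update. The sketch below is a generic textbook illustration under an assumed toy system matrix, not the authors' reconstruction code:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-likelihood expectation-maximization (MLEM) reconstruction.

    A : (m, n) system matrix mapping image voxels to detector bins
    y : (m,) measured counts
    Returns an (n,) nonnegative image estimate.
    """
    x = np.ones(A.shape[1])               # uniform, strictly positive start
    sens = A.sum(axis=0)                  # sensitivity: backprojection of ones
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

    Because the update is multiplicative from a positive start, nonnegativity of the image is preserved automatically, which is one reason MLEM is popular for emission data.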

  1. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. The SPECT technique is not, however, limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical, and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image depends largely on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained with the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization Algorithm.

  2. Probabilistic structural analysis algorithm development for computational efficiency

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient but approximate in nature. Over the last six years, these algorithms have been improved significantly.
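
    The brute-force baseline that such efficient-but-approximate probability algorithms aim to beat is plain Monte Carlo estimation of a failure probability. A generic sketch follows; the limit-state function here is an assumed toy, not a NESSUS model:

```python
import numpy as np

def mc_failure_probability(limit_state, n_samples=200_000, seed=0):
    """Estimate P(g(X) < 0) for standard normal inputs X by Monte Carlo."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, 2))
    return np.mean(limit_state(x) < 0.0)

# toy limit state: failure when x0 + x1 exceeds 3, i.e. g = 3 - x0 - x1 < 0
p_f = mc_failure_probability(lambda x: 3.0 - x[:, 0] - x[:, 1])
```

    The sample count needed grows as the failure probability shrinks, which is exactly why approximate methods (e.g. first-order reliability approaches) trade some accuracy for efficiency.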

  3. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code implements new online anomaly detection techniques that take into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.

  4. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code implements new online anomaly detection techniques that take into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
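
    Step (a) of the approach, reducing a multi-dimensional sequence to a univariate change series via SVD of successive windows, can be sketched as follows. This is a minimal illustration of the idea (one assumed scoring choice: the angle between dominant singular vectors), not the GAEDA implementation:

```python
import numpy as np

def window_change_series(X, win=20, step=20):
    """Reduce a multivariate stream X of shape (t, d) to a univariate
    change score: for each pair of successive windows, the score is
    1 - |cos angle| between their dominant right singular vectors,
    so a shift in the cross-channel structure shows up as a spike."""
    scores, prev_v = [], None
    for start in range(0, X.shape[0] - win + 1, step):
        W = X[start:start + win]
        # dominant right singular vector summarizes the window's structure
        _, _, vt = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)
        v = vt[0]
        if prev_v is not None:
            scores.append(1.0 - abs(prev_v @ v))
        prev_v = v
    return np.array(scores)
```

    Any standard univariate anomaly detector (step (b)) can then be run on the resulting score series.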

  5. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum-Liu-Tesche equation

    NASA Astrophysics Data System (ADS)

    Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

    2016-10-01

    The transient response has a great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, and then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method was then used to satisfy the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used with slight modifications to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current was approximately 10 A, so protective measures should be taken to improve the electromagnetic compatibility.
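
    The frequency-domain/FFT route used for comparison amounts to multiplying the excitation spectrum by the network's frequency response and transforming back. A minimal sketch; the response array H here is an assumed placeholder, not the paper's SDP model:

```python
import numpy as np

def transient_via_fft(H, excitation):
    """Frequency-domain/FFT method: multiply the excitation spectrum by
    the frequency response H (sampled at the rfft bins), then invert
    back to the time domain."""
    spectrum = np.fft.rfft(excitation) * H
    return np.fft.irfft(spectrum, n=len(excitation))
```

    The time-domain BLT formulation avoids this round trip, which is what lets the error be controlled step by step during the transient computation.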

  6. Comment on "Symplectic integration of magnetic systems": A proof that the Boris algorithm is not variational

    NASA Astrophysics Data System (ADS)

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    The Boris algorithm is a popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1-4]. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1,5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6-11].
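
    For reference, the Boris time advance discussed here is the standard half-kick/rotate/half-kick update. This sketch is the textbook scheme, not the comment authors' code; note how the magnetic rotation preserves the speed exactly, which underlies the algorithm's excellent long-term behavior even though (as the comment proves) it is not variational:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step for dv/dt = (q/m)(E + v x B):
    half electric kick, exact-norm magnetic rotation, half electric
    kick, then a position drift."""
    v_minus = v + 0.5 * q_m * dt * E           # first half electric kick
    t = 0.5 * q_m * dt * B                     # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)   # rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)    # rotation, step 2
    v_new = v_plus + 0.5 * q_m * dt * E        # second half electric kick
    return x + dt * v_new, v_new
```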

  7. IMPROVED GROUND TRUTH IN SOUTHERN ASIA USING IN-COUNTRY DATA, ANALYST WAVEFORM REVIEW, AND ADVANCED ALGORITHMS

    SciTech Connect

    Engdahl, Eric, R.; Bergman, Eric, A.; Myers, Stephen, C.; Ryall, Floriana

    2009-06-19

    respective errors. This is a significant advance, as outliers and future events with apparently anomalous depths can be readily identified and, if necessary, further investigated. The patterns of reliable focal depth distributions have been interpreted in the context of Middle Eastern active tectonics. Most earthquakes in the Iranian continental lithosphere occur in the upper crust, less than about 25-30 km in depth, with the crustal shortening produced by continental collision apparently accommodated entirely by thickening and distributed deformation rather than by subduction of crust into the mantle. However, intermediate-depth earthquakes associated with subducted slab do occur across the central Caspian Sea and beneath the Makran coast. A multiple-event relocation technique, specialized to use different kinds of near-source data, is used to calibrate the locations of 24 clusters containing 901 events drawn from the seismicity catalog. The absolute locations of these clusters are fixed either by comparing the pattern of relocated earthquakes with mapped fault geometry, by using one or more cluster events that have been accurately located independently by a local seismic network or aftershock deployment, by using InSAR data to determine the rupture zone of shallow earthquakes, or by some combination of these near-source data. This technique removes most of the systematic bias in single-event locations done with regional and teleseismic data, resulting in 624 calibrated events with location uncertainties of 5 km or better at the 90% confidence level (GT590). For 21 clusters (847 events) that are calibrated in both location and origin time we calculate empirical travel times, relative to a standard 1-D travel time model (ak135), and investigate event to station travel-time anomalies as functions of epicentral distance and azimuth. 
Substantial travel-time anomalies are seen in the Iran region which make accurate locations impossible unless observing stations are at very short

  8. A frictional sliding algorithm for liquid droplets

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2016-08-01

    This work presents a new frictional sliding algorithm for liquid menisci in contact with solid substrates. In contrast to solid-solid contact, the liquid-solid contact behavior is governed by the contact line, where a contact angle forms and undergoes hysteresis. The new algorithm admits arbitrary meniscus shapes and arbitrary substrate roughness, heterogeneity and compliance. It is discussed and analyzed in the context of droplet contact, but it also applies to liquid films and solids with surface tension. The droplet is modeled as a stabilized membrane enclosing an incompressible medium. The contact formulation is considered rate-independent such that hydrostatic conditions apply. Three distinct contact algorithms are needed to describe the cases of frictionless surface contact, frictionless line contact and frictional line contact. For the latter, a predictor-corrector algorithm is proposed in order to enforce the contact conditions at the contact line and thus distinguish between the cases of advancing, pinning and receding. The algorithms are discretized within a monolithic finite element formulation. Several numerical examples are presented to illustrate the numerical and physical behavior of sliding droplets.
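
    The advancing/pinning/receding distinction enforced by the predictor-corrector step can be illustrated by its core decision rule. This is a deliberately simplified scalar sketch under assumed advancing and receding limit angles, not the paper's finite element corrector:

```python
def contact_line_update(theta_pred, theta_adv, theta_rec):
    """Corrector logic for a contact line with hysteresis: the line is
    pinned while the predicted contact angle lies in [theta_rec,
    theta_adv]; outside that range it moves and the angle is corrected
    back to the nearest limiting angle."""
    if theta_pred > theta_adv:
        return "advancing", theta_adv
    if theta_pred < theta_rec:
        return "receding", theta_rec
    return "pinned", theta_pred
```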

  9. Numerical simulation of wall-bounded turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Moin, P.

    1982-01-01

    Developments in three dimensional, time dependent numerical simulation of turbulent flows bounded by a wall are reviewed. Both direct and large eddy simulation techniques are considered within the same computational framework. The computational spatial grid requirements as dictated by the known structure of turbulent boundary layers are presented. The numerical methods currently in use are reviewed and some of the features of these algorithms, including spatial differencing and accuracy, time advancement, and data management are discussed. A selection of the results of the recent calculations of turbulent channel flow, including the effects of system rotation and transpiration on the flow are included.

  10. An analysis on changes in reservoir fluid based on numerical simulation of neutron log using a Monte Carlo N-Particle algorithm

    NASA Astrophysics Data System (ADS)

    Ku, B.; Nam, M.

    2012-12-01

    Neutron logging has been widely used to estimate neutron porosity for evaluating formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) algorithm is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone and dolomite formations of various porosities. Since the salinities (concentration of NaCl) of borehole fluid and formation water are important factors for estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration to simulate corresponding neutron logging data. As gas saturation decreases, measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas contains slightly fewer hydrogen atoms than brine water. Meanwhile, an increase in CO2 saturation due to CO2 injection reduces measured neutron porosity, giving a clue to estimating the CO2 saturation, since the injected CO2 substitutes for the brine water. A further analysis of the reduction yields a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help monitor not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). 
Myung Jin Nam was partially supported by the National Research Foundation of Korea(NRF) grant funded by the Korea

  11. Development of a numerical scheme to predict geomagnetic storms after intense solar events and geomagnetic activity 27 days in advance. Final report, 6 Aug 86-16 Nov 90

    SciTech Connect

    Akasofu, S.I.; Lee, L.H.

    1991-02-01

    A modern geomagnetic storm prediction scheme should be based on a numerical simulation method, rather than on statistical results. Furthermore, the scheme should be able to predict geomagnetic storm indices, such as the Dst and AE indices, as a function of time. By recognizing that geomagnetic storms are powered by the solar wind-magnetosphere generator and that its power is given in terms of the solar wind speed and the interplanetary magnetic field (IMF) magnitude and polar angle, the authors have made a major advance in predicting both flare-induced storms and recurrent storms. Furthermore, it is demonstrated that the prediction scheme can be calibrated using interplanetary scintillation (IPS) observations once the solar disturbance has advanced about half-way to the earth. It is shown, however, that we are still far from a reliable prediction scheme. The prediction of the IMF polar angle requires future advances in understanding the characteristics of magnetic clouds.

  12. Comment on “Symplectic integration of magnetic systems”: A proof that the Boris algorithm is not variational

    DOE PAGES

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].

  13. Comment on “Symplectic integration of magnetic systems”: A proof that the Boris algorithm is not variational

    SciTech Connect

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].

  14. Advances and trends in computational structures technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Venneri, S. L.

    1990-01-01

    The major goals of computational structures technology (CST) are outlined, and recent advances in CST are examined. These include computational material modeling, stochastic-based modeling, computational methods for articulated structural dynamics, strategies and numerical algorithms for new computing systems, and multidisciplinary analysis and optimization. The role of CST in the future development of structures technology and the multidisciplinary design of future flight vehicles is addressed, and the future directions of CST research in the prediction of failures of structural components, the solution of large-scale structural problems, and quality assessment and control of numerical simulations are discussed.

  15. Variational Algorithms for Drift and Collisional Guiding Center Dynamics

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2014-10-01

    The simulation of guiding center test particle dynamics in the upcoming generation of magnetic confinement devices requires novel numerical methods to obtain the necessary long-term numerical fidelity. Geometric algorithms, which retain conserved quantities in the numerical time advances, are well-known to exhibit excellent long simulation time behavior. Due to the non-canonical Hamiltonian structure of the guiding center equations of motion, it is only recently that geometric algorithms have been developed for guiding center dynamics. This poster will discuss and compare several families of variational algorithms for application to 3-D guiding center test particle studies, while benchmarking the methods against standard Runge-Kutta techniques. Time-to-solution improvements using GPGPU hardware will be presented. Additionally, collisional dynamics will be incorporated into the structure-preserving guiding center algorithms for the first time. Non-Hamiltonian effects, such as polarization drag and simplified stochastic operators, can be incorporated using a Lagrange-d'Alembert variational principle. The long-time behavior of variational algorithms which include dissipative dynamics will be compared against standard techniques. This work was supported by DOE Contract DE-AC02-09CH11466.

  16. Numerical quadrature for slab geometry transport algorithms

    SciTech Connect

    Hennart, J.P.; Valle, E. del

    1995-12-31

    In recent papers, a generalized nodal finite element formalism has been presented for virtually all known linear finite difference approximations to the discrete ordinates equations in slab geometry. For a particular angular direction {mu}, the neutron flux {Phi} is approximated by a piecewise function {Phi}h, which over each space interval can be polynomial or quasipolynomial. Here we shall restrict ourselves to the polynomial case. Over each space interval, {Phi} is a polynomial of degree k, with interpolating parameters given in the continuous and discontinuous cases, respectively. The angular flux at the left and right ends and the k-th Legendre moment of {Phi} over the cell considered are represented accordingly.
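
    The discrete ordinates angular directions {mu} and their weights are commonly taken from Gauss-Legendre quadrature on [-1, 1]. As a generic illustration of such an angular quadrature set (not the paper's nodal scheme):

```python
import numpy as np

# S_N angular quadrature: Gauss-Legendre nodes mu_n and weights w_n on [-1, 1]
mu, w = np.polynomial.legendre.leggauss(8)

# an n-point rule integrates polynomials up to degree 2n - 1 exactly,
# e.g. the integral of mu^2 over [-1, 1] is 2/3
approx = np.sum(w * mu**2)
```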

  17. The science of and advanced technology for cost-effective manufacture of high precision engineering products. Volume 4. Thermal effects on the accuracy of numerically controlled machine tool

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Barash, M. M.; Liu, C. R.

    1985-10-01

    Thermal effects on the accuracy of numerically controlled machine tools are especially important in the context of unmanned manufacture or under conditions of precision metal cutting. Removal of the operator from direct control of the metal cutting process has created problems in maintaining accuracy. The objective of this research is to study thermal effects on the accuracy of numerically controlled machine tools. The initial part of the report is concerned with the analysis of a hypothetical machine, whose thermal characteristics are studied. Numerical methods for evaluating the errors exhibited by the slides of the machine are proposed, and the possibility of predicting thermally induced errors by the use of regression equations is investigated. A method for computing the workspace error is also presented. The final part is concerned with the actual measurement of errors on a modern CNC machining center, with thermal influences on the errors as the main objective of the experimental work. Thermal influences on the errors of machine tools are predictable. Techniques for determining thermal effects on machine tools at the design stage are also presented. Keywords: Error models and prediction; Metrology; Automation.

  18. Note on symmetric BCJ numerator

    NASA Astrophysics Data System (ADS)

    Fu, Chih-Hao; Du, Yi-Jian; Feng, Bo

    2014-08-01

    We present an algorithm that leads to BCJ numerators satisfying manifestly the three properties proposed by Broedel and Carrasco in [42]. We explicitly calculate the numerators at 4, 5 and 6-points and show that the relabeling property is generically satisfied.

  19. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. This work advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  20. Numerical modeling of late Glacial Laurentide advance of ice across Hudson Strait: Insights into terrestrial and marine geology, mass balance, and calving flux

    USGS Publications Warehouse

    Pfeffer, W.T.; Dyurgerov, M.; Kaplan, M.; Dwyer, J.; Sassolas, C.; Jennings, A.; Raup, B.; Manley, W.

    1997-01-01

    A time-dependent finite element model was used to reconstruct the advance of ice from a late Glacial dome on northern Quebec/Labrador across Hudson Strait to Meta Incognita Peninsula (Baffin Island) and subsequently to the 9.9-9.6 ka 14C Gold Cove position on Hall Peninsula. Terrestrial geological and geophysical information from Quebec and Labrador was used to constrain initial and boundary conditions, and the model results are compared with terrestrial geological information from Baffin Island and considered in the context of the marine event DC-0 and the Younger Dryas cooling. We conclude that advance across Hudson Strait from Ungava Bay to Baffin Island is possible using realistic glacier physics under a variety of reasonable boundary conditions. Production of ice flux from a dome centered on northeastern Quebec and Labrador sufficient to deliver geologically inferred ice thickness at Gold Cove (Hall Peninsula) appears to require extensive penetration of sliding south from Ungava Bay. The discharge of ice into the ocean associated with advance and retreat across Hudson Strait does not peak at a time coincident with the start of the Younger Dryas and is less than minimum values proposed to influence North Atlantic thermohaline circulation; nevertheless, a significant fraction of freshwater input to the North Atlantic may have been provided abruptly and at a critical time by this event.

  1. UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Stoitsov, Mario; Nam, Hai Ah; Nazarewicz, Witold; Bulgac, Aurel; Hagen, Gaute; Kortelainen, E. M.; Pei, Junchen; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S.

    2011-01-01

    The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership class computational resources.

  2. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are discussed, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
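
    The basic concepts mentioned (a population of candidate solutions, fitness-based selection, crossover, and mutation) fit in a few lines. This is a minimal generic sketch with assumed parameter choices, not the project's software tool:

```python
import random

def genetic_maximize(fitness, n_bits=16, pop_size=40, generations=60, seed=1):
    """Minimal binary genetic algorithm: size-2 tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select():
        a, b = rng.sample(pop, 2)                     # size-2 tournament
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < 0.02) for bit in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

    On the classic one-max benchmark (fitness = number of 1 bits), this converges to a nearly all-ones chromosome within a few dozen generations.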

  3. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.

  4. Advanced computing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Advanced concepts in hardware, software and algorithms are being pursued for application in next generation space computers and for ground based analysis of space data. The research program focuses on massively parallel computation and neural networks, as well as optical processing and optical networking which are discussed under photonics. Also included are theoretical programs in neural and nonlinear science, and device development for magnetic and ferroelectric memories.

  5. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  6. Numerical procedures for solving nonsymmetric eigenvalue problems associated with optical resonators.

    PubMed

    Murphy, W D; Bernabe, M L

    1978-08-01

    The Prony method is extended to handle the nonsymmetric algebraic eigenvalue problem and improved to search automatically for the number of dominant eigenvalues. A simple iterative algorithm is given to compute the associated eigenvectors. Resolution studies using the QR method are made in order to determine the accuracy of the matrix approximation. Numerical results are given both for simple, well-defined resonators and for more complex advanced designs containing multiple propagation geometries and misaligned mirrors.

  7. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of a single optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization: one using the nonlinear conjugate gradient method and the other using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of the magnetic and heat fields) is solved using our own code Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and easily incorporate various types of optimization algorithms.

  8. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  9. Interpolation algorithms for machine tools

    SciTech Connect

    Burleson, R.R.

    1981-08-01

    There are three types of interpolation algorithms presently used in most numerical control systems: digital differential analyzer, pulse-rate multiplier, and binary-rate multiplier. A method for higher-order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
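
    Of the three, the digital differential analyzer (DDA) is the simplest to sketch: it steps one unit along the major axis while accumulating a fractional increment along the minor axis. This is the textbook DDA for straight-line interpolation, not code from the report.

```python
def dda_line(x0, y0, x1, y1):
    # Digital differential analyzer: one unit step per iteration along the
    # major axis, with the minor-axis coordinate accumulated fractionally.
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    points, x, y = [], float(x0), float(y0)
    for _ in range(steps + 1):
        points.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return points

# Interpolated tool path from (0, 0) to (7, 3) on an integer grid.
path = dda_line(0, 0, 7, 3)
```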

  10. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, a sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  11. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  12. Numerical Simulations of Wing-Body Junction Flows

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, R.; Cagle, C.; Chandra, S.

    1996-01-01

    The goal of the research project is to contribute to the optimized design of fan bypass systems in advanced turbofan engines such as the Advanced Ducted Propulsors (ADP). The immediate objective is to perform numerical simulation of duct-strut interactions to elucidate the loss mechanisms associated with this configuration that is characteristic of ADP. These numerical simulations would complement an experimental study being undertaken at Purdue University. As the first step in the process, a numerical study of wing-body junction flow is being undertaken as it shares a number of characteristics with the duct-strut interaction flow. The presence of the characteristic horseshoe vortex and the associated secondary flow are the salient features that contribute to making this flow a challenge to predict numerically. The simulations will be performed with the NPARC code on the CRAY Y-MP platform at LeRC. The grids for the simulation have been generated using an algebraic mapping technique with a multisurface algorithm.

  13. Numerical nebulae

    NASA Astrophysics Data System (ADS)

    Rijkhorst, Erik-Jan

    2005-12-01

    The late stages of evolution of stars like our Sun are dominated by several episodes of violent mass loss. Space based observations of the resulting objects, known as Planetary Nebulae, show a bewildering array of highly symmetric shapes. The interplay between gasdynamics and radiative processes determines the morphological outcome of these objects, and numerical models for astrophysical gasdynamics have to incorporate these effects. This thesis presents new numerical techniques for carrying out high-resolution three-dimensional radiation hydrodynamical simulations. Such calculations require parallelization of computer codes, and the use of state-of-the-art supercomputer technology. Numerical models in the context of the shaping of Planetary Nebulae are presented, providing insight into their origin and fate.

  14. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods are adapted to concurrent processing. The algorithm solves a set of coupled partial differential equations by numerical integration. Adapted to run on a hypercube computer, the algorithm separates the problem into smaller problems that are solved concurrently. The increase in computing speed with concurrent processing over that achievable with conventional sequential processing is appreciable, especially for large problems.
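
    The decomposition idea can be sketched serially: split the domain into subdomains that exchange only interface ("ghost") values each step. On a hypercube each subdomain would run on its own node. The example below uses an explicit heat-equation step as a stand-in PDE; it is an illustration of the pattern, not the record's algorithm.

```python
def heat_step(u, left, right, r=0.25):
    # One explicit finite-difference step on a subdomain, given ghost values
    # supplied by the neighboring subdomains (the inter-processor exchange).
    padded = [left] + u + [right]
    return [padded[i] + r * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

# One 1D rod split into two subdomains; each half could be advanced by a
# different node, communicating only its interface value every step.
uL = [1.0] * 8    # left half starts hot
uR = [0.0] * 8    # right half starts cold
for _ in range(50):
    ghost_L, ghost_R = uR[0], uL[-1]      # synchronous interface exchange
    uL = heat_step(uL, 1.0, ghost_L)      # boundary held at 1.0 on the left
    uR = heat_step(uR, ghost_R, 0.0)      # boundary held at 0.0 on the right
```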

  15. Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software

    SciTech Connect

    Jeff Linderoth

    2008-10-10

    The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problems classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.

  16. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
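
    The core idea (a numerical routine that returns an uncertainty alongside its answer) has a very simple instance: Monte Carlo quadrature, which reports a standard error for its estimate. This toy example is only an illustration of the theme; the paper's probabilistic methods are built on Bayesian inference, not plain Monte Carlo.

```python
import math
import random

random.seed(0)

def probabilistic_integrate(f, a, b, n=20000):
    # Monte Carlo quadrature: returns both an estimate of the integral and a
    # standard error quantifying the numerical uncertainty of that estimate.
    xs = [random.uniform(a, b) for _ in range(n)]
    ys = [f(x) for x in xs]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    estimate = (b - a) * mean
    std_error = (b - a) * math.sqrt(var / n)
    return estimate, std_error

# Integral of sin on [0, pi]; the exact value is 2.
est, err = probabilistic_integrate(math.sin, 0.0, math.pi)
```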

  17. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
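
    The three rules named in the record are standard; a compact sketch of each (textbook formulas, not code from the article):

```python
def midpoint(f, a, b, n):
    # Midpoint rule: sample at subinterval centers; error is O(h^2).
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    # Trapezoidal rule: endpoints weighted 1/2; error is O(h^2).
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    # Simpson's rule (n must be even): exact for cubics; error is O(h^4).
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = lambda x: x ** 3    # exact integral on [0, 1] is 0.25
```

    Because Simpson's rule is exact for cubics, it reproduces 0.25 to machine precision here, while the midpoint and trapezoidal rules carry small O(h^2) errors of opposite sign.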

  18. Investigation of an Immunoassay with Broad Specificity to Quinolone Drugs by Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Data Sets and Advanced Quantitative Structure-Activity Relationship Analysis.

    PubMed

    Chen, Jiahong; Lu, Ning; Shen, Xing; Tang, Qiushi; Zhang, Chijian; Xu, Jun; Sun, Yuanming; Huang, Xin-An; Xu, Zhenlin; Lei, Hongtao

    2016-04-01

    A polyclonal antibody against the quinolone drug pazufloxacin (PAZ) but with surprisingly broad specificity was raised to simultaneously detect 24 quinolones (QNs). The developed competitive indirect enzyme-linked immunosorbent assay (ciELISA) exhibited limits of detection (LODs) for the 24 QNs ranging from 0.45 to 15.16 ng/mL, below the maximum residue levels (MRLs). To better understand the obtained broad specificity, a genetic algorithm with linear assignment of hypermolecular alignment of data sets (GALAHAD) was used to generate the desired pharmacophore model and superimpose the QNs, and then advanced comparative molecular field analysis (CoMFA) and advanced comparative molecular similarity indices analysis (CoMSIA) models were employed to study the three-dimensional quantitative structure-activity relationship (3D QSAR) between QNs and the antibody. It was found that the QNs could interact with the antibody with different binding poses, and cross-reactivity was mainly positively correlated with the bulky substructure containing electronegative atom at the 7-position, while it was negatively associated with the large bulky substructure at the 1-position of QNs.

  19. Advanced Numerical Modeling of Turbulent Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Dörnbrack, Andreas; Gerz, Thomas

    The present chapter introduces the method of computational simulation to predict and study turbulent atmospheric flows. This includes a description of the fundamental approach to computational simulation and the practical implementation using the technique of large-eddy simulation. In addition, selected contributions from IPA scientists to computational model development and various examples for applications are given. These examples include homogeneous turbulence, convective boundary layers, heated forest canopy, buoyant thermals, and large-scale flows with baroclinic wave instability.

  20. Uniformly stable backpropagation algorithm to train a feedforward neural network.

    PubMed

    Rubio, José de Jesús; Angelov, Plamen; Pacheco, Jaime

    2011-03-01

    Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment that is being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but this concerns continuous-time NN only. At the same time, there are many systems that are better described in the discrete time domain such as populations of animals, the annual expenses in an industry, the interest earned by a bank, or the prediction of the distribution of loads stored every hour in a warehouse. Therefore, it is of paramount importance to consider the stability of the discrete-time NN. This paper makes several important contributions. 1) A theorem is stated and proven which guarantees uniform stability of a general discrete-time system. 2) It is proven that the backpropagation (BP) algorithm with a new time-varying rate is uniformly stable for online identification and the identification error converges to a small zone bounded by the uncertainty. 3) It is proven that the weights' error is bounded by the initial weights' error, i.e., overfitting is eliminated in the proposed algorithm. 4) The BP algorithm is applied to predict the distribution of loads that a transelevator receives from a trailer and places in the deposits in a warehouse every hour, so that the deposits in the warehouse are reserved in advance using the prediction results. 5) The BP algorithm is compared with the recursive least square (RLS) algorithm and with the Takagi-Sugeno type fuzzy inference system in the problem of predicting the distribution of loads in a warehouse, showing that the first and the second are stable and the third is unstable. 6) The BP algorithm is compared with the RLS algorithm and with the Kalman filter algorithm in a synthetic example.
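
    The flavor of a time-varying learning rate can be shown on the smallest possible "network": a single linear neuron trained online by gradient descent with a decaying rate. The decay schedule and data below are illustrative only; they are not the paper's stability-guaranteeing rate.

```python
import random

random.seed(0)

# Toy online identification task: learn y = 2x + 1 from streaming samples.
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-10, 11)]

w, b = random.random(), random.random()
for k in range(2000):
    eta = 0.5 / (1.0 + 0.01 * k)     # time-varying (decaying) learning rate
    x, y = random.choice(data)       # one sample arrives per step
    y_hat = w * x + b
    err = y_hat - y
    w -= eta * err * x               # backpropagated gradient for the weight
    b -= eta * err                   # ... and for the bias
```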

  1. Recent Advances in Vibroacoustics

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; McNelis, Mark E.

    2002-01-01

    Numerous vibroacoustics advances and impacts in the aerospace industry have occurred over the last 15 years. This article addresses some of these that developed from engineering programmatic task-work at the NASA Glenn Research Center at Lewis Field.

  2. Numerical methods for engine-airframe integration

    SciTech Connect

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  3. Numerical construction of the Hill functions.

    NASA Technical Reports Server (NTRS)

    Segethova, J.

    1972-01-01

    As an aid in the numerical construction of Hill functions and their derivatives, an algorithm using local coordinates and an expansion in Legendre polynomials is proposed. The algorithm is shown to possess sufficient stability, and the orthogonality of the Legendre polynomials simplifies the computation when the Ritz-Galerkin technique is used.
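
    The Legendre-polynomial expansion in the algorithm rests on Bonnet's three-term recurrence, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x), which is numerically stable in the upward direction on [-1, 1]. A minimal sketch of the recurrence (not the Hill-function construction itself):

```python
def legendre(nmax, x):
    # Evaluate P_0(x) .. P_nmax(x) by Bonnet's upward recurrence.
    P = [1.0, x]
    for n in range(1, nmax):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P[:nmax + 1]

vals = legendre(5, 0.5)    # P_0(0.5) .. P_5(0.5)
```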

  4. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  5. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
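
    The standard textbook form of the Boris push (half electric kick, magnetic rotation, half electric kick) can be sketched and checked directly: in a pure magnetic field the rotation step conserves the particle's speed exactly, which is the root of the long-term accuracy discussed above.

```python
def boris_push(x, v, E, B, q_m, dt):
    # One Boris step: half electric kick, exact-magnitude magnetic rotation,
    # half electric kick. Volume-preserving though not symplectic.
    def add(a, b): return [a[i] + b[i] for i in range(3)]
    def scale(a, s): return [c * s for c in a]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    v_minus = add(v, scale(E, 0.5 * q_m * dt))       # half electric kick
    t = scale(B, 0.5 * q_m * dt)                     # rotation vector
    s = scale(t, 2.0 / (1.0 + sum(c * c for c in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))         # magnetic rotation
    v_new = add(v_plus, scale(E, 0.5 * q_m * dt))    # half electric kick
    x_new = add(x, scale(v_new, dt))
    return x_new, v_new

# Pure magnetic field: kinetic energy must be conserved to round-off,
# no matter how many steps are taken.
x, v = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
for _ in range(10000):
    x, v = boris_push(x, v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0, 0.1)
speed2 = sum(c * c for c in v)
```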

  6. Why is Boris Algorithm So Good?

    SciTech Connect

    et al, Hong Qin

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  7. Exploring the use of numerical relativity waveforms in burst analysis of precessing black hole mergers

    SciTech Connect

    Fischetti, Sebastian; Cadonati, Laura; Mohapatra, Satyanarayan R. P.; Healy, James; London, Lionel; Shoemaker, Deirdre

    2011-02-15

    Recent years have witnessed tremendous progress in numerical relativity and an ever improving performance of ground-based interferometric gravitational wave detectors. In preparation for the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO) and a new era in gravitational wave astronomy, the numerical relativity and gravitational wave data analysis communities are collaborating to ascertain the most useful role for numerical relativity waveforms in the detection and characterization of binary black hole coalescences. In this paper, we explore the detectability of equal mass, merging black hole binaries with precessing spins and total mass M{sub T}(set-membership sign)[80,350]M{sub {center_dot}}, using numerical relativity waveforms and templateless search algorithms designed for gravitational wave bursts. In particular, we present a systematic study using waveforms produced by the MayaKranc code that are added to colored, Gaussian noise and analyzed with the Omega burst search algorithm. Detection efficiency is weighed against the orientation of one of the black-hole's spin axes. We find a strong correlation between the detection efficiency and the radiated energy and angular momentum, and that the inclusion of the l=2, m={+-}1, 0 modes, at a minimum, is necessary to account for the full dynamics of precessing systems.

  8. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. I - The dynamics of time discretization and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1991-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.
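
    The phenomenon is easy to reproduce in miniature: explicit Euler applied to the logistic ODE u' = u(1 - u), whose only stable steady state is u = 1, settles onto a spurious period-2 orbit once the time step exceeds the linearized stability limit dt = 2. The asymptote is an artifact of the discretization, not of the ODE.

```python
def euler_logistic(u0, dt, steps):
    # Explicit Euler for u' = u(1 - u); the map is conjugate to the
    # logistic map with parameter r = 1 + dt, so dt > 2 period-doubles.
    u = u0
    for _ in range(steps):
        u = u + dt * u * (1.0 - u)
    return u

# Below the linearized stability limit the scheme finds the true asymptote.
u_good = euler_logistic(0.5, 0.5, 200)
# At dt = 2.3 the iterates lock onto a spurious period-2 orbit: consecutive
# iterates alternate between two values, neither of which solves the ODE.
a = euler_logistic(0.5, 2.3, 400)
b = euler_logistic(0.5, 2.3, 401)
```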

  9. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. Part 1: The ODE connection and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1990-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  10. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  11. Final Progress Report submitted via the DOE Energy Link (E-Link) in June 2009 [Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

    SciTech Connect

    Fox-Rabinovitz, M; Cote, J

    2009-10-09

    The joint U.S-Canadian project has been devoted to: (a) decadal climate studies using developed state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched grid approach is that it allows us to preserve the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment for the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phase-1 and phase-2) based on this research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for regional climate modeling and broader modeling community. The U.S SGMIP simulations have been produced using SciDAC ORNL supercomputers. The results of the successful SGMIP multi-model ensemble simulations of the U.S. climate are available at the SGMIP web site (http://essic.umd.edu/~foxrab/sgmip.html) and through the link to the WMO/WCRP/WGNE web site: http://collaboration.cmc.ec.gc.ca/science/wgne. Collaborations with other international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. 
This project reflects a trend in the modeling and broader communities to move towards regional and sub-regional assessments and

  12. Numerical wave propagation in ImageJ.

    PubMed

    Piedrahita-Quintero, Pablo; Castañeda, Raul; Garcia-Sucerquia, Jorge

    2015-07-20

    An ImageJ plugin for numerical wave propagation is presented. The plugin provides ImageJ, the well-known software for image processing, with the capability of computing numerical wave propagation by the use of angular spectrum, Fresnel, and Fresnel-Bluestein algorithms. The plugin enables numerical wave propagation within the robust environment provided by the complete set of built-in tools for image processing available in ImageJ. The plugin can be used for teaching and research purposes. We illustrate its use to numerically recreate Poisson's spot and Babinet's principle, and in the numerical reconstruction of digitally recorded holograms from millimeter-sized and pure phase microscopic objects.
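
    The angular spectrum method the plugin implements can be sketched in a few lines of Python/NumPy rather than ImageJ/Java: FFT the field, multiply by the free-space transfer function, inverse FFT. The grid size, wavelength, and aperture below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    # Propagate a sampled complex field a distance z: FFT, multiply by the
    # free-space transfer function exp(i kz z), inverse FFT.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A circular aperture in a plane wave, propagated 5 cm at 633 nm.
n, dx, wl = 256, 10e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X ** 2 + Y ** 2 < (0.4e-3) ** 2).astype(complex)
out = angular_spectrum(aperture, wl, dx, 0.05)
```

    Because all sampled spatial frequencies here are propagating, the transfer function is unimodular and the total energy of the field is conserved, which makes a convenient sanity check.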

  13. Efficient Nonlinear Programming Algorithms for Chemical Process Control and Operations

    NASA Astrophysics Data System (ADS)

    Biegler, Lorenz T.

    Optimization is applied in numerous areas of chemical engineering including the development of process models from experimental data, design of process flowsheets and equipment, planning and scheduling of chemical process operations, and the analysis of chemical processes under uncertainty and adverse conditions. These off-line tasks require the solution of nonlinear programs (NLPs) with detailed, large-scale process models. Recently, these tasks have been complemented by time-critical, on-line optimization problems with differential-algebraic equation (DAE) process models that describe process behavior over a wide range of operating conditions, and must be solved sufficiently quickly. This paper describes recent advances in this area especially with dynamic models. We outline large-scale NLP formulations and algorithms as well as NLP sensitivity for on-line applications, and illustrate these advances on a commercial-scale low density polyethylene (LDPE) process.
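
    To give a flavor of the NLP formulations discussed, here is a minimal sketch of a nonlinear program solved with a sequential quadratic programming method. This is my own toy example, not the paper's commercial-scale LDPE model; the cost and production-constraint functions are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical toy problem: minimize the operating cost of two units subject
# to a nonlinear production requirement (a stand-in for a process model).
cost = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2 + 0.5 * x[0] * x[1]
production = NonlinearConstraint(lambda x: x[0] * x[1] + x[0], 4.0, np.inf)

res = minimize(cost, x0=[1.0, 1.0], method="SLSQP",
               constraints=production, bounds=[(0.0, 10.0), (0.0, 10.0)])
```

    Real process-control NLPs have thousands of variables from discretized DAE models and are solved with specialized interior-point or SQP codes; the structure (nonlinear objective, nonlinear constraints, bounds) is the same.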

  14. Seismic-acoustic finite-difference wave propagation algorithm.

    SciTech Connect

    Preston, Leiph; Aldridge, David Franklin

    2010-10-01

    An efficient numerical algorithm for treating earth models composed of fluid and solid portions is obtained via straightforward modifications to a 3D time-domain finite-difference algorithm for simulating isotropic elastic wave propagation.
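
    The essence of a time-domain finite-difference wave algorithm can be shown in one dimension. This is a generic sketch of the second-order acoustic scheme, not the Sandia code; all parameter values are arbitrary, and the fixed (pressure-free) boundaries are a simplification.

```python
import numpy as np

# 1D acoustic pressure wave, second-order centered differences in time and space.
nx, nt = 300, 600
dx, dt, c = 1.0, 0.005, 100.0     # grid spacing [m], time step [s], wave speed [m/s]
assert c * dt / dx <= 1.0          # CFL stability condition for this explicit scheme

p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0                   # initial pressure pulse at the grid center
r2 = (c * dt / dx) ** 2

for _ in range(nt):
    p_next = np.zeros(nx)
    # leapfrog update: p(t+dt) = 2p(t) - p(t-dt) + (c dt/dx)^2 * d2p/dx2
    p_next[1:-1] = 2 * p[1:-1] - p_prev[1:-1] + r2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    p_prev, p = p, p_next
```

    A coupled seismic-acoustic code extends this pattern to 3D, with separate fluid (pressure) and solid (velocity-stress) update stencils matched at the interface.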

  15. Spurious Numerical Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1995-01-01

    Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
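
    The phenomenon the paper studies is easy to reproduce with the simplest possible example: explicit Euler applied to the logistic source term u' = u(1 - u). This sketch is illustrative only and is not taken from the paper; for a too-large step the discrete map settles onto a spurious periodic orbit instead of the true steady state u = 1.

```python
# Explicit Euler on u' = u(1 - u): u_{n+1} = u_n + dt * u_n * (1 - u_n)
def euler_orbit(u0, dt, nsteps):
    u = u0
    for _ in range(nsteps):
        u = u + dt * u * (1.0 - u)
    return u

good = euler_orbit(0.5, 0.1, 1000)   # converges to the true steady state u = 1
bad = euler_orbit(0.5, 2.5, 1000)    # locked onto a spurious periodic orbit
```

    With dt = 2.5 the Euler map is conjugate to a logistic map in its periodic regime, so the numerical "solution" oscillates forever and never reports the correct steady state, exactly the kind of behavior the paper warns can mislead reacting-flow computations.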

  16. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  17. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing-front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  18. Numerical computation of transonic flow governed by the full-potential equation

    NASA Technical Reports Server (NTRS)

    Holst, T. L.

    1983-01-01

    Numerical solution techniques for solving transonic flow fields governed by the full potential equation are discussed. In a general sense relaxation schemes suitable for the numerical solution of elliptic partial differential equations are presented and discussed with emphasis on transonic flow applications. The presentation can be divided into two general categories: An introductory treatment of the basic concepts associated with the numerical solution of elliptic partial differential equations and a more advanced treatment of current procedures used to solve the full potential equation for transonic flow fields. The introductory material is presented for completeness and includes a brief introduction (Chapter 1), governing equations (Chapter 2), classical relaxation schemes (Chapter 3), and early concepts regarding transonic full potential equation algorithms (Chapter 4).
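
    As a concrete instance of the classical relaxation schemes covered in Chapter 3, here is a generic successive over-relaxation (SOR) sweep for the Laplace equation on a square. This is an illustrative sketch, not the full-potential solver; the grid size, boundary data, and relaxation factor are arbitrary.

```python
import numpy as np

def sor_laplace(n=30, omega=1.8, tol=1e-8, max_iter=20000):
    """Solve the discrete Laplace equation on an n-by-n grid by SOR."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                      # Dirichlet data: one hot wall, others cold
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Gauss-Seidel value, then over-relax the update by omega
                new = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1])
                change = omega * (new - u[i, j])
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return u, it
    return u, max_iter

u, iters = sor_laplace()
```

    For the full potential equation the stencil becomes nonlinear and type-dependent (central differencing in subsonic zones, upwind biasing in supersonic zones), but the sweep structure is the same.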

  19. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  20. Advanced Algorithms and Statistics for MOS Surveys

    NASA Astrophysics Data System (ADS)

    Bolton, A. S.

    2016-10-01

    This paper presents an individual view on the current state of computational data processing and statistics for inference and discovery in multi-object spectroscopic surveys, supplemented by a historical perspective and a few present-day applications. It is more op-ed than review, and hopefully more readable as a result.

  1. Advances in time-scale algorithms

    NASA Technical Reports Server (NTRS)

    Stein, S. R.

    1993-01-01

    The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for timekeeping in mission-critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises of how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the U.S. Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
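
    The two-clock thought experiment can be put in numbers. The fractional frequency offsets below are hypothetical values chosen only to make the failure mode visible.

```python
# Two noiseless clocks with equal and opposite fractional frequency offsets.
f1, f2 = +1e-9, -1e-9

def ensemble_time_error(t, offsets):
    """Time error of a simple mean time scale after t seconds."""
    return sum(f * t for f in offsets) / len(offsets)

err_both = ensemble_time_error(86400.0, [f1, f2])  # offsets cancel: zero error
err_one = ensemble_time_error(86400.0, [f1])       # clock 2 failed: scale now drifts
```

    After one day the intact ensemble shows no error, but losing one clock makes the scale inherit the survivor's full 86.4 microsecond/day drift, which is why weighted, failure-tolerant estimators are used instead of simple means.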

  2. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all of them can integrate vector integrands and have nearly identical Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
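
    The role of the error estimate (whose reliability Cuba's chi-square probability guards) can be seen in a bare-bones Monte Carlo sketch. This is generic NumPy code, not the Cuba library; the integrand and sample count are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, dim, n):
    """Plain Monte Carlo estimate of the integral of f over the unit hypercube."""
    x = rng.random((n, dim))
    y = f(x)
    mean = y.mean()                      # integral estimate
    err = y.std(ddof=1) / np.sqrt(n)     # one-sigma statistical error estimate
    return mean, err

# Integral of (x + y + z) over [0,1]^3 is exactly 1.5
val, err = mc_integrate(lambda x: x.sum(axis=1), dim=3, n=200_000)
```

    Vegas and Suave refine this idea with importance sampling; Cuhre uses deterministic cubature rules instead, which is why cross-checking between them is a useful sanity test.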

  3. Numerical and measured data from the 3D salt canopy physical modeling project

    SciTech Connect

    Bradley, C.; House, L.; Fehler, M.; Pearson, J.; TenCate, J.; Wiley, R.

    1997-11-01

    The evolution of salt structures in the Gulf of Mexico has been shown to provide a mechanism for the trapping of significant hydrocarbon reserves. Most of these structures have complex geometries relative to the surrounding sedimentary layers. This aspect, in addition to high velocities within the salt, tends to scatter and defocus seismic energy and makes imaging of subsalt lithology extremely difficult. An ongoing program, the SEG/EAEG modeling project (Aminzadeh et al., 1994a; Aminzadeh et al., 1994b; Aminzadeh et al., 1995), and a follow-up project funded as part of the Advanced Computational Technology Initiative (ACTI) (House et al., 1996) have sought to investigate problems with imaging beneath complex salt structures using numerical modeling and, more recently, construction of a physical model patterned after the numerical subsalt model (Wiley and McKnight, 1996). To date, no direct comparison of the numerical and physical aspects of these models has been attempted. We present the results of forward modeling a numerical realization of the 3D salt canopy physical model with the French Petroleum Institute (IFP) acoustic finite-difference algorithm used in the numerical subsalt tests. We compare the results from the physical salt canopy model, the acoustic modeling of the physical/numerical model, and the original numerical SEG/EAEG Salt Model. We will be testing the sensitivity of migration to the presence of converted shear waves and acquisition geometry.

  4. New formulations of monotonically convergent quantum control algorithms

    NASA Astrophysics Data System (ADS)

    Maday, Yvon; Turinici, Gabriel

    2003-05-01

    Most numerical simulations in quantum (bilinear) control have used one of the monotonically convergent algorithms of Krotov (introduced by Tannor et al.) or of Zhu and Rabitz. However, until now no explicit relationship has been revealed between the two algorithms that would explain their common properties. Within this framework, we propose in this paper a unified formulation that comprises both algorithms and that extends to a new class of monotonically convergent algorithms. Numerical results show that the newly derived algorithms behave as well as (and sometimes better than) the well-known algorithms cited above.

  5. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed with three classes of functions: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used, in which at each iteration there is a sphere around the current point inside which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
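
    The representation above can be sketched directly with a general-purpose SQP solver. This is an illustrative reformulation, not Vardi's algorithm; the three f_i below are invented examples whose minimax point (x = 0, t = 1) is known by symmetry.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize max_i f_i(x) by introducing an auxiliary variable t and imposing
# f_i(x) - t <= 0, i.e. the smooth NLP:  min t  s.t.  t - f_i(x) >= 0.
fs = [lambda x: (x[0] - 1.0) ** 2,
      lambda x: (x[0] + 1.0) ** 2,
      lambda x: x[0] ** 2 + 0.5]

def objective(z):          # z = (x, t); the objective is just t
    return z[1]

cons = [{"type": "ineq", "fun": (lambda z, f=f: z[1] - f(z[:1]))} for f in fs]
res = minimize(objective, x0=[0.5, 5.0], method="SLSQP", constraints=cons)
x_opt, t_opt = res.x[0], res.x[1]
```

    The reformulation turns the nonsmooth max function into a smooth problem at the price of extra constraints, which is exactly where an active set strategy (distinguishing the constraints that attain the max from those that do not) becomes important.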

  6. Numerical simulation of direct-contact evaporation of a drop rising in a hot, less volatile immiscible liquid of higher density -- Possibilities and limits of the SOLA-VOF/CSF algorithm

    SciTech Connect

    Wohak, M.G.; Beer, H.

    1998-05-08

    A contribution toward the full numerical simulation of direct-contact evaporation of a drop rising in a hot, immiscible and less volatile liquid of higher density is presented. Based on a fixed-grid Eulerian description, the classical SOLA-VOF method is substantially extended to incorporate, for example, three incompressible fluids and liquid-vapor phase change. The thorough validation and assessment process covers several benchmark simulations, some of which are presented, documenting the multipurpose value of the new code. The direct-contact evaporation simulations reveal severe numerical problems that are closely related to the fixed-grid Euler formulation. As a consequence, the comparison to experiments has to be limited to the initial stage. Potential applications using several design variations can be found in waste heat recovery and reactor cooling. Furthermore, direct-contact evaporators may be used in geothermal power plants where the brines cannot be directly fed into a turbine, either because a high salt load causes severe fouling and corrosion or because of a low steam fraction.

  7. Derivative Free Gradient Projection Algorithms for Rotation

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2004-01-01

    A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
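
    The abstract's point is that the analytic criterion gradient can be replaced by a numerical one. A central-difference sketch of that idea (generic code, not Jennrich's; the quartimax-style criterion is just an example whose exact gradient is known):

```python
import numpy as np

def numerical_gradient(f, X, h=1e-6):
    """Central-difference approximation to the gradient of f at matrix X."""
    G = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        E = np.zeros_like(X)
        E[idx] = h
        G[idx] = (f(X + E) - f(X - E)) / (2.0 * h)
    return G

# Example criterion: Q(L) = -sum(L**4)/4, whose exact gradient is -L**3
L = np.array([[0.8, 0.2],
              [0.5, 0.6]])
G = numerical_gradient(lambda M: -(M ** 4).sum() / 4.0, L)
```

    In a GP rotation loop this numerical gradient simply replaces the hand-derived one, so a new rotation criterion only requires coding its value function.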

  8. Numerical algebraic geometry: a new perspective on gauge and string theories

    NASA Astrophysics Data System (ADS)

    Mehta, Dhagash; He, Yang-Hui; Hauenstein, Jonathan D.

    2012-07-01

    There is a rich interplay between algebraic geometry and string and gauge theories which has recently been aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called 'embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.

  9. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.

  10. Transonic rotor tip design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.; Langhi, Ronald G.

    1985-01-01

    The aerodynamic design procedure for a new blade tip suitable for operation at transonic speeds is illustrated. For the first time, three-dimensional numerical optimization was applied to rotor tip design, using the recent derivative of the ROT22 code, program R22OPT. Program R22OPT utilized an efficient quasi-Newton optimization algorithm. Multiple design objectives were specified. The delocalization of the shock wave was to be eliminated in forward flight for an advance ratio of 0.41 and a tip Mach number of 0.92 at psi = 90 deg. Simultaneously, it was sought to reduce torque requirements while maintaining effective restoring pitching moments. Only the outer 10 percent of the blade span was modified; the blade area was not to be reduced by more than 3 percent. The goal was to combine the advantages of both sweptback and sweptforward blade tips. A planform that featured inboard sweepback was combined with a sweptforward tip and a taper ratio of 0.5. Initially, the ROT22 code was used to find, by trial and error, a planform geometry which met the design goals. This configuration had an inboard section with a leading-edge sweep of 20 deg and a tip section swept forward at 25 deg; in addition, the airfoils were modified.

  11. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e., the solution of global problems is obtained by resolution of local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the advances in applying this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References: [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina, "Non-overlapping discretization methods for partial differential equations," Numer Meth Part D E (2013). [2] I. Herrera and I. Contreras, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity," Geofísica Internacional, 2015 (in press).

  12. Numerical simulation of evaporating liquid jet in crossflow

    NASA Astrophysics Data System (ADS)

    Soteriou, Marios; Li, Xiaoyi

    2014-11-01

    Atomization of liquid fuel jets by cross-flowing air is critical to combustor performance. The ability to experimentally probe the fundamentals of this multiscale, two-phase flow has been hampered by limitations in experimental techniques and the challenges posed by operating conditions. Direct numerical simulation has recently emerged as a promising alternative due to advances in computer hardware and numerical methods. Using this approach, we recently demonstrated the ability to reproduce the physics of atomization of a liquid jet in cross-flow (LJIC) under ambient conditions. In this work we consider this flow in a high-temperature environment; the inclusion of evaporation is the major new element. The numerical approach employs the CLSVOF method to capture the liquid-gas interface. Interface evaporation is solved directly with proper treatment of interface conditions, reproducing the relevant species and temperature fields there. A Lagrangian droplet tracking approach is used for the small droplets, which are transferred from the Eulerian phase and evaporate according to a traditional d² law model. Other key algorithms of the massively parallelized solver include a ghost fluid method, a multigrid-preconditioned conjugate gradient approach, and an adaptive mesh refinement technique. The overall method is verified using canonical problems. Simulations of evaporating LJIC point to the significant effect that evaporation has on the evolution of this flow and elucidate the downstream fuel species patterns.
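
    The d² law used for the Lagrangian droplets has a simple closed form: the droplet's squared diameter decreases linearly in time. This is the textbook relation, not the authors' solver; the evaporation constant below is a hypothetical placeholder value.

```python
def droplet_diameter(d0, K, t):
    """d(t)^2 = d0^2 - K*t; returns 0 once the droplet has fully evaporated."""
    d2 = d0 ** 2 - K * t
    return d2 ** 0.5 if d2 > 0.0 else 0.0

d0 = 50e-6             # initial droplet diameter [m]
K = 2.5e-7             # evaporation-rate constant [m^2/s], hypothetical value
t_life = d0 ** 2 / K   # droplet lifetime implied by the law
```

    In a coupled simulation, K is computed per droplet from local gas temperature and vapor concentration, and the evaporated mass and heat are fed back to the Eulerian gas phase as source terms.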

  13. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  14. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes, where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
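
    One well-known way to evaluate a finite-time quadratic index without diagonalizing the closed-loop matrix is Van Loan's block matrix-exponential construction (and modern matrix-exponential routines are themselves Padé-based). This generic sketch illustrates that ingredient only; it is not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import expm

def finite_time_gramian(A, Q, T):
    """Evaluate J = integral_0^T expm(A.T*s) @ Q @ expm(A*s) ds via one expm call.

    Van Loan's construction: for M = [[-A.T, Q], [0, A]],
    expm(M*T) = [[F2, G2], [0, F3]] with F3 = expm(A*T) and J = F3.T @ G2.
    """
    n = A.shape[0]
    M = np.block([[-A.T, Q], [np.zeros((n, n)), A]])
    E = expm(M * T)
    F3 = E[n:, n:]
    G2 = E[:n, n:]
    return F3.T @ G2

# Works even for a defective (non-diagonalizable) closed-loop matrix,
# e.g. this Jordan block:
A = np.array([[-1.0, 1.0],
              [0.0, -1.0]])
Q = np.eye(2)
W = finite_time_gramian(A, Q, 2.0)
```

    A quick consistency check: differentiating the integral shows W must satisfy A'W + WA = expm(A'T) Q expm(AT) - Q, which makes the routine easy to verify numerically.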

  15. A Comparison of Two Skip Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Putnam, Zachary R.

    2007-01-01

    The Orion capsule vehicle will have a lift-to-drag ratio (L/D) of 0.3-0.35. For an Apollo-like direct entry into the Earth's atmosphere from a lunar return trajectory, this L/D will give the vehicle a maximum range of about 2500 nm and a maximum crossrange of 216 nm. In order to fly longer ranges, the vehicle lift must be used to loft the trajectory such that the aerodynamic forces are decreased. A skip trajectory results if the vehicle leaves the sensible atmosphere and a second entry occurs downrange of the atmospheric exit point. The Orion capsule is required to have landing site access (either on land or in water) inside the Continental United States (CONUS) for lunar returns anytime during the lunar month. This requirement means the vehicle must be capable of flying ranges of at least 5500 nm. For the L/D of the vehicle, this is only possible with the use of a guided skip trajectory. A skip entry guidance algorithm is necessary to achieve this requirement. Two skip entry guidance algorithms have been developed: the Numerical Skip Entry Guidance (NSEG) algorithm was developed at NASA/JSC and PredGuid was developed at Draper Laboratory. A comparison of these two algorithms will be presented in this paper. Each algorithm has been implemented in a high-fidelity, 6-degree-of-freedom simulation called the Advanced NASA Technology Architecture for Exploration Studies (ANTARES). NASA and Draper engineers have completed several Monte Carlo analyses in order to compare the performance of each algorithm in various stress states. Each algorithm has been tested for entry-to-target ranges to include direct entries and skip entries of varying length. Dispersions have been included on the initial entry interface state, vehicle mass properties, vehicle aerodynamics, atmosphere, and Reaction Control System (RCS). Performance criteria include miss distance to the target, RCS fuel usage, maximum g-loads and heat rates for the first and second entry, total heat load, and control

  16. Numerical simulation of heat exchanger

    SciTech Connect

    Sha, W.T.

    1985-01-01

    Accurate and detailed knowledge of the fluid flow field and thermal distribution inside a heat exchanger becomes invaluable as a large, efficient, and reliable unit is sought. This information is needed to provide proper evaluation of the thermal and structural performance characteristics of a heat exchanger. It is to be noted that an analytical prediction method, when properly validated, will greatly reduce the need for model testing, facilitate interpolating and extrapolating test data, aid in optimizing heat-exchanger design and performance, and provide scaling capability. Thus tremendous savings of cost and time are realized. With the advent of large digital computers and advances in the development of computational fluid mechanics, it has become possible to predict analytically, through numerical solution, the conservation equations of mass, momentum, and energy for both the shellside and tubeside fluids. The numerical modeling technique will be a valuable, cost-effective design tool for development of advanced heat exchangers.

  17. Rapid implementation of advanced constitutive models

    NASA Astrophysics Data System (ADS)

    Starman, Bojan; Halilovič, Miroslav; Vrh, Marko; Štok, Boris

    2013-12-01

    This paper presents a methodology based on the NICE integration scheme [1, 2] for simple and rapid numerical implementation of a class of plasticity constitutive models. In this regard, an algorithm is purposely developed for the implementation of newly developed advanced constitutive models into an explicit finite element framework. The methodology organizes the problem's state variables into an extended form, which allows the constitutive models' equations to be arranged in such a way that the algorithm can be extended, with minimal effort, to also integrate evolution equations describing other specific phenomena, such as damage, distortional hardening, phase transitions, degradation, etc. To confirm the simplicity of the program implementation, and the computational robustness, effectiveness, and improved accuracy of the implemented integration algorithm, a deep drawing simulation of a cylindrical cup, performed in ABAQUS/Explicit, is considered as the case study. As a fairly complex model, the YLD2004-18p model [3, 4] is first implemented via the external subroutine VUMAT. Further, to give additional proof of the simplicity of the proposed methodology, a combination of the YLD2004-18p model and the Gurson-Tvergaard-Needleman (GTN) model is considered. As demonstrated, the implementation is obtained in a very simple way.

  18. Numerical taxonomy on data: Experimental results

    SciTech Connect

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. The first positive result for numerical taxonomy showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T L∞(T - D), then it is possible to construct a tree T such that L∞(T - D) <= 3e; that is, it gave a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  19. Propagation of numerical noise in particle-in-cell tracking

    NASA Astrophysics Data System (ADS)

    Kesting, Frederik; Franchetti, Giuliano

    2015-11-01

    Particle-in-cell (PIC) is the most widely used algorithm for performing self-consistent tracking of intense charged particle beams. It is based on depositing macroparticles on a grid and subsequently solving the Poisson equation on that grid. It is well known that PIC algorithms have intrinsic limitations, as they introduce numerical noise. Although not significant for short-term tracking, this becomes important in simulations of circular machines over millions of turns, as it may induce artificial diffusion of the beam. In this work, we present a model of the numerical noise induced by PIC algorithms and discuss its influence on particle dynamics. The combined effect of particle tracking and the noise created by PIC algorithms leads to correlated or decorrelated numerical noise. For decorrelated numerical noise we derive a scaling law for the simulation parameters, allowing an estimate of the artificial emittance growth. Lastly, the effect of correlated numerical noise is discussed, and a mitigation strategy is proposed.
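
    The grid noise in question is easy to exhibit in one dimension. The sketch below (not taken from the paper; grid size and particle counts are arbitrary) deposits a uniform beam with cloud-in-cell weighting and shows that the density fluctuation about the uniform value shrinks as the macroparticle number grows.

```python
import random

# Hedged illustration: cloud-in-cell (CIC) charge deposition in 1-D,
# showing the statistical grid noise that PIC introduces.  A uniform
# beam sampled with finitely many macroparticles yields a density that
# fluctuates about 1; the fluctuation shrinks as N grows.

def deposit(positions, n_cells):
    """Linear-weighting (CIC) deposition on a periodic unit grid."""
    rho = [0.0] * n_cells
    dx = 1.0 / n_cells
    for x in positions:
        j = int(x / dx) % n_cells            # left grid point
        w = x / dx - j                       # fractional distance to it
        rho[j] += 1.0 - w
        rho[(j + 1) % n_cells] += w
    return [r / (len(positions) * dx) for r in rho]   # mean density = 1

random.seed(0)
n_cells = 16

def grid_noise(n_particles):
    rho = deposit([random.random() for _ in range(n_particles)], n_cells)
    return max(abs(r - 1.0) for r in rho)

small, large = grid_noise(1_000), grid_noise(100_000)
print(small, large)   # noise level drops as the macroparticle number grows
```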

  20. Numerical Methods for Forward and Inverse Problems in Discontinuous Media

    SciTech Connect

    Chartier, Timothy P.

    2011-03-08

    The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based in strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results of this work also point to medical advances made possible by software that may be developed. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for the conjugate gradient method. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact for researchers in a wide variety of applications, including astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.
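
    The multigrid principle underlying this work can be shown in its simplest geometric (not algebraic) form. The sketch below is a two-grid correction scheme for -u'' = f on [0,1]; the smoother, sweep counts and the heavy-smoothing "coarse solve" are all illustrative choices, not the grant's methods.

```python
# Hedged illustration: a geometric two-grid cycle for -u'' = f on [0,1]
# with u(0) = u(1) = 0.  Pre-smooth, restrict the residual, correct on
# the coarse grid, prolong, post-smooth.

def weighted_jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Damped Jacobi smoothing for the 1-D Poisson stencil."""
    n = len(u)
    for _ in range(sweeps):
        u = [u[i] if i in (0, n - 1) else
             (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
             for i in range(n)]
    return u

def residual(u, f, h):
    n = len(u)
    return [0.0 if i in (0, n - 1) else
            f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
            for i in range(n)]

def two_grid(u, f, h):
    u = weighted_jacobi(u, f, h, 3)                   # pre-smooth
    r = residual(u, f, h)
    # restrict residual to the coarse grid (full weighting)
    rc = [0.0] + [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
                  for i in range(1, len(u)//2)] + [0.0]
    # "solve" the coarse error equation by heavy smoothing (toy choice)
    ec = weighted_jacobi([0.0]*len(rc), rc, 2*h, 2000)
    e = [0.0] * len(u)                                # prolong: linear interp.
    for i in range(1, len(ec) - 1):
        e[2*i] += ec[i]
        e[2*i - 1] += 0.5 * ec[i]
        e[2*i + 1] += 0.5 * ec[i]
    u = [ui + ei for ui, ei in zip(u, e)]
    return weighted_jacobi(u, f, h, 3)                # post-smooth

N = 16
h = 1.0 / N
f = [1.0] * (N + 1)
u = [0.0] * (N + 1)
for _ in range(10):
    u = two_grid(u, f, h)
exact = [i*h*(1 - i*h)/2 for i in range(N + 1)]       # solution of -u'' = 1
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)   # far below the initial error of 0.125
```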

  1. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.

  2. Numerical vorticity creation based on impulse conservation.

    PubMed Central

    Summers, D M; Chorin, A J

    1996-01-01

    The problem of creating solenoidal vortex elements to satisfy no-slip boundary conditions in Lagrangian numerical vortex methods is solved through the use of impulse elements at walls and their subsequent conversion to vortex loops. The algorithm is not uniquely defined, due to the gauge freedom in the definition of impulse; the numerically optimal choice of gauge remains to be determined. Two different choices are discussed, and an application to flow past a sphere is sketched. PMID:11607636

  3. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.

  4. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  5. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  6. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    SciTech Connect

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  7. A robust algorithm for the contact of viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Spinu, S.; Cerlinca, D.

    2016-08-01

    Existing solutions for the contact problem involving viscoelastic materials often require numerical differentiation and integration, as well as the resolution of transcendental equations, which can raise convergence issues. The algorithm advanced in this paper can tackle the contact behaviour of viscoelastic materials without any convergence problems, for arbitrary contact geometry, arbitrary loading programs and complex constitutive models of linear viscoelasticity. An updated algorithm for the elastic frictionless contact, coupled with a semi-analytical method for the computation of viscoelastic displacement, is employed to solve the viscoelastic contact problem at a series of small time increments. The number of equations in the linear system resulting from the geometrical condition of deformation is set by the number of cells in the contact area, which is a priori unknown. A trial-and-error approach is implemented, resulting in a series of linear systems which are solved on evolving contact areas until the static equilibrium equations and complementarity conditions are fully satisfied for every cell in the computational domain. At any iteration, cells with negative pressure are excluded from the contact area, while cells with negative gap (i.e. cells where the contacting bodies are predicted to overlap) are reincluded. The solution is found when the pressure is stabilized in relation to the imposed normal load. This robust algorithm is expected to solve a large variety of contact problems involving viscoelastic materials.
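
    The trial-and-error (active-set) loop described above can be sketched on a much simpler surrogate. The code below is not the paper's semi-analytical method: it presses a rigid indenter into a bed of independent springs (a Winkler-type foundation, with invented stiffness, gaps and load), but it exercises the same logic: cells with negative pressure leave the contact set, cells with negative gap re-enter, and iteration stops when equilibrium and complementarity hold.

```python
# Hedged active-set sketch: rigid indenter on independent springs.
# Cells with tensile (negative) pressure are excluded from the trial
# contact area; cells that would overlap are re-included.

k = 1.0                                    # spring stiffness per cell
gaps = [0.0, 0.1, 0.3, 0.6, 1.0]           # initial gap of each cell
load = 1.2                                 # imposed total normal load

active = set(range(len(gaps)))             # trial contact area: all cells
while True:
    # rigid-body approach delta chosen so active pressures carry the load:
    #   sum over active i of k * (delta - gaps[i]) = load
    delta = (load / k + sum(gaps[i] for i in active)) / len(active)
    p = [k * (delta - gaps[i]) if i in active else 0.0
         for i in range(len(gaps))]
    out = {i for i in active if p[i] < 0.0}            # tensile -> exclude
    back = {i for i in range(len(gaps))                # overlap -> include
            if i not in active and delta - gaps[i] > 0.0}
    if not out and not back:                           # complementarity holds
        break
    active = (active - out) | back

print(sorted(active), [round(v, 3) for v in p])
```

    With these numbers the loop sheds the two widest-gap cells in two iterations and the remaining pressures sum exactly to the imposed load.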

  8. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Conners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  9. Integration of artificial intelligence and numerical optimization techniques for the design of complex aerospace systems

    SciTech Connect

    Tong, S.S.; Powell, D.; Goel, S. (GE Consulting Services, Albany, NY)

    1992-02-01

    A new software system called Engineous combines artificial intelligence and numerical methods for the design and optimization of complex aerospace systems. Engineous combines the advanced computational techniques of genetic algorithms, expert systems, and object-oriented programming with the conventional methods of numerical optimization and simulated annealing to create a design optimization environment that can be applied to computational models in various disciplines. Engineous has produced designs with higher predicted performance gains than current manual design processes - on average a 10-to-1 reduction of turnaround time - and has yielded new insights into product design. It has been applied to the aerodynamic preliminary design of an aircraft engine turbine, the concurrent aerodynamic and mechanical preliminary design of an aircraft engine turbine blade and disk, a space superconductor generator, a satellite power converter, and a nuclear-powered satellite reactor and shield. 23 refs.

  10. Advanced rotorcraft control using parameter optimization

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to control law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Pade series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  11. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  12. On solving the 3-D phase field equations by employing a parallel-adaptive mesh refinement (Para-AMR) algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Xiong, S. M.

    2015-05-01

    An algorithm comprising adaptive mesh refinement (AMR) and parallel (Para-) computing capabilities was developed to efficiently solve the coupled phase field equations in 3-D. The AMR was achieved based on a gradient criterion and the point clustering algorithm introduced by Berger (1991). To reduce the time for mesh generation, a dynamic regridding approach was developed based on the magnitude of the maximum phase advancing velocity. Local data at each computing process were then constructed, and parallel computation was realized based on the hierarchical grid structure created during the AMR. Numerical tests and simulations of single and multi-dendrite growth were performed, and the results show that the proposed algorithm could shorten the computing time for a 3-D phase field simulation by about two orders of magnitude and enable one to gain much more insight into the underlying physics of dendrite growth during solidification.
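
    The gradient-based refinement criterion can be shown in one dimension. The sketch below omits the Berger point clustering and all parallel machinery; it merely flags cells whose local gradient exceeds a threshold, which concentrates refinement near a steep (phase-field-like) front. Profile, grid size and threshold are invented for illustration.

```python
import math

# Hedged 1-D illustration of a gradient refinement criterion: cells
# where |du/dx| (central difference) exceeds a threshold are flagged.

def phi(x):                      # a phase-field-like profile with a front
    return 0.5 * (1.0 + math.tanh((x - 0.5) / 0.02))

n = 64
xs = [i / n for i in range(n + 1)]
u = [phi(x) for x in xs]

threshold = 1.0                  # flag where the gradient magnitude exceeds 1
flags = [abs((u[i+1] - u[i-1]) / (xs[i+1] - xs[i-1])) > threshold
         for i in range(1, n)]

flagged = [xs[j + 1] for j, fl in enumerate(flags) if fl]
print(len(flagged), min(flagged), max(flagged))   # a narrow band near x = 0.5
```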

  13. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
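
    For the linear case the abstract mentions, the multiplicative Schwarz method has a very compact 1-D form. The sketch below (subdomain splits and sweep count chosen arbitrarily) alternates exact solves on two overlapping subdomains of -u'' = 1; the additive variant would compute all subdomain corrections from the same residual, trading some convergence speed for parallelism.

```python
# Hedged 1-D multiplicative (alternating) Schwarz sketch for
# -u'' = 1 on [0,1], u(0) = u(1) = 0, two overlapping subdomains.

def thomas(a, b, c, d):
    """Tridiagonal solve; a: sub-, b: main, c: super-diagonal, d: rhs."""
    n = len(d)
    c2, d2 = [0.0]*n, [0.0]*n
    c2[0], d2[0] = c[0]/b[0], d[0]/b[0]
    for i in range(1, n):
        m = b[i] - a[i]*c2[i-1]
        c2[i] = c[i]/m
        d2[i] = (d[i] - a[i]*d2[i-1])/m
    x = [0.0]*n
    x[-1] = d2[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d2[i] - c2[i]*x[i+1]
    return x

n = 31                                   # interior points, h = 1/32
h = 1.0 / (n + 1)
f = [1.0]*n

def apply_A(u):                          # A = tridiag(-1, 2, -1) / h^2
    return [(2*u[i] - (u[i-1] if i > 0 else 0.0)
                    - (u[i+1] if i < n - 1 else 0.0)) / (h*h)
            for i in range(n)]

subdomains = [(0, 20), (12, 31)]         # index ranges with generous overlap
u = [0.0]*n
for _ in range(25):
    for lo, hi in subdomains:            # multiplicative: fresh residual each
        r = [fi - ai for fi, ai in zip(f, apply_A(u))]
        m = hi - lo
        e = thomas([-1/(h*h)]*m, [2/(h*h)]*m, [-1/(h*h)]*m, r[lo:hi])
        for i in range(lo, hi):
            u[i] += e[i - lo]

exact = [(i+1)*h * (1 - (i+1)*h) / 2 for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)   # tiny: alternating Schwarz converges geometrically here
```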

  14. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
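
    The greedy template that OMP and LPMP share is short enough to sketch. The code below is plain (non-orthogonal) matching pursuit on a tiny hand-made dictionary, not the LPMP algorithm: OMP would additionally re-fit all selected atoms at each step, and LPMP would match parametric Lorentzian atoms instead of fixed dictionary columns.

```python
# Hedged sketch of the matching-pursuit template: repeatedly select the
# atom most correlated with the residual and subtract its contribution.

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, n_iter):
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        k = max(range(len(atoms)),
                key=lambda j: abs(dot(residual, atoms[j])))
        c = dot(residual, atoms[k])            # atoms assumed unit-norm
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c*a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

s2 = 2**-0.5
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
         [s2, s2, 0, 0]]                       # overcomplete: 5 atoms in R^4
signal = [2.0, 0.0, 1.0, 0.0]                  # 2-sparse in atoms 0 and 2
coeffs, residual = matching_pursuit(signal, atoms, 2)
print(coeffs, max(abs(r) for r in residual))   # exact recovery in 2 steps
```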

  15. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract a suboptimal solution of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied by using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, SOM networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  16. Fast training algorithms for multilayer neural nets.

    PubMed

    Brent, R P

    1991-01-01

    An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.

  17. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  18. Numerical methods: Analytical benchmarking in transport theory

    SciTech Connect

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  19. PyTrilinos: Recent Advances in the Python Interface to Trilinos

    SciTech Connect

    Spotz, William F.

    2012-01-01

    PyTrilinos is a set of Python interfaces to compiled Trilinos packages. This collection supports serial and parallel dense linear algebra, serial and parallel sparse linear algebra, direct and iterative linear solution techniques, algebraic and multilevel preconditioners, nonlinear solvers and continuation algorithms, eigensolvers and partitioning algorithms. Also included are a variety of related utility functions and classes, including distributed I/O, coloring algorithms and matrix generation. PyTrilinos vector objects are compatible with the popular NumPy Python package. As a Python front end to compiled libraries, PyTrilinos takes advantage of the flexibility and ease of use of Python, and the efficiency of the underlying C++, C and Fortran numerical kernels. This paper covers recent, previously unpublished advances in the PyTrilinos package.

  20. Ten years of Nature Physics: Numerical models come of age

    NASA Astrophysics Data System (ADS)

    Gull, E.; Millis, A. J.

    2015-10-01

    When Nature Physics celebrated 20 years of high-temperature superconductors, numerical approaches were on the periphery. Since then, new ideas implemented in new algorithms are leading to new insights.

  1. A numerical method to study the dynamics of capillary fluid systems

    NASA Astrophysics Data System (ADS)

    Herrada, M. A.; Montanero, J. M.

    2016-02-01

    We propose a numerical approach to study both the nonlinear dynamics and the linear stability of capillary fluid systems. In the nonlinear analysis, the time-dependent fluid region is mapped onto a fixed numerical domain through a coordinate transformation. The hydrodynamic equations are spatially discretized with the Chebyshev spectral collocation technique, while an implicit time advancement is performed using second-order backward finite differences. The resulting algebraic equations are solved with the iterative Newton-Raphson technique. The most novel aspect of the method is the fact that the elements of the Jacobian of the discretized system of equations are symbolic functions calculated before running the simulation. These functions are evaluated numerically in the Newton-Raphson iterations to find the solution at each time step, which considerably reduces the computing time. Moreover, this numerical procedure can be easily adapted to solve the eigenvalue problem which determines the linear global modes of the capillary system. Therefore, both the nonlinear dynamics and the linear stability analysis can be conducted with essentially the same algorithm. We validate this numerical approach by studying the dynamics of a liquid bridge close to its minimum-volume stability limit. The results are virtually the same as those obtained with other methods, while the proposed approach proves to be much more computationally efficient. Finally, we show the versatility of the method by calculating the linear global modes of a gravitational jet.
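
    The key trick of precomputing Jacobian entries as callable functions and only evaluating them inside the Newton-Raphson loop can be miniaturized. The toy below (two algebraic equations chosen purely for illustration, with the "symbolic" derivatives worked out by hand) is not the paper's spectral scheme, but it follows the same pattern.

```python
# Hedged toy: Jacobian entries built once as functions, then only
# evaluated inside the Newton-Raphson iterations.
# F(x, y) = (x^2 + y^2 - 1, x - y) = 0: circle intersected with a line.

F = [lambda x, y: x*x + y*y - 1.0,
     lambda x, y: x - y]

# Jacobian entries derived symbolically (here: by hand) ahead of time:
J = [[lambda x, y: 2.0*x, lambda x, y: 2.0*y],
     [lambda x, y: 1.0,   lambda x, y: -1.0]]

def newton(x, y, tol=1e-12):
    for _ in range(50):
        f0, f1 = F[0](x, y), F[1](x, y)
        if abs(f0) + abs(f1) < tol:
            break
        a, b, c, d = (J[i][j](x, y) for i in (0, 1) for j in (0, 1))
        det = a*d - b*c
        x -= ( d*f0 - b*f1) / det          # 2x2 solve via Cramer's rule
        y -= (-c*f0 + a*f1) / det
    return x, y

x, y = newton(1.0, 0.5)
print(x, y)   # both approx 1/sqrt(2)
```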

  2. Implementation and evaluation of two helical CT reconstruction algorithms in CIVA

    NASA Astrophysics Data System (ADS)

    Banjak, H.; Costin, M.; Vienne, C.; Kaftandjian, V.

    2016-02-01

    The large majority of industrial CT systems reconstruct the 3D volume by using an acquisition on a circular trajectory. However, when inspecting long objects which are highly anisotropic, this scanning geometry creates severe artifacts in the reconstruction. For this reason, the use of an advanced CT scanning method like helical data acquisition is an efficient way to address this aspect, known as the long-object problem. Recently, several analytically exact and quasi-exact inversion formulas for helical cone-beam reconstruction have been proposed. Among them, we identified two algorithms of interest for our case. These algorithms are exact and of filtered back-projection structure. In this work we implemented the filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms of Zou and Pan (2004). For performance evaluation, we present a numerical comparison of the two selected algorithms with the helical FDK algorithm, using both complete (noiseless and noisy) and truncated data generated by CIVA (the simulation platform for non-destructive testing techniques developed at CEA).

  3. An operator splitting algorithm for the three-dimensional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Khan, Liaqat Ali; Liu, Philip L.-F.

    1998-09-01

    Operator splitting algorithms are frequently used for solving the advection-diffusion equation, especially to deal with advection dominated transport problems. In this paper an operator splitting algorithm for the three-dimensional advection-diffusion equation is presented. The algorithm represents a second-order-accurate adaptation of the Holly and Preissmann scheme for three-dimensional problems. The governing equation is split into an advection equation and a diffusion equation, and they are solved by a backward method of characteristics and a finite element method, respectively. The Hermite interpolation function is used for interpolation of concentration in the advection step. The spatial gradients of concentration in the Hermite interpolation are obtained by solving equations for concentration gradients in the advection step. To make the composite algorithm efficient, only three equations for first-order concentration derivatives are solved in the diffusion step of computation. The higher-order spatial concentration gradients, necessary to advance the solution in a computational cycle, are obtained by numerical differentiations based on the available information. The simulation characteristics and accuracy of the proposed algorithm are demonstrated by several advection dominated transport problems.
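
    The splitting structure itself is easy to miniaturize in one dimension. The sketch below is not the paper's second-order Hermite-based 3-D scheme: it treats advection with a first-order upwind sub-step and diffusion with an explicit central-difference sub-step on a periodic domain, with invented speeds and step sizes, purely to show the two-stage cycle.

```python
# Hedged 1-D operator-splitting sketch for u_t + c u_x = D u_xx:
# each time step = advection sub-step, then diffusion sub-step.

n = 100
dx, dt = 1.0 / n, 0.004
c, D = 0.5, 0.001                       # advection speed, diffusivity
# stability: c*dt/dx = 0.2 <= 1 and D*dt/dx**2 = 0.04 <= 0.5

u = [1.0 if 0.2 <= i*dx <= 0.4 else 0.0 for i in range(n)]  # square pulse

def step(u):
    # 1) advection sub-step (first-order upwind; u[i-1] wraps periodically)
    ua = [u[i] - c*dt/dx * (u[i] - u[i-1]) for i in range(n)]
    # 2) diffusion sub-step (explicit central differences, periodic)
    return [ua[i] + D*dt/dx**2 * (ua[(i+1) % n] - 2*ua[i] + ua[i-1])
            for i in range(n)]

for _ in range(250):                    # advance to t = 1.0 (pulse moves 0.5)
    u = step(u)

mass = sum(u) * dx                      # both sub-steps conserve mass
peak = max(u)
print(round(mass, 6), round(peak, 3))
```

    Both sub-steps conserve the total mass exactly on a periodic grid, which is a useful sanity check for any splitting implementation; the peak drops below 1 because of physical and numerical diffusion.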

  4. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
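
    The binarization step can be sketched in a few lines. The coastline orientation assumed below (sea to the east, so onshore means wind from the eastern sector) and the 45-135 degree band are purely illustrative choices, not taken from the CEM description.

```python
# Hedged sketch: binarizing gridded wind directions (degrees,
# direction the wind blows FROM) into an onshore/offshore field
# D(i, j; n), under an assumed east-facing coastline.

def binarize(direction_deg):
    """1 (onshore) if the wind blows from the eastern sector, else 0."""
    return 1 if 45.0 <= direction_deg % 360.0 <= 135.0 else 0

# One 2 x 2 time slice of gridded wind directions, in degrees:
grid = [[270.0, 95.0],
        [80.0, 180.0]]
D = [[binarize(d) for d in row] for row in grid]
print(D)
```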

  5. FBP Algorithms for Attenuated Fan-Beam Projections

    PubMed Central

    You, Jiangsheng; Zeng, Gengsheng L.; Liang, Zhengrong

    2005-01-01

    A filtered backprojection (FBP) reconstruction algorithm for attenuated fan-beam projections has been derived based on Novikov’s inversion formula. The derivation uses a common transformation between parallel-beam and fan-beam coordinates. The filtering is shift-invariant. Numerical evaluation of the FBP algorithm is presented as well. As a special application, we also present a shift-invariant FBP algorithm for fan-beam SPECT reconstruction with uniform attenuation compensation. Several other fan-beam reconstruction algorithms are also discussed. In the attenuation-free case, our algorithm reduces to the conventional fan-beam FBP reconstruction algorithm. PMID:16570111

  6. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    SciTech Connect

    LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for the PIC simulation of beam dynamics in a high-intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulations presented here is SIMBAD, which can be run standalone or as part of the UAL (Unified Accelerator Libraries) package.

  7. Numerical recipes for mold filling simulation

    SciTech Connect

    Kothe, D.; Juric, D.; Lam, K.; Lally, B.

    1998-07-01

    Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire in the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high-resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper; hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.

  8. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at a fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to the unique solution. Because this algorithm converges slowly, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical respects, such as markedly lower noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms could yield very significant radiation dose reduction in the medical applications of emission tomography.
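    The proximity-operator machinery behind such fixed-point schemes can be sketched on a toy l1 problem. This is a generic proximal-gradient iteration, not the authors' preconditioned TV algorithm; it only shows why a closed-form proximity operator makes the fixed-point iteration cheap.

```python
# Generic proximal-gradient sketch (not the paper's preconditioned ECT
# algorithm). For g(x) = lam*||x||_1 the proximity operator is the
# closed-form soft threshold, and the fixed-point iteration
#     x <- prox_{t*g}( x - t * grad f(x) )
# minimizes f(x) + g(x). Here f(x) = 0.5*||x - y||^2 (denoising).

def soft_threshold(v, t):
    return [max(abs(s) - t, 0.0) * (1.0 if s > 0 else -1.0) for s in v]

def prox_gradient(y, lam, t=0.5, iters=200):
    x = [0.0] * len(y)
    for _ in range(iters):
        grad = [xi - yi for xi, yi in zip(x, y)]
        x = soft_threshold([xi - t * g for xi, g in zip(x, grad)], t * lam)
    return x

xhat = prox_gradient([3.0, 0.2, -2.0], lam=1.0)
# for this denoising model the fixed point is the soft threshold of the
# data itself: [2.0, 0.0, -1.0]
```

    The preconditioning studied in the paper accelerates exactly this kind of iteration by rescaling the gradient step before the proximity operator is applied.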

  9. Alternating-direction implicit numerical solution of the time-dependent, three-dimensional, single fluid, resistive magnetohydrodynamic equations

    SciTech Connect

    Finan, C.H. III

    1980-12-01

    Resistive magnetohydrodynamics (MHD) is described by a set of eight coupled, nonlinear, three-dimensional, time-dependent, partial differential equations. A computer code, IMP (Implicit MHD Program), has been developed to solve these equations numerically by the method of finite differences on an Eulerian mesh. In this model, the equations are expressed in orthogonal curvilinear coordinates, making the code applicable to a variety of coordinate systems. The Douglas-Gunn algorithm for Alternating-Direction Implicit (ADI) temporal advancement is used to avoid the limitations in timestep size imposed by explicit methods. The equations are solved simultaneously to avoid synchronization errors.
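    The payoff of ADI is that each half step is implicit in only one coordinate direction, so the work reduces to tridiagonal solves. The sketch below uses the simpler Peaceman-Rachford splitting on the 2-D heat equation rather than the Douglas-Gunn form and curvilinear coordinates used by IMP, but it shows the structure of an ADI step.

```python
# Illustrative ADI step for u_t = u_xx + u_yy on a square grid with
# u = 0 on the boundary. Each half step is implicit in one direction,
# so only tridiagonal systems are solved (Thomas algorithm).

def thomas(a, b, c, d):
    # Tridiagonal solve: a = sub-, b = main-, c = super-diagonal.
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):
        x[i] -= cp[i] * x[i + 1]
    return x

def adi_step(u, dt, h):
    n = len(u)
    r = dt / (2.0 * h * h)
    m = n - 2
    a, b, c = [-r] * m, [1.0 + 2.0 * r] * m, [-r] * m
    # Half step 1: implicit in x, explicit in y.
    half = [row[:] for row in u]
    for j in range(1, n - 1):
        d = [u[i][j] + r * (u[i][j + 1] - 2.0 * u[i][j] + u[i][j - 1])
             for i in range(1, n - 1)]
        col = thomas(a, b, c, d)
        for k in range(m):
            half[k + 1][j] = col[k]
    # Half step 2: implicit in y, explicit in x.
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        d = [half[i][j] + r * (half[i + 1][j] - 2.0 * half[i][j] + half[i - 1][j])
             for j in range(1, n - 1)]
        row = thomas(a, b, c, d)
        for k in range(m):
            new[i][k + 1] = row[k]
    return new

# Diffuse a hot spot at a timestep well beyond the explicit stability
# limit dt = h*h/4 for this spacing.
n, h = 9, 1.0 / 8
u = [[0.0] * n for _ in range(n)]
u[4][4] = 1.0
for _ in range(5):
    u = adi_step(u, h * h, h)
```

    Douglas-Gunn generalizes this two-sweep pattern to three dimensions while retaining unconditional stability, which is why IMP can avoid the explicit timestep restriction.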

  10. PDE-based Morphology for Matrix Fields: Numerical Solution Schemes

    NASA Astrophysics Data System (ADS)

    Burgeth, Bernhard; Breuß, Michael; Didas, Stephan; Weickert, Joachim

    Tensor fields are important in digital imaging and computer vision. Hence there is a demand for morphological operations to perform e.g. shape analysis, segmentation or enhancement procedures. Recently, fundamental morphological concepts have been transferred to the setting of fields of symmetric positive definite matrices, which are symmetric rank two tensors. This has been achieved by a matrix-valued extension of the nonlinear morphological partial differential equations (PDEs) for dilation and erosion known for grey scale images. Having these two basic operations at our disposal, more advanced morphological operators such as top hats or morphological derivatives for matrix fields with symmetric, positive semidefinite matrices can be constructed. The approach realises a proper coupling of the matrix channels rather than treating them independently. However, from the algorithmic side the usual scalar morphological PDEs are transport equations that require special upwind-schemes or novel high-accuracy predictor-corrector approaches for their adequate numerical treatment. In this chapter we propose the non-trivial extension of these schemes to the matrix-valued setting by exploiting the special algebraic structure available for symmetric matrices. Furthermore we compare the performance and juxtapose the results of these novel matrix-valued high-resolution-type (HRT) numerical schemes by considering top hats and morphological derivatives applied to artificial and real world data sets.

  11. Efficient simultaneous image deconvolution and upsampling algorithm for low-resolution microwave sounder data

    NASA Astrophysics Data System (ADS)

    Qin, Jing; Yanovsky, Igor; Yin, Wotao

    2015-01-01

    Microwave imaging has been widely used in the prediction and tracking of hurricanes, typhoons, and tropical storms. Due to the limitations of sensors, the acquired remote sensing data are usually blurry and have relatively low resolution, which calls for the development of fast algorithms for deblurring and enhancing the resolution. We propose an efficient algorithm for simultaneous image deconvolution and upsampling for low-resolution microwave hurricane data. Our model involves convolution, downsampling, and the total variation regularization. After reformulating the model, we are able to apply the alternating direction method of multipliers and obtain three subproblems, each of which has a closed-form solution. We also extend the framework to the multichannel case with the multichannel total variation regularization. A variety of numerical experiments on synthetic and real Advanced Microwave Sounding Unit and Microwave Humidity Sounder data were conducted. The results demonstrate the outstanding performance of the proposed method.
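    The ADMM pattern the abstract describes — split the objective, then cycle through subproblems that each have a closed-form solution — can be sketched on a scalar denoising toy. The paper's x-subproblem involves the convolution and downsampling operators; this sketch replaces them with the identity so that every update is a one-liner, but the update pattern is the same.

```python
# Toy ADMM with closed-form subproblems for
#     min_x 0.5*||x - b||^2 + lam*||x||_1,
# split as x = z with f(x) = 0.5*||x - b||^2 and g(z) = lam*||z||_1.

def soft(v, t):
    return [max(abs(s) - t, 0.0) * (1.0 if s > 0 else -1.0) for s in v]

def admm_l1(b, lam, rho=1.0, iters=100):
    n = len(b)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        # x-update: minimize 0.5||x-b||^2 + (rho/2)||x - z + u||^2
        x = [(bi + rho * (zi - ui)) / (1.0 + rho)
             for bi, zi, ui in zip(b, z, u)]
        # z-update: proximity operator of lam*||.||_1 (soft threshold)
        z = soft([xi + ui for xi, ui in zip(x, u)], lam / rho)
        # dual (scaled multiplier) update
        u = [ui + xi - zi for ui, xi, zi in zip(u, x, z)]
    return z

sol = admm_l1([3.0, 0.1, -2.0], lam=1.0)
# converges to the soft threshold of b: [2.0, 0.0, -1.0]
```

    In the paper's setting the identity in the x-update is replaced by the blur-plus-downsampling operator, and the l1 term by (multichannel) total variation, but each subproblem still admits a closed-form solution after the reformulation.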

  12. Advances in Doppler OCT

    PubMed Central

    Liu, Gangjun; Chen, Zhongping

    2014-01-01

    We review the principle and some recent applications of Doppler optical coherence tomography (OCT). The advances of the phase-resolved Doppler OCT method are described. Functional OCT algorithms which are based on an extension of the phase-resolved scheme are also introduced. Recent applications of Doppler OCT for quantification of flow, imaging of microvasculature and vocal fold vibration, and optical coherence elastography are briefly discussed. PMID:24443649

  13. The quiet revolution of numerical weather prediction.

    PubMed

    Bauer, Peter; Thorpe, Alan; Brunet, Gilbert

    2015-09-01

    Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs. Nonetheless, the impact of numerical weather prediction is among the greatest of any area of physical science. As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world.

  14. Relative performance of algorithms for autonomous satellite orbit determination

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Peters, J. G.; Schutz, B. E.

    1981-01-01

    Limited word size in contemporary microprocessors causes numerical problems in autonomous satellite navigation applications. Numerical error introduced in navigation computations performed on small wordlength machines can cause divergence of sequential estimation algorithms. To ensure filter reliability, square root algorithms have been adopted in many applications. The optimal navigation algorithm requires a careful match of the estimation algorithm, dynamic model, and numerical integrator. In this investigation, the relationship of several square root filters and numerical integration methods is evaluated to determine their relative performance for satellite navigation applications. The numerical simulations are conducted using the Phase I GPS constellation to determine the orbit of a LANDSAT-D type satellite. The primary comparison is based on computation time and relative estimation accuracy.
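    The numerical point is easiest to see through the factorization a square-root filter maintains: instead of the covariance P it propagates a Cholesky factor S with P = S Sᵀ, so the implied covariance is symmetric and non-negative definite by construction, and the stored numbers span half the dynamic range. A minimal factorization routine (illustrative only, not any flight implementation):

```python
import math

# P = S S^T with S lower triangular. Square-root filters update S
# directly rather than P, which preserves symmetry and positive
# semidefiniteness even in short-wordlength arithmetic.

def cholesky(P):
    n = len(P)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(S[i][k] * S[j][k] for k in range(j))
            if i == j:
                S[i][j] = math.sqrt(P[i][i] - s)
            else:
                S[i][j] = (P[i][j] - s) / S[j][j]
    return S

P = [[4.0, 2.0], [2.0, 3.0]]
S = cholesky(P)
# reconstruct S S^T to confirm it reproduces P
recon = [[sum(S[i][k] * S[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
```

    A conventional covariance filter can lose these properties to roundoff, which is exactly the divergence mechanism on small-wordlength machines that the abstract describes.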

  15. Grover's algorithm and the secant varieties

    NASA Astrophysics Data System (ADS)

    Holweck, Frédéric; Jaffali, Hamza; Nounouh, Ismaël

    2016-09-01

    In this paper we investigate the entanglement nature of quantum states generated by Grover's search algorithm by means of algebraic geometry. More precisely, we establish a link between entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate the approach with a couple of examples investigated in detail.
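    For readers who want to reproduce the states such numerical studies analyze, Grover's iteration (oracle phase flip, then inversion about the mean) is easy to simulate classically on a small search space. This is a generic textbook sketch, unrelated to the entanglement measures computed in the paper.

```python
import math

# Amplitude-level simulation of Grover's algorithm on N = 8 items with
# one marked element. Each iteration flips the sign of the marked
# amplitude (oracle) and then reflects all amplitudes about their mean
# (diffusion operator).

N, marked = 8, 3
amp = [1.0 / math.sqrt(N)] * N          # uniform superposition
iters = round(math.pi / 4 * math.sqrt(N))   # optimal iteration count, ~2 for N = 8
for _ in range(iters):
    amp[marked] = -amp[marked]          # oracle phase flip
    mean = sum(amp) / N
    amp = [2.0 * mean - a for a in amp] # inversion about the mean
prob = amp[marked] ** 2                 # success probability after measurement
```

    The intermediate state vectors produced this way are the inputs one would feed to entanglement measures when studying how entanglement evolves over the iterations.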

  16. Algorithm for in-flight gyroscope calibration

    NASA Technical Reports Server (NTRS)

    Davenport, P. B.; Welter, G. L.

    1988-01-01

    An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.

  17. Supercomputers and biological sequence comparison algorithms.

    PubMed

    Core, N G; Edmiston, E W; Saltz, J H; Smith, R M

    1989-12-01

    Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
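    The dynamic-programming recurrence such codes parallelize is, in serial form, the classic Needleman-Wunsch global alignment score. The sketch below uses hypothetical unit scores and is a textbook kernel, not the iPSC or Connection Machine implementations.

```python
# Needleman-Wunsch global alignment score with match = +1,
# mismatch = -1, gap = -1 (illustrative scoring, not from the paper).

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        F[i][0] = i * gap
    for j in range(1, n + 1):
        F[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[m][n]

score = nw_score("GATTACA", "GATCA")   # best: 5 matches, 2 gaps -> 3
```

    Each cell depends only on its three upper-left neighbours, so all cells on the same anti-diagonal are independent — the wavefront structure that parallel implementations on hypercubes and SIMD machines typically exploit.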

  18. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students relating to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.

  19. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive over relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, but still enjoy the greater rate of convergence of the SOR method. The program solves a general second-order self-adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
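    The two-color special case conveys the idea: with the five-point finite-difference stencil, red points (i+j even) depend only on black points and vice versa, so each half sweep updates an entire color simultaneously, Jacobi-style, while keeping SOR's convergence rate. A small sketch for the Poisson model problem (two colors suffice for this stencil; the quadratic triangular elements discussed above require six):

```python
# Red-black SOR for  -(u_xx + u_yy) = f  with u = 0 on the boundary of
# the unit square. Points of one color depend only on the other color,
# so each half sweep can be done in parallel across the whole array.

def redblack_sor(f, h, omega=1.5, sweeps=200):
    n = len(f)
    u = [[0.0] * n for _ in range(n)]
    for _ in range(sweeps):
        for color in (0, 1):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (u[i + 1][j] + u[i - 1][j] + u[i][j + 1]
                                 + u[i][j - 1] + h * h * f[i][j])
                    u[i][j] += omega * (gs - u[i][j])   # over-relax
    return u

n, h = 9, 1.0 / 8
f = [[1.0] * n for _ in range(n)]
u = redblack_sor(f, h)   # discrete solution of -laplace(u) = 1
```

    Because updates within one color never read values of that same color, the loop over (i, j) inside each half sweep can be distributed across a processor array with no ordering constraints, which is the property the multicolored generalization preserves for finite elements.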

  20. Evaluating numerical ODE/DAE methods, algorithms and software

    NASA Astrophysics Data System (ADS)

    Soderlind, Gustaf; Wang, Lina

    2006-01-01

    Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without it conclusions are questionable, invalid or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.

  1. Extremal polynomials and methods of optimization of numerical algorithms

    SciTech Connect

    Lebedev, V I

    2004-10-31

    Chebyshev-Markov-Bernstein-Szegő polynomials C_n(x), extremal on [-1,1] with weight functions w(x) = (1+x)^α (1-x)^β / √(S_l(x)), where α, β = 0, 1/2 and S_l(x) = Π_{k=1}^{m} (1 − c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Radau types are obtained for integrals with weight p(x) = w²(x)(1 − x²)^{-1/2}. The parameters of optimal Chebyshev iterative methods, reducing the error optimally by comparison with the initial error defined in another norm, are determined. For each stage of the Fedorenko-Bakhvalov method, iteration parameters are determined which take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods for the solution of equations containing compact operators are studied.

  2. Lagrangian model of zooplankton dispersion: numerical schemes comparisons and parameter sensitivity tests

    NASA Astrophysics Data System (ADS)

    Qiu, Zhongfeng; Doglioli, Andrea M.; He, Yijun; Carlotti, Francois

    2011-03-01

    This paper presents two tests of a Lagrangian model of zooplankton dispersion: a comparison of numerical schemes and a sensitivity analysis for the time step. First, we compared three numerical schemes using idealized circulations. Results show that the precision of the advanced Adams-Bashforth-Moulton (ABM) method and of the Runge-Kutta (RK) method were of the same order, and both were much higher than that of the Euler method. Furthermore, the advanced ABM method is more efficient than the RK method in computational memory requirements and time consumption. We therefore chose the advanced ABM method as the Lagrangian particle-tracking algorithm. Second, we performed a sensitivity test on time steps, using outputs of the hydrodynamic model Symphonie. Results show that the choice of time step depends on the fluid response time, which is related to the spatial resolution of the velocity fields. The method introduced by Oliveira et al. in 2002 is suitable for choosing time steps of Lagrangian particle-tracking models, at least when only advection is considered.
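    The scheme comparison can be reproduced in miniature with a two-step Adams-Bashforth-Moulton predictor-corrector (the cited model uses a higher-order variant) against forward Euler on a test equation with a known solution:

```python
import math

# Forward Euler (first order) vs. a two-step Adams-Bashforth-Moulton
# predictor-corrector (second order) on y' = -y, y(0) = 1; the exact
# solution at t = 1 is exp(-1).

def euler(f, y0, t_end, n):
    h, t, y = t_end / n, 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def abm2(f, y0, t_end, n):
    h = t_end / n
    t, y = 0.0, y0
    f_prev = f(t, y)
    y = y + h * f_prev        # bootstrap the multistep method: one Euler step
    t = h
    for _ in range(n - 1):
        f_cur = f(t, y)
        y_pred = y + h * (1.5 * f_cur - 0.5 * f_prev)    # AB2 predictor
        y = y + 0.5 * h * (f(t + h, y_pred) + f_cur)     # AM2 (trapezoid) corrector
        f_prev = f_cur
        t += h
    return y

rhs = lambda t, y: -y
exact = math.exp(-1.0)
err_euler = abs(euler(rhs, 1.0, 1.0, 100) - exact)
err_abm = abs(abm2(rhs, 1.0, 1.0, 100) - exact)
```

    The efficiency argument in the abstract follows from the same structure: the ABM corrector reuses the stored derivative history, so it needs fewer right-hand-side evaluations per step than an RK method of comparable order.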

  3. Introduction to systolic algorithms and architectures

    SciTech Connect

    Bentley, J.L.; Kung, H.T.

    1983-01-01

    The authors survey the class of systolic special-purpose computer architectures and algorithms, which are particularly well-suited for implementation in very large scale integrated circuitry (VLSI). They give a brief introduction to systolic arrays for a reader with a broad technical background and some experience in using a computer, but who is not necessarily a computer scientist. In addition they briefly survey the technological advances in VLSI that led to the development of systolic algorithms and architectures. 38 references.

  4. Intelligent perturbation algorithms to space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.

  5. The Numerical Tokamak Project (NTP) simulation of turbulent transport in the core plasma: A grand challenge in plasma physics

    SciTech Connect

    Not Available

    1993-12-01

    The long-range goal of the Numerical Tokamak Project (NTP) is the reliable prediction of tokamak performance using physics-based numerical tools describing tokamak physics. The NTP is developing the most advanced particle and extended-fluid models on massively parallel processing (MPP) environments as part of a multi-institutional, multi-disciplinary numerical study of tokamak core fluctuations. The NTP is a continuing focus of the Office of Fusion Energy's theory and computation program. Near-term HPCC work concentrates on developing a predictive numerical description of the core plasma transport in tokamaks driven by low-frequency collective fluctuations. This work addresses one of the greatest intellectual challenges to our understanding of the physics of tokamak performance and needs the most advanced computational resources to progress. We are conducting detailed comparisons of kinetic and fluid numerical models of tokamak turbulence. These comparisons are stimulating the improvement of each and the development of hybrid models which embody aspects of both. The combination of emerging massively parallel processing hardware and algorithmic improvements will result in an estimated 10**2 to 10**6 performance increase. Development of information processing and visualization tools is accelerating our comparison of computational models to one another, to experimental data, and to analytical theory, providing a bootstrap effect in our understanding of the target physics. The measure of success is the degree to which the experimentally observed scaling of fluctuation-driven transport may be predicted numerically. The NTP is advancing the HPCC Initiative through its state-of-the-art computational work. We are pushing the capability of high performance computing through our efforts, which are strongly leveraged by OFE support.

  6. Descendants and advance directives.

    PubMed

    Buford, Christopher

    2014-01-01

    Some of the concerns that have been raised in connection to the use of advance directives are of the epistemic variety. Such concerns highlight the possibility that adhering to an advance directive may conflict with what the author of the directive actually wants (or would want) at the time of treatment. However, at least one objection to the employment of advance directives is metaphysical in nature. The objection to be discussed here, first formulated by Rebecca Dresser and labeled by Allen Buchanan as the slavery argument and David DeGrazia the someone else problem, aims to undermine the legitimacy of certain uses of advance directives by concluding that such uses rest upon an incorrect assumption about the identity over time of those ostensibly governed by the directives. There have been numerous attempts to respond to this objection. This paper aims to assess two strategies that have been pursued to cope with the problem. PMID:25743056

  8. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
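    A minimal PRP+ iteration gives the flavor of such methods: the conjugacy parameter β is clipped at zero, echoing property (1) above. This sketch adds an Armijo backtracking line search and a steepest-descent restart as safeguards; the paper's modified methods are constructed so their direction properties hold without any line search, which this generic version does not reproduce.

```python
# PRP+ nonlinear conjugate gradient on the convex quadratic
# f(x, y) = (x - 1)^2 + 10*(y + 2)^2, whose minimizer is (1, -2).
# Hypothetical illustration, not the algorithms from the paper.

def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]

def prp_plus(x, iters=500):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gd = sum(gi * di for gi, di in zip(g, d))
        if gd >= 0.0:                      # not a descent direction: restart
            d = [-gi for gi in g]
            gd = -sum(gi * gi for gi in g)
        if gd == 0.0:                      # zero gradient: converged
            break
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd:
            t *= 0.5                       # Armijo backtracking
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = max(sum(gn * yi for gn, yi in zip(g_new, y))
                   / sum(gi * gi for gi in g), 0.0)    # PRP+, beta >= 0
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

sol = prp_plus([0.0, 0.0])
```

    Clipping β at zero is what distinguishes PRP+ from plain PRP: whenever the PRP formula would go negative, the method falls back toward steepest descent, which is the mechanism behind global convergence results of this family.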

  9. A numerical scheme for ionizing shock waves

    SciTech Connect

    Aslan, Necdet . E-mail: naslan@yeditepe.edu.tr; Mond, Michael

    2005-12-10

    A two-dimensional (2D) visual computer code to solve steady-state (SS) or transient shock problems including partially ionizing plasma is presented. Since the flows considered are hypersonic and the resulting temperatures are high, the plasma is partially ionized; its constituents are therefore electrons, ions and neutral atoms. It is assumed that all the above species are in thermal equilibrium, namely, that they all have the same temperature. The ionization degree is calculated from the Saha equation as a function of electron density and pressure by means of a nonlinear Newton-type root-finding algorithm. The code utilizes a wave model and a numerical fluctuation distribution (FD) scheme that runs on structured or unstructured triangular meshes. This scheme is based on evaluating the mesh-averaged fluctuations arising from a number of waves and distributing them to the nodes of these meshes in an upwind manner. The physical properties (directions, strengths, etc.) of these wave patterns are obtained by a new wave model, ION-A, developed from the eigen-system of the flux Jacobian matrices. Since the equation of state (EOS) which is used to close up the conservation laws includes electronic effects, it is a nonlinear function and must be inverted by iteration to determine the ionization degree as a function of density and temperature. For the time advancement, the scheme utilizes a multi-stage Runge-Kutta (RK) algorithm with time steps carefully evaluated from the maximum possible propagation speed in the solution domain. The code runs interactively with the user and allows the user to create different meshes, to use different initial and boundary conditions, and to see changes of desired physical quantities in the form of color and vector graphics. The details of the visual properties of the code have been published before (see [N. Aslan, A visual fluctuation splitting scheme for magneto-hydrodynamics with a new sonic fix and Euler limit, J. Comput. Phys. 197 (2004) 1
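    The Saha step can be sketched in isolation. Writing the relation as α²/(1 − α) = K, with K lumping the temperature and density dependence, a guarded Newton iteration recovers the ionization degree. This is an illustrative stand-in; the exact form of the solver inside ION-A is not given in the abstract.

```python
import math

# Solve  alpha^2 / (1 - alpha) = K  for the ionization degree alpha in
# (0, 1) by Newton's method, with iterates clamped to the physical
# interval. K is a hypothetical lumped Saha right-hand side.

def saha_alpha(K, tol=1e-12, max_iter=50):
    a = 0.5                                     # initial guess inside (0, 1)
    for _ in range(max_iter):
        F = a * a / (1.0 - a) - K
        dF = (2.0 * a - a * a) / (1.0 - a) ** 2  # F'(a), positive on (0, 1)
        a_new = min(max(a - F / dF, 1e-12), 1.0 - 1e-12)
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

K = 0.1
alpha = saha_alpha(K)
# cross-check against the closed-form root of  a^2 + K*a - K = 0
alpha_exact = 0.5 * (-K + math.sqrt(K * K + 4.0 * K))
```

    In the full code this inner solve is repeated at every cell and time level, since K changes with the evolving density and temperature fields.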

  10. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  11. An analysis of a new stable partitioned algorithm for FSI problems. Part I: Incompressible flow and elastic solids

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2014-07-01

    Stable partitioned algorithms for fluid-structure interaction (FSI) problems are developed and analyzed in this two-part paper. Part I describes an algorithm for incompressible flow coupled with compressible elastic solids, while Part II discusses an algorithm for incompressible flow coupled with structural shells. Importantly, these new added-mass partitioned (AMP) schemes are stable and retain full accuracy with no sub-iterations per time step, even in the presence of strong added-mass effects (e.g. for light solids). The numerical approach described here for bulk compressible solids extends the scheme of Banks et al. [1,2] for inviscid compressible flow, and uses Robin (mixed) boundary conditions with the fluid and solid solvers at the interface. The basic AMP Robin conditions, involving a linear combination of velocity and stress, are determined from the outgoing solid characteristic relation normal to the fluid-solid interface combined with the matching conditions on the velocity and traction. Two alternative forms of the AMP conditions are then derived depending on whether the fluid equations are advanced with a fractional-step method or not. The stability and accuracy of the AMP algorithm is evaluated for linearized FSI model problems; the full nonlinear case being left for future consideration. A normal mode analysis is performed to show that the new AMP algorithm is stable for any ratio of the solid and fluid densities, including the case of very light solids when added-mass effects are large. In contrast, it is shown that a traditional partitioned algorithm involving a Dirichlet-Neumann coupling for the same FSI problem is formally unconditionally unstable for any ratio of densities. Exact traveling wave solutions are derived for the FSI model problems, and these solutions are used to verify the stability and accuracy of the corresponding numerical results obtained from the AMP algorithm for the cases of light, medium and heavy solids.

  12. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performance (e.g., measurement accuracy, precision, uncertainty, and stability) and best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that

  13. Advanced Eddy current NDE steam generator tubing.

    SciTech Connect

    Bakhtiari, S.

    1999-03-29

    As part of a multifaceted project on steam generator integrity funded by the U.S. Nuclear Regulatory Commission, Argonne National Laboratory is carrying out research on the reliability of nondestructive evaluation (NDE). A particular area of interest is the impact of advanced eddy current (EC) NDE technology. This paper presents an overview of work that supports this effort in the areas of numerical electromagnetic (EM) modeling, data analysis, signal processing, and visualization of EC inspection results. Finite-element modeling has been utilized to study conventional and emerging EC probe designs. This research is aimed at determining probe responses to flaw morphologies of current interest. Application of signal processing and automated data analysis algorithms has also been addressed. Efforts have focused on assessment of frequency and spatial domain filters and implementation of more effective data analysis and display methods. Data analysis studies have dealt with implementation of linear and nonlinear multivariate models to relate EC inspection parameters to steam generator tubing defect size and structural integrity. Various signal enhancement and visualization schemes are also being evaluated and will serve as integral parts of computer-aided data analysis algorithms. Results from this research will ultimately be substantiated through testing on laboratory-grown and in-service-degraded tubes.

  14. Waste glass melter numerical and physical modeling

    SciTech Connect

    Eyler, L.L.; Peters, R.D.; Lessor, D.L.; Lowery, P.S.; Elliott, M.L.

    1991-10-01

    Results of physical and numerical simulation modeling of high-level liquid waste vitrification melters are presented. Physical modeling uses simulant fluids in laboratory testing. Visualization results provide insight into convective melt flow patterns from which information is derived to support performance estimation of operating melters and data to support numerical simulation. Numerical simulation results of several melter configurations are presented. These are in support of programs to evaluate melter operation characteristics and performance. Included are investigations into power skewing and alternating current electric field phase angle in a dual electrode pair reference design and bi-modal convective stability in an advanced design. 9 refs., 9 figs., 1 tab.

  15. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms developed for use in numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with prior integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order differential equations that arise in study of viscoplastic behavior, spread of acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
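    The stability gain attributed above to asymptotically correct integrators can be illustrated with a toy stiff problem. The sketch below is not the paper's algorithms; it merely contrasts explicit and implicit Euler on y' = -λy with a step size far beyond the explicit stability limit.

```python
# Stability of explicit vs. implicit Euler on the stiff test equation
# y' = -lam * y, exact solution y(t) = exp(-lam * t).
# Toy illustration only, not the integrators from the paper.

def forward_euler(lam, y0, h, steps):
    y = y0
    for _ in range(steps):
        y += h * (-lam * y)          # explicit update: y *= (1 - h*lam)
    return y

def backward_euler(lam, y0, h, steps):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + h * lam)      # implicit update, solvable exactly here
    return y

lam, y0, h, steps = 50.0, 1.0, 0.1, 40   # h*lam = 5 >> 2: explicit blows up
explicit = forward_euler(lam, y0, h, steps)
implicit = backward_euler(lam, y0, h, steps)
```

    With this large step the explicit iterate oscillates and grows without bound, while the implicit iterate decays toward the true solution, which is the behavior that motivates implicit and asymptotically correct schemes for stiff viscoplastic models.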

  16. Numerical methods for portfolio selection with bounded constraints

    NASA Astrophysics Data System (ADS)

    Yin, G.; Jin, Hanqing; Jin, Zhuo

    2009-11-01

    This work develops an approximation procedure for portfolio selection with bounded constraints. Based on the Markov chain approximation techniques, numerical procedures are constructed for the utility optimization task. Under simple conditions, the convergence of the approximation sequences to the wealth process and the optimal utility function is established. Numerical examples are provided to illustrate the performance of the algorithms.
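    A minimal sketch of the Markov chain approximation idea, with made-up parameters and log utility (the paper's constrained model is richer): wealth moves on a binomial lattice and the control maximizing expected log growth is found by direct search over a discretized control set.

```python
# Hypothetical Markov chain approximation sketch (not the paper's model):
# at each step the controller picks a risky fraction pi in [0, 1]; wealth
# moves up/down on a binomial lattice with step size h.
import math

mu, sigma, h, steps = 0.10, 0.30, 0.01, 100   # drift, volatility, step, horizon
fractions = [i / 10.0 for i in range(11)]     # discretized controls pi

def step_value(pi):
    """Expected one-step log growth under control pi (binomial chain)."""
    up = 1.0 + pi * mu * h + pi * sigma * math.sqrt(h)
    dn = 1.0 + pi * mu * h - pi * sigma * math.sqrt(h)
    return 0.5 * (math.log(up) + math.log(dn))

# With log utility the dynamic program decouples: each step independently
# picks the pi maximizing expected log growth (Merton-type fraction,
# capped at the bound pi <= 1).
best_pi = max(fractions, key=step_value)
value = steps * step_value(best_pi)           # approx. E[log(W_T / W_0)]
```

    Here mu/sigma^2 ≈ 1.11 exceeds the upper bound, so the constrained optimum sits at pi = 1; a bounded-control effect of the kind the paper's numerical procedure handles in full generality.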

  17. Numerical simulation of steady supersonic flow. [spatial marching

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Steger, J. L.

    1981-01-01

    A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

  18. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  19. Numerical simulation of "an American haboob"

    NASA Astrophysics Data System (ADS)

    Vukovic, A.; Vujadinovic, M.; Pejanovic, G.; Andric, J.; Kumjian, M. R.; Djurdjevic, V.; Dacic, M.; Prasad, A. K.; El-Askary, H. M.; Paris, B. C.; Petkovic, S.; Nickovic, S.; Sprigg, W. A.

    2014-04-01

    A dust storm of fearful proportions hit Phoenix in the early evening hours of 5 July 2011. This storm, an American haboob, was predicted hours in advance because numerical land-atmosphere modeling, computing power, and remote sensing of dust events have improved greatly over the past decade. High-resolution numerical models are required for accurate simulation of the small scales of the haboob process, with high-velocity surface winds produced by strong convection and severe downbursts. Dust-productive areas in this region consist mainly of agricultural fields, with soil surfaces disturbed by plowing and tracts of land in the high Sonoran Desert laid barren by ongoing drought. Model simulation of the 5 July 2011 dust storm uses the coupled atmospheric-dust model NMME-DREAM (Non-hydrostatic Mesoscale Model on E grid, Janjic et al., 2001; Dust REgional Atmospheric Model, Nickovic et al., 2001; Pérez et al., 2006) with 4 km horizontal resolution. A mask of the potentially dust-productive regions is obtained from the land cover and the normalized difference vegetation index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The scope of this paper is validation of the dust model performance, not use of the model as a tool to investigate mechanisms related to the storm. Results demonstrate the potential technical capacity and availability of the relevant data to build an operational system for dust storm forecasting as part of a warning system. Model results are compared with radar and other satellite-based images and surface meteorological and PM10 observations. The atmospheric model successfully hindcasted the position of the front in space and time, with about 1 h late arrival in Phoenix. The dust model predicted the rapid uptake of dust and high values of dust concentration in the ensuing storm. South of Phoenix, over the closest source regions (~25 km), the model PM10 surface dust concentration reached ~2500 μg m-3, but

  20. Numerical simulation of "An American Haboob"

    NASA Astrophysics Data System (ADS)

    Vukovic, A.; Vujadinovic, M.; Pejanovic, G.; Andric, J.; Kumjian, M. R.; Djurdjevic, V.; Dacic, M.; Prasad, A. K.; El-Askary, H. M.; Paris, B. C.; Petkovic, S.; Nickovic, S.; Sprigg, W. A.

    2013-10-01

    A dust storm of fearful proportions hit Phoenix in the early evening hours of 5 July 2011. This storm, an American haboob, was predicted hours in advance because numerical land-atmosphere modeling, computing power, and remote sensing of dust events have improved greatly over the past decade. High-resolution numerical models are required for accurate simulation of the small scales of the haboob process, with high-velocity surface winds produced by strong convection and severe downbursts. Dust-productive areas in this region consist mainly of agricultural fields, with soil surfaces disturbed by plowing and tracts of land in the high Sonoran Desert laid barren by ongoing drought. Model simulation of the 5 July 2011 dust storm uses the coupled atmospheric-dust model NMME-DREAM with 3.5 km horizontal resolution. A mask of the potentially dust-productive regions is obtained from the land cover and the Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). Model results are compared with radar and other satellite-based images and surface meteorological and PM10 observations. The atmospheric model successfully hindcasted the position of the front in space and time, with about 1 h late arrival in Phoenix. The dust model predicted the rapid uptake of dust and high values of dust concentration in the ensuing storm. South of Phoenix, over the closest source regions (~ 25 km), the model PM10 surface dust concentration reached ~ 2500 μg m-3, but underestimated the values measured by the PM10 stations within the city. Model results are also validated by the MODIS aerosol optical depth (AOD), employing deep blue (DB) algorithms for aerosol loadings. Model validation included the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), equipped with a lidar instrument, to disclose the vertical structure of dust aerosols as well as aerosol subtypes. Promising results encourage further research and

  1. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique for obtaining ionosphere measurements, such as the estimation of virtual height as a function of scanned frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several target types and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder has yet to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder; target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study were selected according to typical ionospheric and detection conditions.

  2. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.

  3. Advanced servomanipulator development

    SciTech Connect

    Kuban, D.P.

    1985-01-01

    The Advanced Servomanipulator (ASM) System consists of three major components: the ASM slave, the dual arm master controller (DAMC) or master, and the control system. The ASM is a remotely maintainable, force-reflecting servomanipulator developed at the Oak Ridge National Laboratory (ORNL) as part of the Consolidated Fuel Reprocessing Program. This new manipulator addresses the requirements of advanced nuclear fuel reprocessing with emphasis on force reflection, remote maintainability, reliability, radiation tolerance, and corrosion resistance. The advanced servomanipulator is uniquely subdivided into remotely replaceable modules which will permit in situ manipulator repair by spare module replacement. Manipulator modularization and increased reliability are accomplished through a force transmission system that uses gears and torque tubes. Digital control algorithms and mechanical precision are used to offset the increased backlash, friction, and inertia resulting from the gear drives. This results in the first remotely maintainable force-reflecting servomanipulator in the world.

  4. High order hybrid numerical simulations of two dimensional detonation waves

    NASA Technical Reports Server (NTRS)

    Cai, Wei

    1993-01-01

    In order to study multi-dimensional unstable detonation waves, a high order numerical scheme suitable for calculating the detailed transverse wave structures of multidimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so different numerical techniques can be applied for different components of detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A → B. Several cases of unstable two dimensional detonation waves are simulated and detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.

  5. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem. PMID:15355604
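    The GA-plus-local-search recipe described above can be sketched on a toy partitioning instance. Everything below (the graph, parameters, and operators) is an illustrative stand-in, not the paper's VLSI formulation: a small memetic loop refines balanced graph bipartitions with a greedy swap local search.

```python
# Toy memetic algorithm: evolutionary loop + swap-based local search on a
# balanced min-cut bipartition of a hypothetical 8-node graph.
import random

random.seed(7)
n = 8
# two 4-cycles joined by a single bridge edge (3,4); optimum cut is 1
edges = [(0,1),(1,2),(2,3),(0,3),(4,5),(5,6),(6,7),(4,7),(3,4)]

def cut(part):                      # part: list of 0/1 labels per node
    return sum(part[a] != part[b] for a, b in edges)

def random_balanced():
    labels = [0]*(n//2) + [1]*(n//2)
    random.shuffle(labels)
    return labels

def local_search(part):
    """Greedy pairwise swaps: preserve the 4/4 balance, reduce the cut."""
    part = part[:]
    best = cut(part)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i+1, n):
                if part[i] != part[j]:
                    part[i], part[j] = part[j], part[i]
                    c = cut(part)
                    if c < best:
                        best, improved = c, True
                    else:
                        part[i], part[j] = part[j], part[i]  # undo
    return part

# memetic loop: every individual is refined by local search
population = [local_search(random_balanced()) for _ in range(8)]
for _ in range(10):
    parent = min(population, key=cut)
    child = parent[:]
    i, j = random.sample(range(n), 2)       # swap mutation keeps balance
    child[i], child[j] = child[j], child[i]
    child = local_search(child)
    population.sort(key=cut)
    population[-1] = child                  # replace the worst individual
best_part = min(population, key=cut)
```

    The abstract's reported gains come from exactly this interplay: good initial solutions and local refinement let the evolutionary layer search over local optima rather than raw configurations.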

  7. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.

  8. Numerical simulation of shrouded propellers

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.

    1991-01-01

    A numerical model was developed for the evaluation of the performance characteristics of a shrouded propeller. Using this model, a computational study was carried out to investigate the feasibility of improving the aerodynamic performance of a propeller by encasing it in a shroud. The propeller blade was modeled by a segmented bound vortex positioned along the span of the blade at its quarter-chord-line. The shroud was modeled by a number of discrete vortex rings. Due to the mutual dependence of the shroud and propeller vortex strengths and the propeller vortex wake, an iterative scheme was employed. Three shroud configurations were considered: a cylindrical and two conical shrouds. The computed performance of the shrouded propeller was compared with that of a free propeller of identical propeller geometry. The numerical results indicated that the cylindrical shroud outperformed the conical shroud configurations for the cases considered. Furthermore, when compared to the free propeller performance, the cylindrical shroud showed a considerable performance enhancement over the free propeller. However, the improvements were found to decrease with an increase in the advance ratio and to virtually diminish at advance ratios of about 2.5.

  9. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
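    A single random-rotation pass of the divide-and-conquer idea can be sketched in 2-D. The real algorithm runs T iterations with a tree decomposition and a final graph search; this illustration (instance, window size, and seed are ours) only shows why sorting along a randomly rotated axis yields good neighbor candidates.

```python
# One random-rotation pass for approximate k-NN in 2-D (illustrative only).
import math, random

random.seed(3)
N, k = 200, 3
pts = [(random.random(), random.random()) for _ in range(N)]

def dist2(p, q):
    return (p[0]-q[0])**2 + (p[1]-q[1])**2

def brute_knn(i):
    """Exact k nearest neighbors, for measuring recall."""
    order = sorted(range(N), key=lambda j: dist2(pts[i], pts[j]))
    return set(order[1:k+1])            # skip the point itself

theta = random.uniform(0, 2*math.pi)    # random rotation
c, s = math.cos(theta), math.sin(theta)
rot = [(c*x - s*y, s*x + c*y) for x, y in pts]

# divide: sort along the rotated first axis; conquer: compare each point
# only against a small window of rank-neighbors (the candidate set)
rank = sorted(range(N), key=lambda j: rot[j][0])
pos = {j: r for r, j in enumerate(rank)}
window = 10

def approx_knn(i):
    r = pos[i]
    cand = [rank[t] for t in range(max(0, r-window), min(N, r+window+1))
            if rank[t] != i]
    cand.sort(key=lambda j: dist2(pts[i], pts[j]))
    return set(cand[:k])

recall = sum(len(approx_knn(i) & brute_knn(i)) for i in range(N)) / (N * k)
```

    Each pass is cheap (a sort plus O(N·window) distance checks); repeating with fresh rotations and merging candidates is what drives recall toward 1 in the full scheme.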

  10. SEOM's Sentinel-3/OLCI' project CAWA: advanced GRASP aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Dubovik, Oleg; Litvinov, Pavel; Huang, Xin; Aspetsberger, Michael; Fuertes, David; Brockmann, Carsten; Fischer, Jürgen; Bojkov, Bojan

    2016-04-01

    The CAWA "Advanced Clouds, Aerosols and WAter vapour products for Sentinel-3/OLCI" ESA-SEOM project aims at the development of advanced atmospheric retrieval algorithms for the Sentinel-3/OLCI mission and is prepared using Envisat/MERIS and Aqua/MODIS datasets. This presentation discusses mainly the CAWA aerosol product developments and results. The CAWA aerosol retrieval uses the recently developed GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm described by Dubovik et al. (2014). GRASP derives an extended set of atmospheric parameters using a multi-pixel concept: a simultaneous fitting of a large group of pixels under additional a priori constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. Over land, GRASP simultaneously retrieves the properties of both the aerosol and the underlying surface, even over bright surfaces. GRASP does not use traditional look-up tables and performs the retrieval as a search in a continuous solution space. All radiative transfer calculations are performed as part of the retrieval. The results of comprehensive sensitivity tests, as well as results obtained from real Envisat/MERIS data, will be presented. The tests analyze various aspects of aerosol and surface reflectance retrieval accuracy. In addition, the possibilities of improving the retrieval by means of a synergetic inversion of OLCI data combined with SLSTR observations are explored. Both the numerical tests and the processing of several years of Envisat/MERIS data demonstrate reliable retrieval of AOD (Aerosol Optical Depth) and surface BRDF. Observed retrieval issues and advancements will be discussed. For example, for some situations we illustrate the possibility of retrieving aerosol absorption, a property that is hardly accessible from satellite observations lacking multi-angular and polarimetric capabilities.

  11. Parallel projected variable metric algorithms for unconstrained optimization

    NASA Technical Reports Server (NTRS)

    Freeman, T. L.

    1989-01-01

    The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and the possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, researchers can generalize Straeter's algorithm to a family of parallel projected variable metric algorithms which do not suffer the above drawbacks and which retain quadratic termination. Finally, researchers consider the numerical performance of one member of the family on several standard example problems and illustrate how the choice of the displacement vectors affects the performance of the algorithm.

  12. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
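    Since the paper's single unified mutation equation is not reproduced in the abstract, the sketch below shows only the conventional DE/rand/1/bin baseline it generalizes, minimizing the sphere function with parameters of our choosing.

```python
# Conventional differential evolution (DE/rand/1/bin) on the sphere
# function; a baseline sketch, not the uDE algorithm of the paper.
import random

random.seed(0)
dim, NP, F, CR, gens = 5, 20, 0.5, 0.9, 200   # dimension, pop size, params

def sphere(x):
    return sum(v*v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(NP)]
for _ in range(gens):
    for i in range(NP):
        # mutation: v = x_a + F * (x_b - x_c), with a, b, c distinct from i
        a, b, c = random.sample([j for j in range(NP) if j != i], 3)
        mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
        # binomial crossover: at least one component (jrand) from the mutant
        jrand = random.randrange(dim)
        trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
        # greedy selection
        if sphere(trial) <= sphere(pop[i]):
            pop[i] = trial
best = min(pop, key=sphere)
```

    The uDE idea is to replace the choice among such mutation strategies with one parameterized equation; the crossover and selection steps above stay the same.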

  13. Intra-and-Inter Species Biomass Prediction in a Plantation Forest: Testing the Utility of High Spatial Resolution Spaceborne Multispectral RapidEye Sensor and Advanced Machine Learning Algorithms

    PubMed Central

    Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad

    2014-01-01

    The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra-and-inter species biomass which are mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees to predict intra-and-inter species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra-and-inter species biomass prediction; using all the predictor variables as well as based on the most important selected variables. For example using the most important variables the algorithm produced an R2 of 0.80 and RMSE of 16.93 t·ha−1 for E. grandis; R2 of 0.79, RMSE of 17.27 t·ha−1 for P. taeda and R2 of 0.61, RMSE of 43.39 t·ha−1 for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R2 of 0.79; RMSE of 7.18 t·ha−1). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results as compared to SGB when applied to combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631

  14. Advanced Triangulation Displacement Sensors

    NASA Technical Reports Server (NTRS)

    Poteet, Wade M.; Cauthen, Harold K.

    1996-01-01

    Advanced optoelectronic triangulation displacement sensors undergoing development. Highly miniaturized, more stable, more accurate, and relatively easy to use. Incorporate wideband electronic circuits suitable for real-time monitoring and control of displacements. Measurements expected to be accurate to within nanometers. In principle, sensors mass-produced at relatively low unit cost. Potential applications numerous. Possible industrial application in measuring runout of rotating shaft or other moving part during fabrication in "zero-defect" manufacturing system, in which measured runout automatically corrected.

  15. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids, and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  16. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  17. Advances in speech processing

    NASA Astrophysics Data System (ADS)

    Ince, A. Nejat

    1992-10-01

    The field of speech processing is undergoing a rapid growth in terms of both performance and applications and this is fueled by the advances being made in the areas of microelectronics, computation, and algorithm design. The use of voice for civil and military communications is discussed considering advantages and disadvantages including the effects of environmental factors such as acoustic and electrical noise and interference and propagation. The structure of the existing NATO communications network and the evolving Integrated Services Digital Network (ISDN) concept are briefly reviewed to show how they meet the present and future requirements. The paper then deals with the fundamental subject of speech coding and compression. Recent advances in techniques and algorithms for speech coding now permit high quality voice reproduction at remarkably low bit rates. The subject of speech synthesis is next treated where the principle objective is to produce natural quality synthetic speech from unrestricted text input. Speech recognition where the ultimate objective is to produce a machine which would understand conversational speech with unrestricted vocabulary, from essentially any talker, is discussed. Algorithms for speech recognition can be characterized broadly as pattern recognition approaches and acoustic phonetic approaches. To date, the greatest degree of success in speech recognition has been obtained using pattern recognition paradigms. It is for this reason that the paper is concerned primarily with this technique.

  18. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy according to the DNA computing algorithm.
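    The QPSO component named above can be sketched in isolation; the toy objective, swarm size, and contraction-expansion coefficient below are our illustrative choices, and the coupling to DNA computing parameters is not reproduced here.

```python
# Core QPSO (quantum-behaved PSO) update on a toy 2-D sphere function.
import math, random

random.seed(2)
dim, NP, iters, beta = 2, 15, 100, 0.75   # beta: contraction-expansion coeff.

def f(x):
    return sum(v*v for v in x)

swarm = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(NP)]
pbest = [s[:] for s in swarm]             # personal bests
gbest = min(pbest, key=f)[:]              # global best

for _ in range(iters):
    # mean-best position over all personal bests
    mbest = [sum(p[d] for p in pbest) / NP for d in range(dim)]
    for i in range(NP):
        for d in range(dim):
            phi = random.random()
            p = phi * pbest[i][d] + (1 - phi) * gbest[d]   # local attractor
            u = 1.0 - random.random()                      # u in (0, 1]
            L = beta * abs(mbest[d] - swarm[i][d])
            sign = 1 if random.random() < 0.5 else -1
            swarm[i][d] = p + sign * L * math.log(1.0 / u)
        if f(swarm[i]) < f(pbest[i]):
            pbest[i] = swarm[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
```

    Unlike classical PSO there are no velocities; each coordinate is sampled around an attractor with a spread tied to the distance from the mean-best position, which is what the abstract's "goal-driven" tuning exploits.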

  19. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if any occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for the acquisition of signals. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
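    The core idea of a moving average that is suspended around each peak can be sketched as follows; the threshold, window sizes, and toy signal are our illustrative choices, not the published PRASMMA parameters.

```python
# Peak-preserving moving average in the spirit of the abstract's idea:
# smooth everywhere except inside a guard window around detected peaks.

def smooth_except_peaks(sig, win=3, guard=2, peak_thresh=5.0):
    n = len(sig)
    # crude local-maximum peak detector (stand-in for QRS detection)
    peaks = [i for i in range(1, n - 1)
             if sig[i] > peak_thresh and sig[i] >= sig[i-1] and sig[i] >= sig[i+1]]
    protected = set()
    for p in peaks:
        protected.update(range(max(0, p - guard), min(n, p + guard + 1)))
    out = []
    for i in range(n):
        if i in protected:
            out.append(sig[i])                 # pass QRS-like peaks through
        else:
            lo, hi = max(0, i - win), min(n, i + win + 1)
            out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

# toy signal: alternating baseline noise with one sharp "QRS" spike at i=10
signal = [0.1 if i % 2 else -0.1 for i in range(20)]
signal[10] = 10.0
filtered = smooth_except_peaks(signal)
```

    The baseline noise averages out while the spike survives unattenuated, which is the property that distinguishes this from a plain moving average applied uniformly.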

  20. Development of comprehensive numerical schemes for predicting evaporating gas-droplets flow processes of a liquid-fueled combustor

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1990-01-01

    An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers' equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculations. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data for different grid sizes.

  1. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to solve efficiently multi-dimensional compressible reactive flow problems. This work demonstrates the great potential for the method to perform direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massive parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully-resolved simulations of challenging three dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.

  2. A new SPECT reconstruction algorithm based on the Novikov explicit inversion formula

    NASA Astrophysics Data System (ADS)

    Kunyansky, Leonid A.

    2001-04-01

    We present a new reconstruction algorithm for single-photon emission computed tomography. The algorithm is based on the Novikov explicit inversion formula for the attenuated Radon transform with non-uniform attenuation. Our reconstruction technique can be viewed as a generalization of both the filtered backprojection algorithm and the Tretiak-Metz algorithm. We test the performance of the present algorithm in a variety of numerical experiments. Our numerical examples show that the algorithm is capable of accurate image reconstruction even in the case of strongly non-uniform attenuation coefficient, similar to that occurring in a human thorax.

  3. Numerical linear algebra in data mining

    NASA Astrophysics Data System (ADS)

    Eldén, Lars

    Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
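
    The rank-reduction idea at the center of this overview can be shown in a few lines. The sketch below uses a tiny, made-up term-document matrix; by the Eckart-Young theorem, the truncated SVD gives the best low-rank approximation in both the spectral and Frobenius norms.

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A (Eckart-Young) via the
    truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

# Toy term-document matrix: rows = terms, columns = documents.
A = np.array([[3., 0., 1., 0.],
              [2., 0., 0., 1.],
              [0., 4., 0., 3.],
              [0., 3., 1., 2.]])
A2 = low_rank_approx(A, 2)   # rank-2 "concept space" for retrieval
```

    In latent semantic indexing, queries are then compared against the columns of the low-rank matrix rather than the raw counts.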

  4. Numerical linear algebra for reconstruction inverse problems

    NASA Astrophysics Data System (ADS)

    Nachaoui, Abdeljalil

    2004-01-01

    Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra which are relevant for the solution of this class of inverse problems. We motivate the use of our reconstruction algorithm, discuss its implementation and mention the use of preconditioned Krylov methods.
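
    For reference, a preconditioned Krylov solver of the kind mentioned above can be sketched compactly. The example below is generic preconditioned conjugate gradient for a symmetric positive-definite system with a Jacobi (diagonal) preconditioner, not the authors' BIE solver.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD systems.
    M_inv is a callable applying the preconditioner to a residual."""
    x = np.zeros(len(b))
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    A good preconditioner is what makes such iterations practical for the dense, ill-conditioned systems that arise from discretized integral equations.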

  5. Numerical simulation of in situ bioremediation

    SciTech Connect

    Travis, B.J.

    1998-12-31

    Models that couple subsurface flow and transport with microbial processes are an important tool for assessing the effectiveness of bioremediation in field applications. A numerical algorithm is described that differs from previous in situ bioremediation models in that it includes both vadose and groundwater zones, unsteady air and water flow, limited nutrients and airborne nutrients, toxicity, cometabolic kinetics, kinetic sorption, subgrid-scale averaging, pore clogging and protozoan grazing.

  6. Numerical simulation of droplet impact on interfaces

    NASA Astrophysics Data System (ADS)

    Kahouadji, Lyes; Che, Zhizhao; Matar, Omar; Shin, Seungwon; Chergui, Jalel; Juric, Damir

    2015-11-01

    Simulations of three-dimensional droplet impact on interfaces are carried out using BLUE, a massively-parallel code based on a hybrid Front-Tracking/Level-Set algorithm for Lagrangian tracking of arbitrarily deformable phase interfaces. High resolution numerical results show fine details and features of droplet ejection, crown formation and rim instability observed under similar experimental conditions. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  7. Numerical Simulations of Ion Cloud Dynamics

    NASA Astrophysics Data System (ADS)

    Sillitoe, Nicolas; Hilico, Laurent

    We explain how to perform accurate numerical simulations of ion cloud dynamics by discussing the relevant orders of magnitude of the characteristic times and frequencies involved in the problem and the computational requirements with respect to the ion cloud size. We then discuss integration algorithms and Coulomb-force parallelization. We finally explain how to take into account collisions, cooling-laser interaction and chemical reactions in a Monte Carlo approach, and discuss how to use random number generators to that end.
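
    The core of such a simulation is a symplectic integrator driven by the pairwise Coulomb force. The sketch below, in dimensionless units with the Coulomb constant and charges set to one (and without the trap potential, collisions, or laser terms discussed above), shows a velocity-Verlet step over a vectorized all-pairs force evaluation.

```python
import numpy as np

def coulomb_forces(pos, q=1.0):
    """Pairwise Coulomb forces (units with k_e = 1) on N ions at pos (N, 3)."""
    diff = pos[:, None, :] - pos[None, :, :]          # r_i - r_j, shape (N, N, 3)
    r2 = (diff ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                      # exclude self-interaction
    return (q * q * diff / r2[..., None] ** 1.5).sum(axis=1)

def velocity_verlet(pos, vel, dt, steps, mass=1.0):
    """Symplectic velocity-Verlet integration, a standard choice for
    long-time charged-particle dynamics."""
    f = coulomb_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass    # half kick
        pos += dt * vel               # drift
        f = coulomb_forces(pos)
        vel += 0.5 * dt * f / mass    # half kick
    return pos, vel
```

    The O(N^2) force loop is the part that the paper's Coulomb-force parallelization targets.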

  8. Numerical Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Oyafuso, Fabiano; Bowen, R. Chris; Boykin, Timothy

    2003-01-01

    Nanoelectronic Modeling 3-D (NEMO 3-D) is a computer program for numerical modeling of the electronic structure properties of a semiconductor device that is embodied in a crystal containing as many as 16 million atoms in an arbitrary configuration and that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. The system of electrons in the device is represented by a sparse Hamiltonian matrix that contains hundreds of millions of terms. NEMO 3-D solves the matrix equation on a Beowulf-class cluster computer, by use of a parallel-processing matrix-vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. In a recent update of NEMO 3-D, a new strain treatment, parameterized for bulk material properties of GaAs and InAs, was developed for two tight-binding submodels. The utility of NEMO 3-D was demonstrated in an atomistic analysis of the effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In0.6Ga0.4As quantum dots.
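
    The Lanczos approach used here needs only a matrix-vector product, which is why it scales to Hamiltonians far too large to store densely. The sketch below is a textbook Lanczos iteration (without the reorthogonalization a production code would add), taking the operator as a callback:

```python
import numpy as np

def lanczos_extreme(A_mv, n, m=30, seed=0):
    """Plain m-step Lanczos iteration: builds a tridiagonal matrix whose
    extreme eigenvalues approximate those of the symmetric operator.
    A_mv is a matrix-vector product callback, as used for Hamiltonians
    too large to form explicitly."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = [q]
    alpha, beta = [], []
    for j in range(m):
        w = A_mv(Q[-1])
        a = Q[-1] @ w
        alpha.append(a)
        w = w - a * Q[-1]
        if j > 0:
            w = w - beta[-1] * Q[-2]     # three-term recurrence
        b = np.linalg.norm(w)
        if b < 1e-12:                    # invariant subspace found
            break
        beta.append(b)
        Q.append(w / b)
    k = len(alpha)
    T = (np.diag(alpha) + np.diag(beta[:k - 1], 1)
         + np.diag(beta[:k - 1], -1))
    return np.linalg.eigvalsh(T)         # Ritz values
```

    The extreme Ritz values converge first, which matches the physical interest in band-edge states.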

  9. Soil moisture and temperature algorithms and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  10. OpenAD : algorithm implementation user guide.

    SciTech Connect

    Utke, J.

    2004-05-13

    Research in automatic differentiation has led to a number of tools that implement various approaches and algorithms for the most important programming languages. While all these tools have the same mathematical underpinnings, the actual implementations have little in common and mostly are specialized for a particular programming language, compiler internal representation, or purpose. This specialization does not promote an open test bed for experimentation with new algorithms that arise from exploiting structural properties of numerical codes in a source transformation context. OpenAD is being designed to fill this need by providing a framework that allows for relative ease in the implementation of algorithms that operate on a representation of the numerical kernel of a program. Language independence is achieved by using an intermediate XML format and the abstraction of common compiler analyses in Open-Analysis. The intermediate format is mapped to concrete programming languages via two front/back end combinations. The design allows for reuse and combination of already implemented algorithms. We describe the set of algorithms and basic functionality currently implemented in OpenAD and explain the necessary steps to add a new algorithm to the framework.

  11. Long-Time Numerical Integration of the Three-Dimensional Wave Equation in the Vicinity of a Moving Source

    NASA Technical Reports Server (NTRS)

    Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.

    1999-01-01

    We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., a sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
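
    The "standard consistent and stable explicit finite-difference scheme" the algorithms build on is the classical leapfrog scheme. The sketch below shows its 1-D version (the lacunae argument itself requires odd space dimension, so this is only an illustration of the underlying scheme), stable for a CFL number at most 1:

```python
import numpy as np

def wave_leapfrog(nx=201, steps=300, c=1.0, courant=0.9):
    """Explicit second-order (leapfrog) scheme for u_tt = c^2 u_xx
    on [0, 1] with homogeneous Dirichlet ends and a compact pulse."""
    dx = 1.0 / (nx - 1)
    dt = courant * dx / c
    x = np.linspace(0.0, 1.0, nx)
    u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # compactly supported pulse
    u = u_prev.copy()                          # first-order zero-velocity start
    lam2 = (c * dt / dx) ** 2
    for _ in range(steps):
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return x, u
```

    In three dimensions the same stencil, combined with the lacunae of the fundamental solution, is what permits integration over arbitrarily long times without error accumulation.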

  12. Simple algorithm for computing the geometric measure of entanglement

    SciTech Connect

    Streltsov, Alexander; Kampermann, Hermann; Bruss, Dagmar

    2011-08-15

    We present an easily implementable algorithm for approximating the geometric measure of entanglement from above. The algorithm can be applied to any multipartite mixed state. It involves only the solution of an eigenproblem and finding a singular value decomposition; no further numerical techniques are needed. To provide examples, the algorithm was applied to the isotropic states of three qubits and the three-qubit XX model with external magnetic field.
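
    The role the singular value decomposition plays here is easiest to see in the simplest special case: for a *pure bipartite* state, the geometric measure reduces to one minus the square of the largest Schmidt coefficient, which a single SVD delivers. The sketch below covers only that special case, not the general multipartite mixed-state algorithm of the paper.

```python
import numpy as np

def geometric_measure_pure_bipartite(psi, dims):
    """Geometric measure of entanglement of a pure bipartite state:
    G = 1 - (largest Schmidt coefficient)^2, via one SVD."""
    M = np.asarray(psi).reshape(dims)        # state vector as a matrix
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    return 1.0 - s[0] ** 2

# Two-qubit Bell state (|00> + |11>)/sqrt(2): G = 1 - 1/2 = 0.5
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
```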

  13. Guidance algorithms for a free-flying space robot

    NASA Technical Reports Server (NTRS)

    Brindle, A. F.; Viggh, H. E. M.; Albert, J. H.

    1989-01-01

    Robotics is a promising technology for assembly, servicing, and maintenance of platforms in space. Several aspects of planning and guidance for telesupervised and fully autonomous robotic servicers are investigated. Guidance algorithms for proximity operation of a free flyer are described. Numerical trajectory optimization is combined with artificial-intelligence-based obstacle avoidance. An initial algorithm and the results of simulating a platform-servicing scenario with it are discussed. A second algorithm experiment is then proposed.

  14. Cumulative Reconstructor: fast wavefront reconstruction algorithm for Extremely Large Telescopes.

    PubMed

    Rosensteiner, Matthias

    2011-10-01

    The Cumulative Reconstructor (CuRe) is a new direct reconstructor for an optical wavefront from Shack-Hartmann wavefront sensor measurements. In this paper, the algorithm is adapted to realistic telescope geometries and the transition from modified Hudgin to Fried geometry is discussed. After a discussion of the noise propagation, we analyze the complexity of the algorithm. Our numerical tests confirm that the algorithm is very fast and accurate and can therefore be used for adaptive optics systems of Extremely Large Telescopes.
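
    What makes CuRe fast is that it replaces a global least-squares solve with running summation of the measured slopes. The 1-D sketch below is only an illustration of that cumulative idea (the real algorithm works on 2-D Shack-Hartmann grids with line-wise averaging and noise handling):

```python
import numpy as np

def cure_1d(slopes, dx=1.0):
    """1-D cumulative reconstruction: integrate the measured slopes by
    running summation, then remove the mean, since the piston mode is
    unobservable.  Cost is O(n), versus O(n^2)-or-worse matrix methods."""
    phase = np.concatenate(([0.0], np.cumsum(slopes) * dx))
    return phase - phase.mean()
```

    This linear cost per reconstruction is the property that makes the method attractive at the subaperture counts of Extremely Large Telescopes.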

  15. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
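
    A linear systolic array for convolution, of the kind whose correctness is proved in the dissertation, can be simulated cycle-accurately in a few lines. The sketch below is an illustrative model, not one of the verified designs: each processing element holds one weight, the input sample is broadcast each tick, and partial sums march one PE per clock tick.

```python
def systolic_correlate(x, w):
    """Cycle-accurate simulation of a linear systolic array computing
    the 'valid' cross-correlation y[i] = sum_k w[k] * x[i+k]."""
    n_pe = len(w)
    acc = [0.0] * n_pe                    # partial sum resident in each PE
    out = []
    for xt in list(x) + [0.0] * n_pe:     # extra zero ticks flush the pipe
        out.append(acc[-1])               # sum leaving the last PE
        acc = [0.0] + acc[:-1]            # partial sums advance one PE
        for k in range(n_pe):             # every PE does one MAC per tick
            acc[k] += w[k] * xt
    return out[n_pe:len(x) + 1]           # discard pipeline fill/drain
```

    Correctness proofs of such designs amount to showing that this per-tick recurrence computes the intended recurrence equations, which is exactly the representation the dissertation's methodology works on.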

  16. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  17. Numerical Stability and Accuracy of Temporally Coupled Multi-Physics Modules in Wind-Turbine CAE Tools

    SciTech Connect

    Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.

    2013-02-01

    In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
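
    The accuracy penalty of explicit loose coupling can be demonstrated on an even simpler test system than those in the paper. The sketch below (my own toy example, not one of the paper's test systems) splits the harmonic oscillator x' = y, y' = -x into two "partitions": the monolithic system is advanced with RK4, while in the loosely coupled version each partition advances holding the other's state frozen at the old time level, which degrades the coupled update to first order.

```python
import numpy as np

def monolithic_rk4(z0, dt, steps):
    """Fourth-order Runge-Kutta on the full coupled system x' = y, y' = -x."""
    def f(z):
        x, y = z
        return np.array([y, -x])
    z = np.array(z0, float)
    for _ in range(steps):
        k1 = f(z); k2 = f(z + dt / 2 * k1)
        k3 = f(z + dt / 2 * k2); k4 = f(z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

def loose_explicit(z0, dt, steps):
    """Explicit loose coupling: each partition sees the other's state
    frozen over the step (RK4 of x' = const collapses to Euler)."""
    x, y = float(z0[0]), float(z0[1])
    for _ in range(steps):
        x_old = x
        x = x + dt * y          # partition 1, y frozen
        y = y + dt * (-x_old)   # partition 2, x frozen at old value
    return np.array([x, y])
```

    Comparing both against the exact solution x = cos t, y = -sin t shows the coupling error dominating the fourth-order integrator error, the same qualitative conclusion the paper reaches.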

  18. Advance care directives

    MedlinePlus

    ... advance directive; Do-not-resuscitate - advance directive; Durable power of attorney - advance care directive; POA - advance care directive; Health care agent - advance care directive; Health care proxy - ...

  19. Quality control algorithms for rainfall measurements

    NASA Astrophysics Data System (ADS)

    Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs

    2005-09-01

    One of the basic requirements for a scientific use of rain data from raingauges and from ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and literature pool have been produced. The diverse algorithms have been evaluated against the VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established, where the software is collected. A large part of the work presented here was implemented within the scope of the EU project VOLTAIRE (Validation of multisensor precipitation fields and numerical modeling in Mediterranean test sites).
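
    Two of the most elementary checks collected in such algorithm pools are a physical range check and a spike (step) check against the previous value. The sketch below is a generic illustration; the threshold values are invented for the example and are not taken from VOLTAIRE.

```python
def quality_flags(series, max_rate=300.0, max_jump=50.0):
    """Flag rainfall-rate values (mm/h) failing a range check
    (0 <= v <= max_rate) or a spike check against the previous value."""
    flags = []
    prev = None
    for v in series:
        bad = (v < 0.0 or v > max_rate or
               (prev is not None and abs(v - prev) > max_jump))
        flags.append("suspect" if bad else "ok")
        prev = v
    return flags
```

    Real quality-control chains add cross-sensor consistency checks (gauge vs. radar) on top of such single-series tests.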

  20. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector-streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
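
    The rank-one metric corrections at the heart of the cycle can be illustrated with the classical symmetric rank-one (SR1) update, shown below as a generic sketch rather than the paper's exact update formula. For a quadratic with Hessian A, n updates along linearly independent steps recover A^{-1} exactly, mirroring the one-cycle convergence result when p equals the dimension.

```python
import numpy as np

def sr1_update(H, s, y, tol=1e-8):
    """Symmetric rank-one update of an inverse-Hessian estimate H from a
    step s and the corresponding gradient change y, enforcing H y = s."""
    r = s - H @ y
    denom = r @ y
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(y):
        return H                     # skip ill-conditioned updates
    return H + np.outer(r, r) / denom
```

    In the parallel algorithm, p such corrections are applied per cycle using the p gradient evaluations computed concurrently.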