Science.gov

Sample records for advanced numerical algorithms

  1. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    SciTech Connect

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties, and the modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values, and the balanced-force algorithm for surface tension are highlighted.
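
    As an illustration of why flux-form volume tracking conserves mass strictly, here is a minimal 1D sketch of donor-cell volume-fraction transport (illustrative setup assumed: uniform periodic grid, constant velocity; real volume tracking codes geometrically reconstruct the interface, e.g. via the power-diagram method above, before fluxing):

```python
import numpy as np

# Minimal 1D flux-form volume-fraction transport (illustrative sketch,
# not the paper's multimaterial scheme): uniform periodic grid, u > 0.
nx = 200
u, cfl = 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / u

f = np.zeros(nx)            # material volume fraction per cell
f[40:80] = 1.0              # initial slab of the tracked fluid

mass0 = f.sum() * dx
for _ in range(200):
    out = u * dt * f        # donor-cell volume fluxed through each right face
    f += (np.roll(out, 1) - out) / dx   # inflow from the left minus outflow

# the flux form conserves the tracked volume to round-off
assert abs(f.sum() * dx - mass0) < 1e-10
```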

  2. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    DOE PAGES Beta

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties, and the modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values, and the balanced-force algorithm for surface tension are highlighted.

  3. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and was in need of performance improvements. To compound problems with the existing system, assurance of the quality of the data-matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as 'C'; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.

  4. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    SciTech Connect

    E.A. Belli; G.W. Hammett

    2004-12-17

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.
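
    The gyrokinetic splitting itself is specialized, but the ADI idea the abstract refers to is easy to see on the 2D heat equation, where the Peaceman-Rachford splitting is unconditionally stable. A minimal sketch on that generic model problem (not the gyrokinetic system):

```python
import numpy as np
from scipy.linalg import solve_banded

# Peaceman-Rachford ADI for u_t = u_xx + u_yy with zero Dirichlet walls.
n = 64
h = 1.0 / (n + 1)
dt = 4 * h                       # far above the explicit limit dt ~ h^2/4
r = dt / (2 * h * h)

# banded storage of the tridiagonal (I + r*T), T = tridiag(-1, 2, -1)
ab = np.zeros((3, n))
ab[0, 1:] = -r                   # superdiagonal
ab[1, :] = 1 + 2 * r             # main diagonal
ab[2, :-1] = -r                  # subdiagonal

def i_minus_rt(v):
    """Apply (I - r*T) along axis 0 with zero Dirichlet boundaries."""
    w = (1 - 2 * r) * v
    w[:-1] += r * v[1:]
    w[1:] += r * v[:-1]
    return w

x = np.linspace(h, 1 - h, n)
u = np.sin(np.pi * x)[:, None] * np.sin(np.pi * x)[None, :]  # decaying mode

t = 0.0
for _ in range(5):
    u = solve_banded((1, 1), ab, i_minus_rt(u.T).T)   # implicit in x, explicit in y
    u = solve_banded((1, 1), ab, i_minus_rt(u).T).T   # implicit in y, explicit in x
    t += dt
print(u.max(), np.exp(-2 * np.pi**2 * t))   # numerical vs exact decay factor
```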

  5. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form ẋ(t) = F(x(t)), x(0) = p ∈ G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they have the property that if G is the abelian group R^N, then the algorithm reduces to the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth-order numerical integrator, and to analyze the resulting algorithms.
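
    A hedged sketch of the "stays on the group" property for matrix groups: if dX/dt = A(X)X with A(X) in the Lie algebra, the update X <- expm(h A(X)) X multiplies group elements and so never leaves the group. Here G = SO(3) and A is skew-symmetric; the paper's higher-order tree coefficients c_i, c_ij are not reproduced, this is only the first-order Lie-Euler step:

```python
import numpy as np
from scipy.linalg import expm

def a_field(x):
    """Illustrative state-dependent element of so(3) (skew-symmetric)."""
    w = np.array([0.3, -0.5, 1.0 + x[0, 0]])
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

x = np.eye(3)
h = 0.01
for _ in range(1000):
    x = expm(h * a_field(x)) @ x    # group element times group element

# orthogonality (group membership) is preserved to round-off
print(np.abs(x.T @ x - np.eye(3)).max())
```

    For the abelian group R^N the exponential update degenerates to ordinary addition of increments, consistent with the classical Runge-Kutta limit mentioned in the abstract.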

  6. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of two-dimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the two-dimensional torus.

  7. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework for bounding the probability that accumulated errors in numerical algorithms ever exceed a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve for a long time. For the first two applications we compute the number of bits that remain continuously significant, with a probability of failure around one in a billion, whereas worst-case analysis concludes that no significant bit remains. We use PVS, as such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.
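
    The flavor of the Markov-type bounds can be reproduced numerically: Markov's inequality gives P(|E| >= t) <= E[|E|]/t for the accumulated rounding error E of a floating-point sum. A toy empirical check (this only emulates the setting; the report's contribution is the formal PVS proof, not simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 1000, 100_000
errs = np.empty(trials)
for k in range(trials):
    x = rng.random(n).astype(np.float32)
    # float32 summation error relative to a float64 reference
    errs[k] = abs(float(np.sum(x, dtype=np.float32)) - np.sum(x, dtype=np.float64))

t = 5 * errs.mean()
print("Markov bound on P(|E| >= t):", errs.mean() / t)    # = 0.2
print("empirical frequency        :", np.mean(errs >= t)) # typically far smaller
```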

  8. Advanced incomplete factorization algorithms for Stieltjes matrices

    SciTech Connect

    Il'in, V.P.

    1996-12-31

    The modern numerical methods for solving linear algebraic systems Au = f with high-order sparse matrices A, which arise in grid approximations of multidimensional boundary value problems, are based mainly on accelerated iterative processes with easily invertible preconditioning matrices presented in the form of approximate (incomplete) factorizations of the original matrix A. We consider some recent algorithmic approaches, theoretical foundations, experimental data, and open questions for incomplete factorization of Stieltjes matrices, which are "the best" ones in the sense that they have the most advanced results. Special attention is given to solving elliptic differential equations with strongly variable coefficients, singularly perturbed diffusion-convection equations, and parabolic equations.

  9. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  10. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.
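
    For reference, the "standard economic dispatch algorithm" such a simulator would compare against is typically an equal-incremental-cost (lambda iteration) dispatch. A self-contained sketch with made-up unit data (not from the paper):

```python
import numpy as np

# Quadratic cost units: C_i(P) = a_i + b_i P + c_i P^2, so marginal cost is
# b_i + 2 c_i P; dispatch equalizes marginal cost subject to unit limits.
b = np.array([7.00, 7.85, 7.97])            # $/MWh
c = np.array([0.00156, 0.00194, 0.00482])
pmin = np.array([100.0, 100.0, 50.0])
pmax = np.array([600.0, 400.0, 200.0])
demand = 850.0

def dispatch(lam):
    p = (lam - b) / (2 * c)                 # run each unit where marginal cost = lam
    return np.clip(p, pmin, pmax)

lo, hi = 7.0, 12.0                          # bracket on the system marginal cost
for _ in range(60):                         # bisection until generation matches load
    lam = 0.5 * (lo + hi)
    lo, hi = (lam, hi) if dispatch(lam).sum() < demand else (lo, lam)

print(dispatch(lam), dispatch(lam).sum())   # unit outputs summing to ~demand
```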

  11. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  12. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  13. NAS Applications and Advanced Algorithms

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Biswas, Rupak; VanDerWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.

  14. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computation algorithms as well as high-quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much-needed research in numerical boundary conditions for CAA.

  15. Advanced spectral signature discrimination algorithm

    NASA Astrophysics Data System (ADS)

    Chakravarty, Sumit; Cao, Wenjie; Samat, Alim

    2013-05-01

    This paper presents a novel approach to the task of hyperspectral signature analysis. Hyperspectral signature analysis has been studied extensively in the literature, and many different algorithms have been developed to discriminate between hyperspectral signatures. Binary coding approaches like SPAM and SFBC use basic statistical thresholding operations to binarize a signature, and the binarized signatures are then compared using the Hamming distance. This framework has been extended in techniques like SDFC, wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures like the mean. Such structures harness only local variations and do not exploit any covariation of spectrally distinct parts of the signature. The approach of this research is to harvest such information by using a technique similar to circular convolution. In this approach we treat the signature as cyclic by joining its two ends. We then create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock, and local structures are found at different circular shifts between the three cyclic spectral signatures. Texture features as in SDFC can be used to study the local structural variation at each circular shift. Different measures can then be created by building histograms over the shifts and applying different information-extraction techniques to the histograms. Depending on the technique used, different variants of the proposed algorithm are obtained. Experiments using the proposed technique show the viability of the proposed methods and their performance as compared to current binary signature coding techniques.
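
    The cyclic-shift idea is easy to prototype: treat the signature as circular and score how it co-varies with rotated copies of itself. A minimal sketch (np.roll stands in for the combination-lock shifts; the paper's SDFC-style texture features are not reproduced):

```python
import numpy as np

def circular_shift_profile(sig):
    """Correlation of a signature with each circular shift of itself."""
    s = (sig - sig.mean()) / (sig.std() + 1e-12)
    n = len(s)
    return np.array([np.dot(s, np.roll(s, k)) / n for k in range(n)])

rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 6 * np.pi, 128)) + 0.1 * rng.normal(size=128)
prof = circular_shift_profile(sig)
hist, _ = np.histogram(prof, bins=16)   # histogram over shifts feeds any
                                        # downstream information measure
```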

  16. Advanced algorithms for information science

    SciTech Connect

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society, the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects, and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding, which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer the potential to achieve better combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses a video stream in real time and also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  17. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1982-01-01

    Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented, and the theory is applied to second-order lambda matrices.

  18. Numerical Algorithm for Delta of Asian Option

    PubMed Central

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the Asian geometric option and use this analytical form as a control to numerically calculate the Δ of the Asian arithmetic option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with that of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271

  19. Numerical Algorithm for Delta of Asian Option.

    PubMed

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the Asian geometric option and use this analytical form as a control to numerically calculate the Δ of the Asian arithmetic option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with that of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
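
    A hedged sketch of the problem setting: the pathwise Monte Carlo estimator of the arithmetic Asian call Delta. The paper additionally uses the closed-form geometric Asian Delta as a control variate, which is omitted here; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
s0, strike, r, sigma, t, m, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 50_000
dt = t / m

z = rng.standard_normal((paths, m))
logs = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = s0 * np.exp(logs)                 # GBM paths at the m averaging dates
avg = s.mean(axis=1)                  # arithmetic average per path

# pathwise derivative: dA/dS0 = A/S0, since every S_t scales linearly with S0
delta = np.exp(-r * t) * (avg > strike) * (avg / s0)
print(delta.mean(), delta.std() / np.sqrt(paths))   # estimate and standard error
```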

  20. Carbon export algorithm advancements in models

    NASA Astrophysics Data System (ADS)

    Çağlar Yumruktepe, Veli; Salihoğlu, Barış

    2015-04-01

    The rate at which anthropogenic CO2 is absorbed by the oceans remains a critical question under investigation by climate researchers. Construction of a complete carbon budget requires better understanding of air-sea exchanges and of the processes controlling the vertical and horizontal transport of carbon in the ocean, particularly the biological carbon pump. Improved parameterization of carbon sequestration within ecosystem models is vital to better understand and predict changes in the global carbon cycle. Due to the complexity of the processes controlling particle aggregation, sinking, and decomposition, existing ecosystem models necessarily parameterize carbon sequestration using simple algorithms. Development of improved algorithms describing carbon export and sequestration, suitable for inclusion in numerical models, is ongoing work. Unique algorithms used in state-of-the-art ecosystem models, together with new experimental results obtained from mesocosm experiments and open-ocean observations, have been inserted into a common 1D pelagic ecosystem model for testing purposes. The model was implemented at the time-series stations in the North Atlantic (BATS, PAP and ESTOC) and evaluated against datasets of carbon export. The algorithms target plankton functional types, grazing and vertical movement of zooplankton, and the remineralization, aggregation, and ballasting dynamics of organic matter. Ultimately it is intended to feed the improved algorithms to the 3D modelling community for inclusion in coupled numerical models.

  1. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
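
    PSLQ is available in standard software, so the headline claim is easy to try. A small demo with mpmath's implementation, recovering the golden-ratio relation 1 + φ - φ² = 0:

```python
from mpmath import mp, pslq, phi

mp.dps = 50                      # working precision in decimal digits
print(pslq([1, phi, phi**2]))    # -> [1, 1, -1], i.e. 1 + phi - phi^2 = 0
```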

  2. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.
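
    The "sign algorithm" referred to above rests on the matrix sign function, which the classic Newton iteration S <- (S + S⁻¹)/2 computes. A minimal sketch (generic test matrix, not the report's structural models):

```python
import numpy as np

def matrix_sign(a, iters=100, tol=1e-12):
    """Newton iteration for sign(A); eigenvalues map to sign(Re(lambda))."""
    s = a.copy()
    for _ in range(iters):
        s_new = 0.5 * (s + np.linalg.inv(s))
        if np.linalg.norm(s_new - s, 1) <= tol * np.linalg.norm(s_new, 1):
            return s_new
        s = s_new
    return s

a = np.array([[-2.0, 1.0],
              [0.0, 3.0]])   # eigenvalues -2 and 3
print(matrix_sign(a))        # sign matrix separating the two half-plane spectra
```

    The decoupling use follows because (sign(A) + I)/2 projects onto the invariant subspace of the right-half-plane eigenvalues.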

  3. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate samples. Sobol's sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI/OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
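
    A current off-the-shelf equivalent of the sampling step, using scipy's scipy.stats.qmc scrambled Sobol generator (the MPI parallelization discussed in the report is omitted; the integrand is an arbitrary smooth test function with known integral):

```python
import numpy as np
from scipy.stats import qmc

d = 6
sampler = qmc.Sobol(d=d, scramble=True, seed=7)
pts = sampler.random_base2(m=14)        # 2^14 low-discrepancy points in [0,1)^d

# smooth test integrand with exact integral 1 over the unit cube
f = np.prod(1.0 + 0.5 * (pts - 0.5), axis=1)
print("QMC estimate:", f.mean())        # converges faster than plain Monte Carlo
```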

  4. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms in the area of optimization, alongside other search algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). However, the performance of the ABC local search process and of its bee movement (solution improvement) equation still has some weaknesses: the ABC is good at avoiding being trapped at a local optimum, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that HPABC can outperform ABC in most of the experiments (75% better in accuracy and over 3 times faster).

  5. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

    The brush seal is considered the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, limiting the unwanted leakage flows between stages or various engine cavities. This sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and the brush's compliance with rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, contributing to higher engine efficiency. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. Existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and it can most likely be solved only by means of a numerically distributed model.

  6. Brush seal numerical simulation: Concepts and advances

    NASA Astrophysics Data System (ADS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-07-01

    The brush seal is considered the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, limiting the unwanted leakage flows between stages or various engine cavities. This sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and the brush's compliance with rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, contributing to higher engine efficiency. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. Existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and it can most likely be solved only by means of a numerically distributed model.

  7. Advances in fracture algorithm development in GRIM

    NASA Astrophysics Data System (ADS)

    Cullis, I.; Church, P.; Greenwood, P.; Huntington-Thresher, W.; Reynolds, M.

    2003-09-01

    The numerical treatment of fracture processes has long been a major challenge in any hydrocode, but it has been particularly acute in Eulerian hydrocodes. This is due to the difficulties in establishing a consistent process for treating failure and the post-failure behavior, which is complicated by advection, mixed-cell, and interface issues, particularly after failure. This alone increases the complexity of incorporating and validating a failure model compared with a Lagrangian hydrocode, where the numerical treatment is much simpler. This paper outlines recent significant progress in the incorporation of fracture models in GRIM and in the advection of damage across cell boundaries within the mesh. This has allowed a much more robust treatment of fracture in an Eulerian frame of reference and has greatly expanded the scope of tractable dynamic fracture scenarios. The progress has been possible due to a careful integration of the fracture algorithm within the numerical integration scheme to maintain a consistent representation of the physics. The paper describes various applications that demonstrate the robustness and efficiency of the scheme and highlights some of the future challenges.

  8. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior can slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step-size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
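
    A hedged sketch of a cuckoo-search step with a decaying step size of the kind the abstract describes (the authors' exact adaptation rule is not given here; the linear schedule, Mantegna Lévy steps, and all constants are illustrative assumptions):

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(3)

def levy(dim, beta=1.5):
    """Mantegna's algorithm for symmetric Levy-stable step directions."""
    s = (gamma(1 + beta) * sin(pi * beta / 2)
         / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, s, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def sphere(x):
    return float(np.sum(x * x))

n, dim, tmax, pa = 25, 10, 2000, 0.25
nests = rng.uniform(-5, 5, (n, dim))
fit = np.array([sphere(v) for v in nests])
best = nests[fit.argmin()].copy()

for t in range(tmax):
    alpha = 1.0 - 0.999 * t / tmax        # adaptive step: large early, small late
    for i in range(n):
        trial = nests[i] + alpha * levy(dim) * (nests[i] - best)
        if sphere(trial) < fit[i]:
            nests[i], fit[i] = trial, sphere(trial)
    worst = fit.argsort()[-int(pa * n):]  # abandon a fraction pa of worst nests
    nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
    fit[worst] = np.array([sphere(v) for v in nests[worst]])
    best = nests[fit.argmin()].copy()     # elitism omitted for brevity

print(sphere(best))                       # near 0 on the sphere test function
```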

  9. Multiresolution representation and numerical algorithms: A brief review

    NASA Technical Reports Server (NTRS)

    Harten, Amiram

    1994-01-01

    In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
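
    The thresholding mechanism described here fits in a few lines with the Haar basis, the simplest multiresolution example (the paper's framework is basis-agnostic; signal and threshold are illustrative):

```python
import numpy as np

def haar_forward(x):
    """Orthonormal Haar decomposition of a length-2^k signal."""
    coeffs = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse averages
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # scale (detail) coefficients
        coeffs.append(d)
        x = a
    coeffs.append(x)
    return coeffs

def haar_inverse(coeffs):
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(d))
        out[0::2] = (x + d) / np.sqrt(2)
        out[1::2] = (x - d) / np.sqrt(2)
        x = out
    return x

t = np.linspace(0, 1, 512)
signal = np.exp(-40 * (t - 0.4) ** 2) + 0.2 * np.sin(8 * np.pi * t)
coeffs = haar_forward(signal)
kept = [np.where(np.abs(c) > 1e-3, c, 0.0) for c in coeffs]  # drop small scales
print("kept fraction:", sum((c != 0).sum() for c in kept) / len(signal))
print("max error    :", np.abs(haar_inverse(kept) - signal).max())
```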

  10. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  11. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO), together with two extensions for improving its performance, is presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which the bees use to adjust their flying trajectories. As the first extension, the BSO algorithm introduces different approaches, such as a repulsion factor and penalizing fitness (RP), to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing honey-bee-inspired algorithms on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this study.

  12. Experimentally constructing finite difference algorithms in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew; Neilsen, David; Matzner, Richard

    2002-04-01

    Computational studies of gravitational waves require numerical algorithms with long-term stability (necessary for convergence). However, constructing stable finite difference algorithms (FDAs) for the ADM formulation of the Einstein equations, especially in multiple dimensions, has proven difficult. Most FDAs are constructed using rules of thumb gained from experience with simple model equations. To search for FDAs with improved stability, we adopt a brute-force approach in which we systematically test thousands of numerical schemes. We sort the spatial derivatives of the Einstein equations into groups and parameterize each group by finite difference type (centered or upwind) and order. Furthermore, terms proportional to the constraints are added to the evolution equations with additional parameters. A spherically symmetric, excised Schwarzschild black hole (one dimension) and linearized waves in multiple dimensions are used as model systems to evaluate the different numerical schemes.

  13. Advanced numerics for multi-dimensional fluid flow calculations

    NASA Technical Reports Server (NTRS)

    Vanka, S. P.

    1984-01-01

    In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer, and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta, and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives and on the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  14. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-01

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided. PMID:26606614
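
    The idea can be emulated cheaply without the authors' compiler-level code injection: perturb the inputs at the level of machine epsilon, rerun, and read the stable digits off the statistical spread. A toy stand-in for their method, using a classically unstable variance formula as the subject:

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_digits(algorithm, x, runs=100, eps=np.finfo(np.float64).eps):
    """Estimate decimal digits unaffected by eps-scale input noise."""
    out = np.array([algorithm(x * (1 + eps * rng.uniform(-1, 1, x.shape)))
                    for _ in range(runs)])
    spread = out.std() / (abs(out.mean()) + 1e-300)
    return -np.log10(spread + 1e-300)

two_pass = lambda x: np.mean((x - np.mean(x)) ** 2)
naive = lambda x: np.mean(x * x) - np.mean(x) ** 2   # catastrophic cancellation

x = rng.normal(1e6, 1.0, 10_000)
print("two-pass variance digits:", stable_digits(two_pass, x))
print("naive variance digits   :", stable_digits(naive, x))
```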

  15. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.

  16. A numerical algorithm for magnetohydrodynamics of ablated materials.

    PubMed

    Lu, Tianshi; Du, Jian; Samulyak, Roman

    2008-07-01

    A numerical algorithm for the simulation of magnetohydrodynamics in partially ionized ablated material is described. For the hydrodynamic part, the hyperbolic conservation laws with electromagnetic terms are solved using techniques developed for free-surface flows; for the electromagnetic part, the electrostatic approximation is applied and an elliptic equation for the electric potential is solved. The algorithm has been implemented in the framework of front tracking, which explicitly tracks geometrically complex evolving interfaces. An elliptic solver based on the embedded boundary method was implemented for both two- and three-dimensional simulations. A surface model for the interface between the solid target and the ablated vapor has also been developed, as well as a numerical model for the equation of state which accounts for atomic processes in the ablated material. The code has been applied to simulations of pellet ablation in a magnetically confined plasma and of laser-ablated plasma plume expansion in magnetic fields. PMID:19051925

  17. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

    NASA Technical Reports Server (NTRS)

    Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

    2003-01-01

    Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
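
    One representative scheme from this family is the Grünwald-Letnikov approximation, whose binomial weights follow a one-line recurrence (a sketch of one standard method of the kind the survey covers):

```python
import numpy as np

def gl_derivative(f, alpha, h):
    """Grunwald-Letnikov fractional derivative of sampled f (f(0)=0 assumed)."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                    # w_k = (-1)^k * C(alpha, k)
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # D^alpha f(t_j) ~ h^{-alpha} * sum_{k <= j} w_k f(t_{j-k})
    return np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n)]) / h**alpha

t = np.linspace(0, 2, 401)
approx = gl_derivative(t, 0.5, t[1] - t[0])  # D^{1/2} of f(t) = t
exact = 2 * np.sqrt(t / np.pi)               # known closed form
print(np.abs(approx - exact).max())          # first-order convergence in h
```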

  18. Advances in numerical and applied mathematics

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr. (Editor); Hussaini, M. Y. (Editor)

    1986-01-01

    This collection of papers covers some recent developments in numerical analysis and computational fluid dynamics. Some of these studies are of a fundamental nature. They address basic issues such as intermediate boundary conditions for approximate factorization schemes, existence and uniqueness of steady states for time-dependent problems, and pitfalls of implicit time stepping. The other studies deal with modern numerical methods such as total variation diminishing schemes, higher-order variants of vortex and particle methods, spectral multidomain techniques, and front tracking techniques. There is also a paper on adaptive grids. The fluid dynamics papers treat the classical problems of incompressible flows in helically coiled pipes, vortex breakdown, and transonic flows.

  19. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise among prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second-best-performing configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41%, and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise operationally, accurate thunderstorm cell tracking work must be undertaken so that lightning trends can be followed on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
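
    For orientation, the core of the 2σ test and the quoted skill scores look roughly like this (a sketch only; the operational algorithm adds flash-rate floors and the cell tracking discussed above, and the window length here is an assumption):

```python
import numpy as np

def jump_indices(flash_rates, dt_min=2.0, nhist=5):
    """Flag times where dFR/dt exceeds mean + 2*sigma of its recent history."""
    dfrdt = np.diff(flash_rates) / dt_min        # flash-rate time derivative
    return [i for i in range(nhist, len(dfrdt))
            if dfrdt[i] > dfrdt[i - nhist:i].mean() + 2.0 * dfrdt[i - nhist:i].std()]

def skill(hits, misses, false_alarms, correct_nulls):
    """Standard 2x2 contingency-table verification scores."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    n = hits + misses + false_alarms + correct_nulls
    expect = ((hits + misses) * (hits + false_alarms)
              + (correct_nulls + misses) * (correct_nulls + false_alarms)) / n
    hss = (hits + correct_nulls - expect) / (n - expect)
    return pod, far, csi, hss
```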

  20. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development has tended to focus on onboard numerical integration of trajectories in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility across varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the Apollo-heritage concept of lateral (or azimuth) error deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  1. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will take the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step toward achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to obtain the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times; as such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
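
    The MLEM update the project builds on is compact enough to quote. A toy emission-reconstruction sketch, with a random system matrix standing in for the real detector response model:

```python
import numpy as np

rng = np.random.default_rng(5)
a = rng.random((200, 64))                  # system matrix: 200 bins, 64 pixels
x_true = rng.random(64) * (rng.random(64) > 0.7)   # sparse source
y = rng.poisson(a @ x_true * 50) / 50.0    # noisy projection data

x = np.ones(64)                            # flat nonnegative starting image
sens = a.T @ np.ones(len(y))               # sensitivity image A^T 1
for _ in range(200):
    ratio = y / np.maximum(a @ x, 1e-12)   # measured / modeled projections
    x *= (a.T @ ratio) / sens              # multiplicative update, keeps x >= 0
```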

  2. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  3. Computing Algorithms for Nuffield Advanced Physics.

    ERIC Educational Resources Information Center

    Summers, M. K.

    1978-01-01

    Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)

  4. Advanced Numerical Model for Irradiated Concrete

    SciTech Connect

    Giorla, Alain B.

    2015-03-01

    In this report, we establish a numerical model for concrete exposed to irradiation to address three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature, and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970; Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future work, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds on the concrete expansion under irradiation and to check whether the scatter in the simulated results matches that found in the experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be as well characterized as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some

  5. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Turmon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology for detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, the library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
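
    The checksum idea is easy to state concretely for a matrix product, including a normalized comparison in the spirit the abstract emphasizes (a minimal sketch; the flight library wraps LAPACK/FFTW/ScaLAPACK routines rather than raw numpy calls, and the threshold here is an assumption):

```python
import numpy as np

def abft_matmul(a, b, tol=1e-8):
    """Compute C = A @ B and verify it with an independent checksum."""
    c = a @ b
    v = np.ones(b.shape[1])
    lhs = a @ (b @ v)                      # checksum via a different data path
    rhs = c @ v                            # checksum of the produced result
    scale = np.linalg.norm(a, 1) * np.linalg.norm(b, 1) + 1e-300  # normalization
    if np.linalg.norm(lhs - rhs, 1) / scale > tol:
        raise FloatingPointError("ABFT checksum mismatch: possible fault")
    return c

rng = np.random.default_rng(9)
c = abft_matmul(rng.normal(size=(50, 40)), rng.normal(size=(40, 30)))
```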

  6. Advances in turbulence physics and modeling by direct numerical simulations

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1987-01-01

    The advent of direct numerical simulations of turbulence has opened avenues for research on turbulence physics and turbulence modeling. Direct numerical simulation provides values for anything that the scientist or modeler would like to know about the flow. An overview of some recent advances in the physical understanding of turbulence and in turbulence modeling obtained through such simulations is presented.

  7. Numerical Forming Simulations and Optimisation in Advanced Materials

    NASA Astrophysics Data System (ADS)

    Huétink, J.; van den Boogaard, A. H.; Geijselaers, H. J. M.; Meinders, T.

    2007-05-01

    With the introduction of new materials such as high-strength steels, metastable steels, and fibre-reinforced composites, the need for advanced, physically valid constitutive models arises. In finite deformation problems, constitutive relations are commonly formulated in terms of the Cauchy stress as a function of the elastic Finger tensor, and an objective rate of the Cauchy stress as a function of the rate-of-deformation tensor. For isotropic material models this is rather straightforward, but for anisotropic material models, including elastic as well as plastic anisotropy, it may lead to confusing formulations. It will be shown that it is more convenient to define the constitutive relations in terms of invariant tensors referred to the deformed metric. Experimental results are presented that show new combinations of strain-rate and strain-path sensitivity. An adaptive through-thickness integration scheme for plate elements is developed, which improves the accuracy of springback prediction at minimal cost. A procedure is described to automatically compensate the CAD tool shape numerically so as to obtain the desired product shape. Forming processes need to be optimized for cost saving and product improvement; until recently, this optimization was primarily done by trial and error in the factory. An optimization strategy is proposed that assists an engineer in modeling an optimization problem that suits his needs, including an efficient algorithm for solving the problem.

  8. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  9. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Crim, H.G. Jr. ); Fink, L.H. ); Balu, N.J. . Electrical Systems Div.)

    1989-10-01

    This article describes the development of an automatic generation control (AGC) algorithm, which is capable of using accurate real-time unit data and has control performance advantages over existing algorithms. Utilities use automatic generation control to match total generation and total load at minimum cost. Since it is impractical to measure total load directly, it is determined from the total generation for a control area, the total tie-line flow error, and a component proportional to the frequency error. One part of the AGC system then assigns this total generation requirement to all the generators in the control area in an economic manner by using an economic dispatch algorithm. Another part of the AGC system keeps track of each generator and attempts to correct individual unit errors and total system errors that can be caused by unit response problems, normal changes in system load, metering errors, and system disturbances.
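
    The tie-line-plus-frequency-bias bookkeeping described above is conventionally expressed as the area control error (ACE). The sketch below is a generic textbook formulation, not the article's algorithm; the numbers, names and sign convention are illustrative (utilities differ on conventions).

        def area_control_error(tie_flow_actual, tie_flow_sched,
                               freq_actual, freq_sched, bias_mw_per_hz):
            """Classic ACE = tie-line interchange error + frequency-bias term.

            AGC drives ACE toward zero by adjusting the total generation
            requirement, which economic dispatch then splits among units.
            """
            tie_error = tie_flow_actual - tie_flow_sched     # MW
            freq_error = freq_actual - freq_sched            # Hz
            return tie_error + bias_mw_per_hz * freq_error   # MW

        # Example: 30 MW of excess export and 0.02 Hz of over-frequency.
        ace = area_control_error(530.0, 500.0, 50.02, 50.0,
                                 bias_mw_per_hz=800.0)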

  10. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path-length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  11. Advances on image interpolation based on ant colony algorithm.

    PubMed

    Rukundo, Olivier; Cao, Hanqiang

    2016-01-01

    This paper presents an advance on image interpolation based on the ant colony algorithm (AACA) for high resolution image scaling. The difference between the proposed algorithm and the previously proposed optimization of bilinear interpolation based on the ant colony algorithm (OBACA) is that AACA uses a global weighting scheme, whereas OBACA uses a local weighting scheme. The strength of the proposed global weighting in the AACA algorithm lies in employing solely the pheromone matrix information present in any group of four adjacent pixels to decide whether a case deserves the maximum global weight value. Experimental results are provided to show the superior performance of the proposed AACA algorithm relative to the algorithms referenced in this paper. PMID:27047729

  12. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, in computer vision, the approach based on modeling biological vision mechanisms is extensively developed. However, up to now, real-world image processing has no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world image processing is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.

  13. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytical Cracking Process and Large Scale Geophysical Fluids.

    NASA Astrophysics Data System (ADS)

    Martin, R.; Gonzalez Ortiz, A.

    In industry as well as in the geophysical community, multiphase flows are modelled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally determined at half a mesh step from each other, following the staggered grid approach. This ensures stability and prevents oscillations in pressure. It allows treatment of almost the full range of Reynolds numbers, for all speeds and viscosities. The disadvantages appear when we want to treat more complex geometries or if a generalized curvilinear formulation of the conservation equations is considered: too many interpolations have to be done, and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interface. The Rhie and Chow interpolation of the velocities at the finite volume interfaces avoids pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd and 3rd order shock-capturing schemes of MUSCL/TVD or Van Leer type, and the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. A pentadiagonal system in 2D or a heptadiagonal system in 3D would have to be solved, but here we choose to solve three tridiagonal linear systems (the so-called Alternating Direction Implicit algorithm), one in each spatial direction, to reduce the cost of computation. Then a multi-correction of the interpolated velocities, pressures and volume fractions of each phase is performed in the cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation are reached. At the end, the energy conservation equations are solved. In all this process the
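
    Each ADI sweep mentioned above reduces the multidimensional implicit solve to a set of independent tridiagonal systems. The following is a minimal, generic Thomas-algorithm sketch, independent of the authors' code, assuming a well-conditioned (e.g., diagonally dominant) system.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system with sub-, main- and super-diagonals
            a, b, c (a[0] and c[-1] unused) and right-hand side d.
            One such solve is performed per spatial direction in an ADI sweep.
            """
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):               # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):      # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x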

  14. Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    PubMed Central

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of “algorithm fusion” methods; (3) establishment of an automatic quality assessment scheme. PMID:22408479

  15. Deinterlacing algorithm with an advanced non-local means filter

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Jeon, Gwanggil; Jeong, Jechang

    2012-04-01

    The authors introduce an efficient intra-field deinterlacing algorithm using an advanced non-local means filter. The non-local means (NLM) method has received considerable attention due to its high performance and simplicity. The NLM method obtains the missing pixel adaptively as the weighted average of the gray values of pixels within the image, automatically eliminating unrelated neighborhoods from the weighted average. However, spatial location distance is another important issue for deinterlacing. We therefore introduce an advanced NLM (ANLM) filter that considers both neighborhood similarity and spatial patch distance. Moreover, whereas the search region of the conventional NLM is the whole image, the ANLM can use a limited search region and still achieve good performance and high efficiency. When compared with existing deinterlacing algorithms, the proposed algorithm improves the peak signal-to-noise ratio while maintaining high efficiency.
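
    The weighting idea can be sketched as follows; the Gaussian spatial-distance penalty stands in for the ANLM modification described above, and the parameter names and values are illustrative assumptions, not the paper's.

        import numpy as np

        def anlm_weight(patch_ref, patch_cand, spatial_dist,
                        h=10.0, sigma_s=5.0):
            """Weight combining patch similarity (classic NLM) with spatial
            distance (sketched here as an extra Gaussian penalty).
            h and sigma_s are illustrative smoothing parameters.
            """
            sim = np.sum((patch_ref - patch_cand) ** 2) / patch_ref.size
            return np.exp(-sim / h**2) * np.exp(-(spatial_dist / sigma_s) ** 2)

        # The missing pixel is then the weight-normalized average
        # value = sum(w_i * p_i) / sum(w_i) over the limited search region.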

  16. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association with numerical magnitude processing was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by also including relevant arithmetical subskills, i.e. arithmetic facts, needed for complex calculation that are also known to be dependent on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586

  17. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M (not ≫ 1) firms and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^{M(M-1)/2}) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.
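
    The Toeplitz structure is what makes the n log n factor possible: a Toeplitz matrix-vector product can be computed via circulant embedding and the FFT. The sketch below is the standard construction, independent of the paper's implementation.

        import numpy as np

        def toeplitz_matvec(first_col, first_row, x):
            """Multiply an n-by-n Toeplitz matrix (given by its first column
            and first row) with a vector in O(n log n): embed it in a 2n-size
            circulant matrix, which the FFT diagonalizes.
            """
            n = len(x)
            # First column of the circulant embedding (padding entry is free):
            c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
            xz = np.concatenate([x, np.zeros(n)])
            y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xz))
            return y[:n].real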

  18. Some recent advances in the numerical solution of differential equations

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele

    2016-06-01

    The purpose of the talk is the presentation of some recent advances in the numerical solution of differential equations, with special emphasis on reaction-diffusion problems, Hamiltonian problems and ordinary differential equations with discontinuous right-hand side. As a special case, in this short paper we focus on the solution of reaction-diffusion problems by means of special purpose numerical methods particularly adapted to the problem: indeed, following a problem-oriented approach, we propose a modified method of lines based on the use of finite differences shaped on the qualitative behavior of the solutions. Constructive issues and a brief analysis are presented, together with some numerical experiments showing the effectiveness of the approach and a comparison with existing solvers.
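
    A generic method-of-lines discretization of a reaction-diffusion equation illustrates the setting (with plain central differences, not the authors' problem-shaped differences; the equation, constants and boundary conditions are illustrative).

        import numpy as np
        from scipy.integrate import solve_ivp

        # u_t = D u_xx + u(1 - u)  (Fisher-KPP), homogeneous Neumann BCs.
        D, n, L = 0.1, 100, 10.0
        dx = L / (n - 1)

        def rhs(t, u):
            """Method of lines: replace u_xx by central differences, leaving
            a stiff ODE system advanced by a standard implicit solver."""
            lap = np.empty_like(u)
            lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
            lap[0] = 2 * (u[1] - u[0]) / dx**2       # Neumann boundaries
            lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
            return D * lap + u * (1 - u)

        u0 = np.exp(-np.linspace(0, L, n) ** 2)
        sol = solve_ivp(rhs, (0.0, 5.0), u0, method="BDF")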

  19. Numerical analysis of EPR spectra. 7. The simplex algorithm

    NASA Astrophysics Data System (ADS)

    Beckwith, Athelstan L. J.; Brumby, Steven

    The Simplex algorithm is well suited to the least-squares analysis of highly complex EPR spectra. The application of the algorithm to the analysis of the spectra of benzo[a]pyrenyl-6-oxy, chloro(methoxycarbonyl)methyl, and cyano(methoxy)methyl free radicals is described.
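
    The least-squares use of the simplex algorithm can be sketched with a toy spectrum fit; the single-Lorentzian-derivative line shape, parameters and noise level below are illustrative, not the paper's model.

        import numpy as np
        from scipy.optimize import minimize

        def model(params, B):
            """Toy EPR line: one Lorentzian derivative with amplitude a,
            centre B0 and width w (illustrative stand-in for a real
            multi-line simulation)."""
            a, B0, w = params
            x = (B - B0) / w
            return -2 * a * x / (1 + x**2) ** 2

        B = np.linspace(-10, 10, 400)
        observed = model([1.0, 0.5, 1.5], B) + 0.01 * np.random.randn(B.size)

        # Simplex (Nelder-Mead) minimization of the sum of squared residuals,
        # as in the least-squares analysis described above.
        res = minimize(lambda p: np.sum((model(p, B) - observed) ** 2),
                       x0=[0.8, 0.0, 1.0], method="Nelder-Mead")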

  20. International Symposium on Computational Electronics—Physical Modeling, Mathematical Theory, and Numerical Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yiming

    2007-12-01

    This symposium is an open forum for discussion of current trends and future directions in physical modeling, mathematical theory, and numerical algorithms in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers in academia and industry. All papers to be presented in this symposium have been carefully reviewed and selected. They cover semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation systems, and wireless communication. Welcome to this interdisciplinary symposium at the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!

  1. Advanced defect detection algorithm using clustering in ultrasonic NDE

    NASA Astrophysics Data System (ADS)

    Gongzhang, Rui; Gachagan, Anthony

    2016-02-01

    A range of materials used in industry exhibit scattering properties which limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either a traditional single transducer or a phased array configuration. The proposed algorithm uses the concept of unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and an ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as `legitimate reflector' or `artefact' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer is used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan is pre-processed using SSP, and the subsequent application of the proposed clustering algorithm provides an additional reduction in PFA while maintaining the PoD for both samples compared with SSP alone.
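
    Unsupervised clustering of segment features might look like the following k-means sketch; the feature choices, segment data and class count below are illustrative assumptions, not the paper's pipeline.

        import numpy as np
        from sklearn.cluster import KMeans

        def features(segment, thresh=0.1):
            """Simple illustrative features for one A-scan segment: peak
            amplitude, energy, and fraction of samples above a threshold.
            Real features would be chosen per application."""
            env = np.abs(segment)
            return [env.max(), np.sum(env**2), np.mean(env > thresh)]

        # Rows are segments cut from pre-processed (e.g., SSP) A-scans;
        # here, synthetic segments of differing amplitude stand in for data.
        segments = [np.random.randn(64) * s for s in (0.2, 0.25, 1.0, 1.1)]
        X = np.array([features(s) for s in segments])

        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        # Each cluster is then compared against an ensemble of known-noise
        # segments to label it 'legitimate reflector' or 'artefact'.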

  2. Advanced Health Management Algorithms for Crew Exploration Applications

    NASA Technical Reports Server (NTRS)

    Davidson, Matt; Stephens, John; Jones, Judit

    2005-01-01

    Achieving the goals of the President's Vision for Exploration will require new and innovative ways to achieve reliability increases in key systems and sub-systems. The most prominent approach used in current systems is to maintain hardware redundancy. This imposes constraints on the system and consumes weight that could otherwise be used for payload on extended lunar, Martian, or other deep space missions. A technique to improve reliability while reducing system weight and constraints is the use of an Advanced Health Management System (AHMS). This system contains diagnostic algorithms and decision logic to mitigate or minimize the impact of system anomalies on propulsion system performance throughout the powered flight regime. The purposes of the AHMS are to increase the probability of successfully placing the vehicle into the intended orbit (Earth, lunar, or Martian escape trajectory), to increase the probability of safely executing an abort after anomalous performance develops during the launch or ascent phases of the mission, and to minimize or mitigate anomalies during the cruise portion of the mission. This is accomplished by improving knowledge of the state of propulsion system operation at any given time, through turbomachinery vibration protection logic and an overall system analysis algorithm that utilizes an underlying physical model and a wide array of engine system operational parameters to detect and mitigate predefined engine anomalies. These algorithms are generic enough to be utilized on any propulsion system, yet can easily be tailored to each application by changing input data and engine-specific parameters. The key to the advancement of such a system is the verification of the algorithms. These algorithms will be validated against a database of nominal and anomalous performance from a large propulsion system for which data exist on catastrophic and noncatastrophic propulsion system failures.

  3. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
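
    The stability issue can be illustrated, in simplified form, by contrasting the conventional covariance update P = (I - KH)P with the algebraically equivalent but numerically better-behaved Joseph form. This is a generic illustration, not the U-D factorization itself, which goes further by updating triangular and diagonal factors of P directly.

        import numpy as np

        def measurement_update(x, P, z, H, R):
            """Kalman measurement update using the Joseph-form covariance
            update, which preserves symmetry and positive semi-definiteness
            far better than P = (I - KH) P in finite precision.
            """
            S = H @ P @ H.T + R                   # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
            x = x + K @ (z - H @ x)               # state update
            I_KH = np.eye(len(x)) - K @ H
            P = I_KH @ P @ I_KH.T + K @ R @ K.T   # Joseph form
            return x, P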

  4. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
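
    Optimization over a stoichiometric matrix is typified by flux balance analysis, which can be written as a small linear program; the toy network, bounds and objective below are illustrative, not the project's software.

        import numpy as np
        from scipy.optimize import linprog

        # Toy flux balance analysis: maximize flux v3 subject to steady
        # state S v = 0 and capacity bounds (all values illustrative).
        S = np.array([[1, -1,  0],
                      [0,  1, -1]])           # two metabolites, three fluxes
        c = np.array([0.0, 0.0, -1.0])        # linprog minimizes, hence -v3
        bounds = [(0, 10), (0, 10), (0, 10)]

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        # res.x is the flux distribution; -res.fun the maximized objective.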

  5. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are also presented, along with examples of results obtained with the most recent algorithm developments.

  6. Numerical modeling of spray combustion with an advanced VOF method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model, which can be employed to analyze general multiphase flow problems with a free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied to all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  7. Fast Huffman encoding algorithms in MPEG-4 advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2014-11-01

    This paper addresses the optimisation problem of Huffman encoding in the MPEG-4 Advanced Audio Coding standard. First, the Huffman encoding problem and the need to encode two side-info parameters, the scale factor and the Huffman codebook, are presented. Next, the Two Loop Search, Maximum Noise Mask Ratio and Trellis Based algorithms of bit allocation are briefly described. Further, Huffman encoding optimisations are shown. The new methods try to check and change scale factor bands as little as possible when estimating the bitrate cost or its change. Finally, the complexity of the old and new methods is calculated and compared, and measured encoding times are given.
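
    The encoding being optimized rests on ordinary Huffman coding; a generic tree construction with heapq is sketched below for orientation. Note this is an assumption-free textbook routine, not the AAC coder: AAC selects among fixed predefined codebooks rather than building trees per frame.

        import heapq
        from collections import Counter

        def huffman_code_lengths(symbols):
            """Generic Huffman construction: repeatedly merge the two least
            frequent nodes; the bit cost of a symbol equals its tree depth."""
            heap = [[w, [sym, 0]] for sym, w in Counter(symbols).items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                lo = heapq.heappop(heap)
                hi = heapq.heappop(heap)
                for pair in lo[1:] + hi[1:]:
                    pair[1] += 1                  # one level deeper
                heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
            return {sym: depth for sym, depth in heap[0][1:]}

        lengths = huffman_code_lengths("abracadabra")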

  8. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of the Advanced Encryption Standard (AES) cryptographic algorithm using a state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance evaluation is also done in terms of throughput and area. The design implemented on the Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps while utilizing a total of 399 slices.

  9. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

    1988-01-01

    This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

  10. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1987-01-01

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.

  11. Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.

    ERIC Educational Resources Information Center

    Jacquot, Raymond G.; And Others

    1985-01-01

    Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
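
    The Gaver-Stehfest algorithm is compact enough to state directly; below is a standard textbook implementation, independent of the article, illustrating the word-length caveat the summary mentions (the weights grow rapidly, so double precision limits usable N to roughly 12-16).

        from math import factorial, log

        def stehfest_coefficients(N):
            """Stehfest weights V_k (N must be even). They alternate in sign
            and grow rapidly with N, which is the word-length limitation."""
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j ** (N // 2) * factorial(2 * j) /
                          (factorial(N // 2 - j) * factorial(j) *
                           factorial(j - 1) * factorial(k - j) *
                           factorial(2 * j - k)))
                V.append((-1) ** (k + N // 2) * s)
            return V

        def gaver_stehfest(F, t, N=12):
            """Approximate f(t) given the Laplace transform F(s)."""
            V = stehfest_coefficients(N)
            a = log(2.0) / t
            return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

        # Check: F(s) = 1/(s + 1) inverts to f(t) = exp(-t).
        approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)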

  12. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  13. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media.

    PubMed

    Stamnes, K; Tsay, S C; Wiscombe, W; Jayaweera, K

    1988-06-15

    We summarize an advanced, thoroughly documented, and quite general purpose discrete ordinate algorithm for time-independent transfer calculations in vertically inhomogeneous, nonisothermal, plane-parallel media. Atmospheric applications ranging from the UV to the radar region of the electromagnetic spectrum are possible. The physical processes included are thermal emission, scattering, absorption, and bidirectional reflection and emission at the lower boundary. The medium may be forced at the top boundary by parallel or diffuse radiation and by internal and boundary thermal sources as well. We provide a brief account of the theoretical basis as well as a discussion of the numerical implementation of the theory. The recent advances made by ourselves and our collaborators, in both formulation and numerical solution, are all incorporated in the algorithm. Prominent among these advances is the complete conquest of two ill-conditioning problems which afflicted all previous discrete ordinate implementations: (1) the computation of eigenvalues and eigenvectors and (2) the inversion of the matrix determining the constants of integration. Copies of the FORTRAN program on microcomputer diskettes are available for interested users. PMID:20531783

  14. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, in which the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that a combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology, improving the mesh quality significantly. The MBA method is also used to adapt the mesh to a problem solution in order to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive, strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  15. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NO_x production for a steady diffusion flame.
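
    The deterministic-advection / stochastic-diffusion split can be sketched in one dimension with an Euler-Maruyama step; the velocity field, diffusivity and ensemble size below are illustrative, and the chemistry (stochastic host-species switching) is omitted.

        import numpy as np

        def advance_particles(x, u, D, dt, rng):
            """One step of a stochastic particle formulation: advection is
            deterministic, diffusion is a Brownian increment of variance
            2*D*dt (chemistry would add a host-switching step)."""
            return (x + u(x) * dt
                    + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape))

        rng = np.random.default_rng(0)
        x = np.zeros(10_000)                      # ensemble of 'atoms'
        uniform_flow = lambda x: 0.5 * np.ones_like(x)
        for _ in range(100):
            x = advance_particles(x, uniform_flow, D=1e-3, dt=0.01, rng=rng)
        # Ensemble statistics of x approximate the continuum solution.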

  16. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

    NASA Astrophysics Data System (ADS)

    Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

    2008-07-01

    The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of the shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium-sized aneurysms.

  17. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
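
    The implicit-integration skeleton might look as follows, with plain backward Euler standing in for the higher-order BDF formulas and SciPy's Jacobian-free Newton-Krylov solver standing in for the paper's preconditioned Newton-GMRES; the toy stiff system is illustrative.

        import numpy as np
        from scipy.optimize import newton_krylov

        def bdf1_step(f, u_old, dt):
            """One backward-Euler step: solve u_new - u_old - dt*f(u_new) = 0
            by a Jacobian-free Newton-Krylov iteration. Higher-order BDF and
            a good preconditioner (as in the paper) change the residual and
            solver options, not this structure."""
            residual = lambda u: u - u_old - dt * f(u)
            return newton_krylov(residual, u_old)

        # Toy stiff system: du/dt = -1000*(u - cos(u)).
        f = lambda u: -1000.0 * (u - np.cos(u))
        u = np.array([2.0])
        for _ in range(10):
            u = bdf1_step(f, u, dt=0.05)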

  18. Thermal contact algorithms in SIERRA mechanics : mathematical background, numerical verification, and evaluation of performance.

    SciTech Connect

    Copps, Kevin D.; Carnes, Brian R.

    2008-04-01

    We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.

  19. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and the analysis of geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and
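
    The baseline against which such integrators are measured is a simple fixed-step scheme; below is a generic RK4 two-body propagator (not BLC-IRK or Gauss-Jackson), with point-mass gravity only and an illustrative initial orbit.

        import numpy as np

        MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

        def two_body(state):
            """Point-mass gravity only; production propagators add perturbing
            forces evaluated in precisely transformed frames (GCRS/ITRS)."""
            r, v = state[:3], state[3:]
            return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

        def rk4_step(state, dt):
            k1 = two_body(state)
            k2 = two_body(state + 0.5 * dt * k1)
            k3 = two_body(state + 0.5 * dt * k2)
            k4 = two_body(state + dt * k3)
            return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        # 7000 km circular orbit, propagated for about one orbital period.
        state = np.array([7000.0, 0, 0, 0, np.sqrt(MU / 7000.0), 0])
        for _ in range(5828):
            state = rk4_step(state, 1.0)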

  20. A stable and efficient numerical algorithm for unconfined aquifer analysis.

    PubMed

    Keating, Elizabeth; Zyvoloski, George

    2009-01-01

    The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well. PMID:19341374

  1. A stable and efficient numerical algorithm for unconfined aquifer analysis

    SciTech Connect

    Keating, Elizabeth; Zyvoloski, George

    2008-01-01

    The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well.

  2. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. The authors include numerical examples from recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and the authors emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.

  3. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGESBeta

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.

  4. Numerical optimization design of advanced transonic wing configurations

    NASA Technical Reports Server (NTRS)

    Cosentino, G. B.; Holst, T. L.

    1984-01-01

    A computationally efficient and versatile technique for use in the design of advanced transonic wing configurations has been developed. A reliable and fast transonic wing flow-field analysis program, TWING, has been coupled with a modified quasi-Newton unconstrained optimization algorithm, QNMDIF, to create a new design tool. Fully three-dimensional wing designs utilizing both specified wing pressure distributions and drag-to-lift ratio minimization as design objectives are demonstrated. Because of the high computational efficiency of each of the components of the design code, in particular the vectorization of TWING and the high speed of the Cray X-MP vector computer, the computer time required for a typical wing design is reduced by approximately an order of magnitude over previous methods. In the results presented here, the computed wave drag has been used as the quantity to be optimized (minimized) with great success, yielding wing designs with nearly shock-free (zero wave drag) pressure distributions and very reasonable wing section shapes.
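
    The analysis-plus-quasi-Newton coupling pattern can be sketched as below, with a cheap synthetic objective standing in for the flow solver; TWING and QNMDIF are not reproduced here, and BFGS plays the role of the quasi-Newton optimizer.

        import numpy as np
        from scipy.optimize import minimize

        def wave_drag(design_vars):
            """Stand-in for a flow-solver evaluation: in the real design loop
            each call would run the transonic analysis (TWING) and return the
            computed wave drag for the candidate wing shape. Here: a smooth
            synthetic bowl with a known minimum."""
            x = np.asarray(design_vars)
            return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(x ** 4)

        # Quasi-Newton (BFGS) plays the role QNMDIF plays in the paper;
        # gradients are estimated from repeated objective evaluations.
        result = minimize(wave_drag, x0=np.zeros(8), method="BFGS")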

  5. Advanced three-dimensional Eulerian hydrodynamic algorithm development

    SciTech Connect

    Rider, W.J.; Kothe, D.B.; Mosso, S.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project is to investigate, implement, and evaluate algorithms that have high potential for improving the robustness, fidelity and accuracy of three-dimensional Eulerian hydrodynamic simulations. Eulerian computations are necessary to simulate a number of important physical phenomena, ranging from the molding process for metal parts to nuclear weapons safety issues to astrophysical phenomena such as those associated with Type II supernovae. A number of algorithmic issues were explored in the course of this research, including interface/volume tracking, surface physics integration, high resolution integration techniques, multilevel iterative methods, multimaterial hydrodynamics and the coupling of radiation with hydrodynamics. This project combines core strengths of several Laboratory divisions. The project has high institutional benefit given the renewed emphasis on numerical simulations in Science-Based Stockpile Stewardship and the Accelerated Strategic Computing Initiative, and LANL's tactical goals related to high performance computing and simulation.

  6. Advanced illumination control algorithm for medical endoscopy applications

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.

    2015-05-01

    The CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules to the world market for minimally invasive surgery and single-use endoscopic equipment. Building on the world's smallest digital camera head and the evaluation board provided with it, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination with adjustable illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over illumination conditions changing by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment under changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal form factor cameras, enabling their use in endoscopic surgical robotics or micro-invasive surgery.
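
    The ROI-based control idea reduces to a feedback loop of roughly the following shape; the target level, gain and ROI layout are illustrative assumptions, not the VHDL design.

        import numpy as np

        def led_update(frame, rois, power, target=0.5, gain=0.4,
                       p_min=0.0, p_max=1.0):
            """One control iteration: measure the brightness of the selected
            ROIs and nudge the normalized LED drive toward the target level
            (all set-points and gains here are illustrative)."""
            levels = [frame[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in rois]
            error = target - np.mean(levels)       # > 0 means too dark
            return float(np.clip(power + gain * error, p_min, p_max))

        frame = np.random.rand(240, 320)           # normalized test frame
        power = led_update(frame, rois=[(100, 140, 140, 180)], power=0.5)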

  7. Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms

    NASA Astrophysics Data System (ADS)

    Brunner, Christopher W.; Lu, Ping

    2012-09-01

    The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
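
    The predict-correct loop at the heart of NPC guidance can be caricatured in a few lines: integrate the trajectory for a trial bank angle, measure the downrange miss, and correct the bank angle with a secant iteration. The predictor below is a purely hypothetical monotone map, not FNPEG's entry dynamics.

        def predicted_range(bank_deg):
            """Stand-in predictor: in a real NPC algorithm this numerically
            integrates the entry dynamics to the target energy; here, a toy
            linear map from bank angle to downrange (km)."""
            return 9000.0 - 55.0 * bank_deg

        def correct_bank(target_km, b0=20.0, b1=60.0, tol_km=1.0):
            """Secant correction of the bank-angle magnitude so that the
            predicted downrange matches the target."""
            f0 = predicted_range(b0) - target_km
            f1 = predicted_range(b1) - target_km
            for _ in range(20):
                if abs(f1) < tol_km:
                    break
                b0, b1, f0 = b1, b1 - f1 * (b1 - b0) / (f1 - f0), f1
                f1 = predicted_range(b1) - target_km
            return b1

        bank = correct_bank(target_km=6500.0)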

  8. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
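
    The primal-dual active set strategy mentioned above can be illustrated on a one-dimensional obstacle problem; the discretization, load, obstacle and constant c below are illustrative, and the sketch omits adaptivity and dynamics.

        import numpy as np

        # 1D obstacle problem: A u = f + lam with A ~ -d^2/dx^2, u >= psi,
        # lam >= 0 and lam * (u - psi) = 0 (complementarity).
        n, c = 99, 1.0
        h = 1.0 / (n + 1)
        A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        f = np.full(n, -10.0)                       # downward load
        x = np.linspace(h, 1.0 - h, n)
        psi = -0.05 - 0.3 * (x - 0.5) ** 2          # obstacle from below
        u, lam = np.zeros(n), np.zeros(n)

        for _ in range(30):                         # active-set iterations
            active = lam + c * (psi - u) > 0        # predicted contact set
            K, rhs = A.copy(), f.copy()
            K[active, :] = 0.0
            K[active, active] = 1.0                 # enforce u = psi there
            rhs[active] = psi[active]
            u = np.linalg.solve(K, rhs)
            lam = np.where(active, A @ u - f, 0.0)  # contact forces
        # The active set stops changing after a few steps, reflecting the
        # finite-termination behavior of the semi-smooth Newton method.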

  9. An Advanced Manipulator For Poisson Series With Numerical Coefficients

    NASA Astrophysics Data System (ADS)

    Biscani, Francesco; Casotto, S.

    2006-06-01

    The availability of an efficient and featureful manipulator for Poisson series with numerical coefficients is a standard need for celestial mechanicians, and it arose during our work on the analytical development of the Tide-Generating Potential (TGP). In the harmonic expansion of the TGP, the Poisson series appearing in the theories of motion of the celestial bodies are subjected to a wide set of mathematical operations, ranging from simple additions and multiplications to more sophisticated operations on Legendre polynomials and spherical harmonics with Poisson series as arguments. To perform these operations we have developed an algebraic manipulator, called Piranha, structured as an object-oriented multi-platform C++ library. Piranha handles series with real and complex coefficients, and operates with an arbitrary degree of precision. It supports advanced features such as trigonometric operations and the generation of special functions from Poisson series. Piranha is provided with a proof-of-concept, multi-platform GUI, which serves as a testbed and benchmark for the library. We describe Piranha's architecture and characteristics, what it accomplishes currently and how it will be extended in the future (e.g., to handle series with symbolic coefficients in a fashion consistent with its current design).

  10. A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses an orthogonal transformation technique; measurement and time updating of the U-D factors involve separate applications of Gentleman's fast square-root-free Givens rotations. Numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter, by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square root filtering with the efficiency of the original Kalman algorithm.
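
    The U-D factor updates themselves are intricate; as a hedged point of reference, the sketch below implements the Joseph stabilized measurement update, representative of the "stabilized Kalman filter" class the paper compares against (the U-D filter instead applies square-root-free rank-one updates to the factors U and D, retaining square-root-level precision at close to conventional-filter cost).

        import numpy as np

        def joseph_update(x, P, z, H, R):
            """Measurement update in Joseph stabilized form:
            P+ = (I - K H) P (I - K H)' + K R K', which preserves the
            symmetry and positive semi-definiteness of the covariance
            better than the conventional P+ = (I - K H) P."""
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)
            I_KH = np.eye(len(x)) - K @ H
            P = I_KH @ P @ I_KH.T + K @ R @ K.T
            return x, P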

  11. Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method

    NASA Astrophysics Data System (ADS)

    Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian

    2015-06-01

    We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave-boson approach, which is applicable to arbitrary boundary constraints on a high-dimensional objective function and combines several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave-boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of the present algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital-selective Mott phase and magnetism. These results demonstrate the rapid convergence and robust stability of our algorithm in searching for the optimized solutions of strongly correlated electron systems.

  12. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  13. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared-memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel non-shared-memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
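
    The first model generalizes Hockney's two-parameter characterization of a communication channel; a minimal sketch follows (parameter values are illustrative, not from the paper):

        def transfer_time(n_bytes, alpha=1e-6, beta=1e-9):
            """Hockney-style cost of moving a message: a fixed startup
            latency alpha (s) plus a per-byte cost beta (s/byte)."""
            return alpha + beta * n_bytes

        # Effective bandwidth n/T(n) reaches half its asymptotic value
        # 1/beta at the half-performance message length n = alpha/beta
        # (1000 bytes for the sample parameters above).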

  14. Seven-Spot Ladybird Optimization: A Novel and Efficient Metaheuristic Algorithm for Numerical Optimization

    PubMed Central

    Zhu, Zhouquan

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  15. Chemotactic and diffusive migration on a nonuniformly growing domain: numerical algorithm development and applications

    NASA Astrophysics Data System (ADS)

    Simpson, Matthew J.; Landman, Kerry A.; Newgreen, Donald F.

    2006-08-01

    A numerical algorithm to simulate chemotactic and/or diffusive migration on a one-dimensional growing domain is developed. The domain growth can be spatially nonuniform and the growth-derived advection term must be discretised. The hyperbolic terms in the conservation equations associated with chemotactic migration and domain growth are accurately discretised using an explicit central scheme. Generality of the algorithm is maintained using an operator split technique to simulate diffusive migration implicitly. The resulting algorithm is applicable for any combination of diffusive and/or chemotactic migration on a growing domain with a general growth-induced velocity field. The accuracy of the algorithm is demonstrated by testing the results against some simple analytical solutions and in an inter-code comparison. The new algorithm demonstrates that the form of nonuniform growth plays a critical role in determining whether a population of migratory cells is able to overcome the domain growth and fully colonise the domain.
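
    A minimal sketch of the operator-split structure described above, on a fixed one-dimensional grid with a prescribed growth-induced velocity (first-order upwind is used here for brevity in place of the paper's high-resolution central scheme; v >= 0 and zero-flux ends are assumptions):

        import numpy as np

        def split_step(c, v, D, dx, dt):
            """One split step for c_t + (v c)_x = D c_xx: explicit upwind
            advection, then unconditionally stable backward-Euler diffusion."""
            flux = v * c                          # growth-induced advective flux
            c_adv = c.copy()
            c_adv[1:] -= dt / dx * (flux[1:] - flux[:-1])
            r = D * dt / dx**2                    # implicit diffusion matrix
            n = len(c)
            A = (np.diag((1 + 2 * r) * np.ones(n))
                 + np.diag(-r * np.ones(n - 1), 1)
                 + np.diag(-r * np.ones(n - 1), -1))
            A[0, :2] = [1 + r, -r]                # zero-flux boundaries
            A[-1, -2:] = [-r, 1 + r]
            return np.linalg.solve(A, c_adv)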

  16. Numerical algorithms for the direct spectral transform with applications to nonlinear Schroedinger type systems

    SciTech Connect

    Burtsev, S.; Camassa, R.; Timofeyev, I.

    1998-11-20

    The authors implement two different algorithms for computing numerically the direct Zakharov-Shabat eigenvalue problem on the infinite line. The first algorithm replaces the potential in the eigenvalue problem by a piecewise-constant approximation, which allows one to solve analytically the corresponding ordinary differential equation. The resulting algorithm is of second order in the step size. The second algorithm uses the fourth-order Runge-Kutta method. They test and compare the performance of these two algorithms on three exactly solvable potentials. They find that even though the Runge-Kutta method is of higher order, this extra accuracy can be lost because of the additional dependence of its numerical error on the eigenvalue. This limits the usefulness of the Runge-Kutta algorithm to a region inside the unit circle around the origin in the complex plane of the eigenvalues. For the computation of the continuous spectrum density, this limitation is particularly severe, as revealed by the spectral decomposition of the L^2-norm of a solution to the nonlinear Schroedinger equation. They show that no such limitations exist for the piecewise-constant algorithm. In particular, this scheme converges uniformly for both continuous and discrete spectrum components.
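
    A minimal sketch of the piecewise-constant scheme (sign conventions and variable names are assumptions; the potential is sampled at cell midpoints, which yields the second-order accuracy noted above):

        import numpy as np
        from scipy.linalg import expm

        def scattering_a(q_mid, x, zeta):
            """Propagate the Zakharov-Shabat system
            v' = [[-i*zeta, q], [-conj(q), i*zeta]] v across cells on
            which q is frozen at its midpoint value, so each cell is
            solved exactly by a matrix exponential. Returns a(zeta)
            from the Jost asymptotics v1 -> a*exp(-i*zeta*x)."""
            v = np.array([np.exp(-1j * zeta * x[0]), 0.0], dtype=complex)
            for k in range(len(x) - 1):
                dx = x[k + 1] - x[k]
                M = np.array([[-1j * zeta, q_mid[k]],
                              [-np.conj(q_mid[k]), 1j * zeta]])
                v = expm(M * dx) @ v
            return v[0] * np.exp(1j * zeta * x[-1])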

  17. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (partial differential equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations (solving lines in the next spatial direction) or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up obtained with the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
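
    For reference, the serial kernel at the heart of the discussion is the Thomas algorithm; its forward and backward recurrences, shown below in a minimal sketch, are exactly what idles processors in a naive pipeline.

        import numpy as np

        def thomas(a, b, c, d):
            """Tridiagonal solve with sub-diagonal a, diagonal b,
            super-diagonal c, right-hand side d (a[0], c[-1] unused)."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                 # forward recurrence
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):        # backward recurrence
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x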

  18. Advances in Satellite Microwave Precipitation Retrieval Algorithms Over Land

    NASA Astrophysics Data System (ADS)

    Wang, N. Y.; You, Y.; Ferraro, R. R.

    2015-12-01

    Precipitation plays a key role in the earth's climate system, particularly in its water and energy balance. Satellite microwave (MW) observations provide a viable means of achieving global measurement of precipitation with sufficient sampling density and accuracy. However, obtaining accurate precipitation information over land from satellite MW observations remains a challenging problem. The Goddard Profiling Algorithm (GPROF) for the Global Precipitation Measurement (GPM) mission is built around a Bayesian formulation (Evans et al., 1995; Kummerow et al., 1996). GPROF uses the likelihood function and the prior probability distribution function to calculate the expected value of the precipitation rate, given the observed brightness temperatures. It is particularly convenient to draw samples for the prior PDF from a predefined database of observations or models. The GPROF algorithm does not search all database entries but only the subset thought to correspond to the actual observation. The GPM GPROF V1 database is stratified by surface emissivity class, land surface temperature and total precipitable water. However, there is much uncertainty as to what information is optimal for subsetting the database under different conditions. To this end, we conduct a database stratification study using the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation, the Special Sensor Microwave Imager/Sounder (SSMIS), the Advanced Technology Microwave Sounder (ATMS) and reanalysis data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA). Our database study (You et al., 2015) shows that environmental factors such as surface elevation, relative humidity, storm vertical structure and height, and ice thickness can help in stratifying a single large database into smaller and more homogeneous subsets, in which the surface conditions and precipitation vertical profiles are similar. It is found that the probability of detection (POD) increases
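
    The Bayesian expected value at the core of GPROF can be sketched in a few lines (a schematic illustration only: the operational algorithm searches a stratified subset of the database, and sigma here lumps observation and modeling error into an assumed diagonal covariance):

        import numpy as np

        def expected_rain_rate(tb_obs, tb_db, rate_db, sigma=2.0):
            """Weighted average of database rain rates, with weights
            from a Gaussian likelihood of the observed brightness-
            temperature vector against each database entry."""
            d2 = np.sum((tb_db - tb_obs) ** 2, axis=1) / sigma**2
            w = np.exp(-0.5 * d2)
            return np.sum(w * rate_db) / np.sum(w)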

  19. Fundamental form of the electrostatic δf-PIC algorithm and discovery of a converged numerical instability

    NASA Astrophysics Data System (ADS)

    Wilkie, George J.; Dorland, William

    2016-05-01

    The δf particle-in-cell algorithm has been a useful tool in studying the physics of plasmas, particularly turbulent magnetized plasmas in the context of gyrokinetics. The reduction in noise due to not having to resolve the full distribution function gives it an efficiency advantage over the standard ("full-f") particle-in-cell method. Despite its successes, the algorithm behaves strangely in some circumstances. In this work, we document a fully resolved numerical instability that occurs in the simplest of multiple-species test cases: the electrostatic ΩH mode. There is also a poorly understood numerical instability that occurs when the simulation is under-resolved in particle number, which may require a prohibitively large number of particles to stabilize. Both of these are independent of the time-stepping scheme, and we conclude that they would persist even if the time advancement were exact. The exact analytic form of the algorithm is presented, and several schemes for mitigating these instabilities are also presented.

  20. Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies

    SciTech Connect

    David E. Shropshire

    2009-05-01

    The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade studies, and requires a reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), the “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. This report supports the application of the reference cost data in the cost and econometric systems analysis models. The methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market—domestic and international—and its impact on AFCI facility deployment; uranium resource modeling to inform the front-end fuel cycle costs; facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities; cost tradeoffs to meet nuclear non-proliferation requirements; and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from

  1. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
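
    For constant conditions, the analytic modal solution that PolyPole-1 builds on is the classical series for diffusion out of a sphere; a minimal sketch (perfect-sink boundary and initially uniform gas concentration assumed):

        import numpy as np

        def fractional_release(D, a, t, n_modes=200):
            """f(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2
            for a grain of radius a and diffusion coefficient D."""
            n = np.arange(1, n_modes + 1)
            tau = np.pi**2 * D * t / a**2
            return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-n**2 * tau) / n**2)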

  2. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfections of the individual magnet pieces cause field errors. These field errors can be reduced or compensated by sorting the magnet pieces in a proper order. We have shown that a genetic algorithm has good properties for this sorting scheme. In this paper, the optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The results show that the genetic algorithm is superior to other algorithms.
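
    A stripped-down evolutionary sketch of the sorting scheme (truncation selection plus swap mutation only; the paper's genetic algorithm also recombines orderings, and the field-error functional here is a user-supplied stand-in):

        import random

        def sort_magnets(errors, field_error, pop_size=40, gens=200):
            """Search over permutations of magnet pieces to minimize a
            field-error functional of the resulting error sequence."""
            n = len(errors)
            pop = [random.sample(range(n), n) for _ in range(pop_size)]
            cost = lambda p: field_error([errors[i] for i in p])
            for _ in range(gens):
                pop.sort(key=cost)
                survivors = pop[: pop_size // 2]      # truncation selection
                children = []
                for parent in survivors:
                    child = parent[:]
                    i, j = random.sample(range(n), 2)  # swap two pieces
                    child[i], child[j] = child[j], child[i]
                    children.append(child)
                pop = survivors + children
            return min(pop, key=cost)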

  3. Recent advances in two-phase flow numerics

    SciTech Connect

    Mahaffy, J.H.; Macian, R.

    1997-07-01

    The authors review three topics in the broad field of numerical methods that may be of interest to individuals modeling two-phase flow in nuclear power plants. The first topic is iterative solution of linear equations created during the solution of finite volume equations. The second is numerical tracking of macroscopic liquid interfaces. The final area surveyed is the use of higher spatial difference techniques.

  4. A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients

    SciTech Connect

    Alex, Arne; Delft, Jan von; Kalus, Matthias; Huckleberry, Alan

    2011-02-15

    We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
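
    For the SU(2) special case that the exposition presumes, the coefficients can be checked against a computer algebra system; for instance, with SymPy (an independent library, not the paper's code):

        from sympy import Rational
        from sympy.physics.quantum.cg import CG

        half = Rational(1, 2)
        # <j1 m1; j2 m2 | j3 m3> for two spin-1/2 states coupling to spin 1:
        print(CG(half, half, half, -half, 1, 0).doit())   # sqrt(2)/2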

  5. Dynamics analysis of electrodynamic satellite tethers. Equations of motion and numerical solution algorithms for the tether

    NASA Technical Reports Server (NTRS)

    Nacozy, P. E.

    1984-01-01

    The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit and is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration programs, and they admit approximations that can lead to analytical, approximate general solutions while affording more dynamical insight into the motion.

  6. Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature

    SciTech Connect

    Bramble, J.H.; Goldstein, C.I.; Pasciak, J.E.

    1994-05-01

    The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as of systems arising from finite difference discretizations. The results are based on a regularity-free theory and hence apply to meshes with local grid refinement as well as to the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing step per grid. These results hold even for applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
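
    The V-cycle structure analyzed above, specialized to a one-dimensional model Poisson problem with a single weighted-Jacobi smoothing sweep per grid, can be sketched as follows (a minimal illustration; the paper's setting is finite element and finite difference forms with quadrature in higher dimensions):

        import numpy as np

        def v_cycle(u, f, h, n_smooth=1):
            """One V-cycle for -u'' = f with zero Dirichlet ends;
            len(u) - 1 is assumed to be a power of two."""
            def smooth(u, sweeps):
                for _ in range(sweeps):   # weighted Jacobi, omega = 2/3
                    u[1:-1] += (1.0 / 3.0) * (u[:-2] + u[2:]
                                              + h * h * f[1:-1] - 2 * u[1:-1])
                return u
            n = len(u) - 1
            if n <= 2:                    # coarsest grid: exact solve
                u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
                return u
            u = smooth(u, n_smooth)
            r = np.zeros_like(u)          # residual r = f - A u
            r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
            rc = np.zeros(n // 2 + 1)     # full-weighting restriction
            rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
            ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
            u[2:-1:2] += ec[1:-1]         # linear-interpolation prolongation
            u[1:-1:2] += 0.5 * (ec[:-1] + ec[1:])
            return smooth(u, n_smooth)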

  7. Particle-In-Cell Multi-Algorithm Numerical Test-Bed

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.

    2015-11-01

    We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.
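
    The kind of comparison the test-bed enables can be illustrated with the one-dimensional Yee scheme, whose numerical dispersion relation follows in closed form (a minimal sketch with illustrative grid parameters; the spectral solver reproduces omega = c*k exactly):

        import numpy as np

        def yee_omega(k, dx, dt, c=1.0):
            """Numerical dispersion of 1-D FDTD:
            sin(w*dt/2)/(c*dt) = sin(k*dx/2)/dx, solved for w
            (real only when the CFL number c*dt/dx <= 1)."""
            return 2.0 / dt * np.arcsin(c * dt / dx * np.sin(k * dx / 2))

        k = np.linspace(0.01, np.pi / 0.1, 200)   # up to the grid Nyquist
        w_num = yee_omega(k, dx=0.1, dt=0.05)     # CFL = 0.5
        w_exact = k                               # vacuum light line, c = 1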

  8. Numerical advection algorithms and their role in atmospheric transport and chemistry models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    1987-01-01

    During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation of the advection equation is highlighted, and the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be drawn. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties in assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate, positive-definite results.
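
    The positivity issue contrasted above is easy to reproduce; in the minimal sketch below, a second-order scheme (Lax-Wendroff, standing in for the higher-order methods discussed) undershoots below zero at a steep gradient, while first-order upwind stays positive but diffuses:

        import numpy as np

        def advect(c0, nu, steps, scheme):
            """Advect a profile on a periodic grid at Courant number nu."""
            c = c0.copy()
            for _ in range(steps):
                cm, cp = np.roll(c, 1), np.roll(c, -1)
                if scheme == "upwind":
                    c = c - nu * (c - cm)
                else:   # Lax-Wendroff
                    c = c - 0.5 * nu * (cp - cm) + 0.5 * nu**2 * (cp - 2 * c + cm)
            return c

        c0 = np.where(np.abs(np.arange(100) - 20) < 8, 1.0, 0.0)   # square pulse
        print(advect(c0, 0.5, 80, "lw").min())       # typically negative: undershoot
        print(advect(c0, 0.5, 80, "upwind").min())   # stays >= 0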

  9. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    To address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel-processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a purely numerical approach, the method needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  10. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    To address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel-processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a purely numerical approach, the method needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  11. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. In order to increase the speed of the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
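
    Individual realizations of the finite-volume process that the master equation describes can be generated exactly with a Gillespie-type algorithm; a minimal sketch for the constant kernel (the paper solves the master equation itself, so this is illustrative only):

        import random

        def gillespie_coalescence(masses, K, V, t_end):
            """Stochastic coalescence of a finite particle population in
            volume V: every pair merges at rate K/V (constant kernel)."""
            masses = list(masses)
            t = 0.0
            while len(masses) > 1:
                n = len(masses)
                t += random.expovariate(K / V * n * (n - 1) / 2.0)
                if t > t_end:
                    break
                i, j = sorted(random.sample(range(n), 2))
                masses[i] += masses.pop(j)   # merge, popping the larger index
            return masses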

  12. Numerical algorithms for steady and unsteady incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hafez, Mohammed; Dacles, Jennifer

    1989-01-01

    The numerical analysis of the incompressible Navier-Stokes equations is becoming an important tool in the understanding of fluid flow problems encountered in research as well as in industry. With the advent of supercomputers, more realistic problems can be studied with a wider choice of numerical algorithms. An alternative formulation is presented for viscous incompressible flows: the incompressible Navier-Stokes equations are cast in a velocity/vorticity formulation, which consists of solving the Poisson equations for the velocity components together with the vorticity transport equation. Two numerical algorithms for steady two-dimensional laminar flows are presented. The first method is based on the actual partial differential equations and uses a finite-difference approximation of the governing equations on a staggered grid. The second method uses a finite element discretization, with the vorticity transport equation approximated using a Galerkin approximation and the Poisson equations obtained using a least-squares method. The equations are solved efficiently using Newton's method and a banded direct matrix solver (LINPACK). The method is extended to steady three-dimensional laminar flows and applied to a cubic driven cavity, using finite difference schemes and a staggered grid arrangement on a Cartesian mesh; the equations are solved iteratively using a plane zebra relaxation scheme. Currently, a two-dimensional, unsteady algorithm is being developed using a generalized coordinate system, with the equations discretized using a finite-volume approach. This work will then be extended to three-dimensional flows.
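
    In the velocity/vorticity formulation, each time level requires Poisson solves of the kind sketched below (a minimal Jacobi iteration on a square grid with homogeneous Dirichlet boundaries; the paper itself uses Newton's method with a banded direct solver or plane zebra relaxation):

        import numpy as np

        def poisson_jacobi(src, h, tol=1e-6, max_iter=20000):
            """Solve lap(u) = src by Jacobi sweeps; u = 0 on the boundary."""
            u = np.zeros_like(src)
            for _ in range(max_iter):
                u_new = u.copy()
                u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                            + u[1:-1, :-2] + u[1:-1, 2:]
                                            - h * h * src[1:-1, 1:-1])
                if np.max(np.abs(u_new - u)) < tol:
                    break
                u = u_new
            return u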

  13. The Role of Numerical Simulation in Advancing Plasma Propulsion

    NASA Astrophysics Data System (ADS)

    Turchi, P. J.; Mikellides, P. G.; Mikellides, I. G.

    1999-11-01

    Plasma thrusters often involve a complex set of interactions among several distinct physical processes. While each process can be given a separate mathematical representation, their combination generally requires numerical simulation. We have extended and used the MACH2 code successfully to simulate both self-field and applied-field magnetoplasmadynamic thrusters and, more recently, ablation-fed pulsed plasma microthrusters. MACH2 provides a framework in which to compute 2-1/2 dimensional, unsteady, MHD flows in two-temperature LTE. It couples to several options for electrical circuitry and allows access to both analytic formulas and tabular values for material properties and transport coefficients, including phenomenological models for anomalous transport. Even with all these capabilities, however, successful modeling demands comparison with experiment and with analytic solutions in idealized limits, and careful combination of MACH2 results with separate physical reasoning. Although well understood elsewhere in plasma physics, the strengths and limitations of numerical simulation for plasma propulsion need further discussion.

  14. Parametric effects of CFL number and artificial smoothing on numerical solutions using implicit approximate factorization algorithm

    NASA Technical Reports Server (NTRS)

    Daso, E. O.

    1986-01-01

    An implicit approximate-factorization algorithm is employed to quantify the parametric effects of Courant number and artificial smoothing on numerical solutions of the unsteady 3-D Euler equations for a windmilling propeller (low-speed) flow field. The results show that the propeller's global performance characteristics vary strongly with the Courant number and the artificial dissipation parameters, though the variation is much less severe at high Courant numbers. Candidate sets of Courant number and dissipation parameters could result in parameter-dependent solutions. Parameter-independent numerical solutions can be obtained if low values of the dissipation-parameter-to-time-step ratio are used in the computations. Furthermore, too much artificial damping can degrade numerical stability. Finally, it is demonstrated that highly resolved meshes may, in some cases, delay convergence, suggesting an optimum cell size for a given flow solution. It is suspected that improper boundary treatment may account for the cell-size constraint.
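
    The interplay of Courant number and artificial smoothing can be made concrete with a von Neumann analysis of a model advection scheme (a hedged illustration, not the paper's approximate-factorization operator):

        import numpy as np

        def amp_factor(nu, eps, theta):
            """|G| for first-order upwind advection at Courant number nu
            with an added explicit second-difference smoothing term eps:
            G = 1 - nu*(1 - exp(-i*theta)) - 4*eps*sin(theta/2)^2.
            Stability requires |G| <= 1 for all wavenumbers theta."""
            return np.abs(1 - nu * (1 - np.exp(-1j * theta))
                          - 4 * eps * np.sin(theta / 2) ** 2)

        theta = np.linspace(0, np.pi, 200)
        print(amp_factor(0.5, 0.0, theta).max())   # ~1.0: stable
        print(amp_factor(0.5, 0.6, theta).max())   # > 1: too much damping destabilizes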

  15. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily

    2016-02-01

    The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) the realization of tensor representations of the numerical schemes for direct simulation; b) the representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); c) computing operations in the projections of these coordinate systems, and direct and inverse transformations between them. Particular attention is paid to the use of the hardware and software of modern computer systems.

  16. Numerical Simulations and Optimisation in Forming of Advanced Materials

    NASA Astrophysics Data System (ADS)

    Huétink, J.

    2007-04-01

    With the introduction of new materials such as high-strength steels, metastable steels and fiber-reinforced composites, the need for advanced, physically valid constitutive models arises. Biaxial test equipment has been developed and applied for the determination of material data as well as for the validation of material models. An adaptive through-thickness integration scheme for plate elements is developed, which improves the accuracy of springback prediction at minimal cost. An optimization strategy is proposed that assists an engineer in modeling an optimization problem.

  17. Advanced Algorithms for Local Routing Strategy on Complex Networks.

    PubMed

    Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K; Dong, Chuanfei; Miao, Lixin; Wang, Binghong

    2016-01-01

    Despite the significant improvement in network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need to acquire global information about the network, which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and thoroughly investigated in this paper: a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70-90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because a local routing strategy enjoys great superiority over a global routing strategy not only in terms of reduced computational expense, but also in terms of flexibility of implementation, especially for large-scale networks. PMID:27434502

  18. Advanced Algorithms for Local Routing Strategy on Complex Networks

    PubMed Central

    Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong

    2016-01-01

    Despite the significant improvement in network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need to acquire global information about the network, which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and thoroughly investigated in this paper: a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because a local routing strategy enjoys great superiority over a global routing strategy not only in terms of reduced computational expense, but also in terms of flexibility of implementation, especially for large-scale networks. PMID:27434502

  19. Numerical algorithms for computations of feedback laws arising in control of flexible systems

    NASA Technical Reports Server (NTRS)

    Lasiecka, Irena

    1989-01-01

    Several continuous models are examined which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws (particularly stabilizing feedbacks) are examined, with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite-dimensional solutions. Finite-dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite-dimensional) systems, with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.

  20. Basic and Advanced Numerical Performances Relate to Mathematical Expertise but Are Fully Mediated by Visuospatial Skills

    PubMed Central

    2016-01-01

    Recent studies have highlighted the potential role of basic numerical processing in the acquisition of numerical and mathematical competences. However, it is debated whether high-level numerical skills and mathematics depend specifically on basic numerical representations. In this study, mathematicians and nonmathematicians performed a basic number line task, which required mapping positive and negative numbers on a physical horizontal line, and which has been shown to correlate with more advanced numerical abilities and mathematical achievement. We found that mathematicians were more accurate than nonmathematicians when mapping positive, but not negative, numbers, which are considered numerical primitives and cultural artifacts, respectively. Moreover, performance on positive number mapping could predict whether one is a mathematician or not, and was mediated by more advanced mathematical skills. This finding might suggest a link between basic and advanced mathematical skills. However, when we included visuospatial skills, as measured by the block design subtest, the mediation analysis revealed that the relation between performance in the number line task and group membership was explained by non-numerical visuospatial skills. These results demonstrate that the relation between basic, even specific, numerical skills and advanced mathematical achievement can be artifactual and explained by visuospatial processing. PMID:26913930

  1. Advanced information processing system: Hosting of advanced guidance, navigation and control algorithms on AIPS using ASTER

    NASA Technical Reports Server (NTRS)

    Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John

    1994-01-01

    This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.

  2. Advanced metaheuristic algorithms for laser optimization in optical accelerator technologies

    NASA Astrophysics Data System (ADS)

    Tomizawa, Hiromitsu

    2011-10-01

    Lasers are among the most important experimental tools for user facilities, including synchrotron radiation sources and free electron lasers (FEL). In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, lasers play important roles as seed light sources or as photocathode-illuminating light sources to generate a high-brightness electron bunch. For future accelerators, laser-based technologies such as electro-optic (EO) sampling to measure ultra-short electron bunches and optical-fiber-based femtosecond timing systems have been intensively developed in the last decade. Therefore, control and optimization of laser pulse characteristics are strongly required for many kinds of experiments and for the improvement of accelerator systems. However, it is commonly believed that lasers must be tuned and customized manually by experts for each requirement. This makes it difficult for laser systems to be part of the common accelerator infrastructure. Automatic laser tuning requires sophisticated algorithms, and the metaheuristic algorithm is one of the best solutions. A metaheuristic laser tuning system is expected to reduce the human effort and time required for laser preparations. I have shown successful results with a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each machine requirement.
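
    The hill-climbing component can be sketched generically (a minimal illustration; `measure` stands in for a real laser diagnostic returning a figure of merit, and the paper's method additionally layers fuzzy set theory on top of such a loop):

        import random

        def hill_climb(measure, x0, step=0.05, iters=500):
            """Greedy tuning of actuator settings: perturb one parameter
            at a time and keep the change only when the measured figure
            of merit improves."""
            x = list(x0)
            best = measure(x)
            for _ in range(iters):
                i = random.randrange(len(x))
                trial = list(x)
                trial[i] += random.uniform(-step, step)
                score = measure(trial)
                if score > best:
                    x, best = trial, score
            return x, best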

  3. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole–Karkkainnen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  4. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of, and avoid falling into, local optima and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  5. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of, and avoid falling into, local optima and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424
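
    The quantum-behaved update described above resembles the one used in quantum-behaved particle swarm optimization; the sketch below shows such an update for one bat in one dimension (the paper's exact update rules are not reproduced here, so treat this purely as an illustration of the mechanism):

        import math
        import random

        def quantum_update(x, attractor, mbest, beta=0.75):
            """Contract toward an attractor (e.g., the current best or the
            mean best position, depending on the search stage) and take a
            log-distributed jump scaled by the distance to mbest."""
            u = random.random()
            jump = beta * abs(mbest - x) * math.log(1.0 / u)
            return attractor + jump if random.random() < 0.5 else attractor - jump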

  6. Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD

    SciTech Connect

    Luz, Fernando H. P.; Mendes, Tereza

    2010-11-12

    Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.
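
    Both algorithms share a molecular dynamics core; the sketch below shows a generic hybrid Monte Carlo step for an arbitrary action U (illustrative only: lattice QCD replaces q by gauge links and U by the gauge-plus-fermion action, and the R algorithm omits the accept/reject test at the price of step-size errors):

        import numpy as np

        def leapfrog(q, p, grad_U, eps, n_steps):
            """Reversible, area-preserving integration of H = U(q) + p.p/2."""
            q, p = q.copy(), p.copy()
            p -= 0.5 * eps * grad_U(q)            # half kick
            for _ in range(n_steps - 1):
                q += eps * p                      # drift
                p -= eps * grad_U(q)              # full kick
            q += eps * p
            p -= 0.5 * eps * grad_U(q)            # final half kick
            return q, p

        def hmc_step(q, U, grad_U, eps=0.1, n_steps=20):
            """One HMC update of a 1-D array q of field variables."""
            p = np.random.standard_normal(q.shape)
            q_new, p_new = leapfrog(q, p, grad_U, eps, n_steps)
            dH = U(q_new) - U(q) + 0.5 * (p_new @ p_new - p @ p)
            return q_new if np.random.random() < np.exp(-dH) else q   # Metropolis test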

  7. A semi-numerical algorithm for instability of compressible multilayered structures

    NASA Astrophysics Data System (ADS)

    Tang, Shan; Yang, Yang; Peng, Xiang He; Liu, Wing Kam; Huang, Xiao Xu; Elkhodary, Khalil

    2015-07-01

    A computational method is proposed for the analysis and prediction of instability (wrinkling or necking) of multilayered compressible plates and sheets made of metals or polymers under plane strain conditions. In previous works, a basic assumption (or physical argument) frequently made to simplify the mathematical derivations is that the materials are incompressible. To account for the compressibility of metals and polymers (a lower Poisson's ratio corresponds to a more compressible material), we propose a combined semi-numerical algorithm and finite element method for instability analysis. Our proposed algorithm is verified by comparing its predictions with published results in the literature for thin films on polymer/metal substrates and for polymer/metal systems. The new combined method is then used to predict the effects of compressibility on instability behavior. The results suggest the potential utility of compressibility in the design of multilayered structures.

  8. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  9. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  10. A numerical technique for calculation of the noise of high-speed propellers with advanced blade geometry

    NASA Technical Reports Server (NTRS)

    Nystrom, P. A.; Farassat, F.

    1980-01-01

    A numerical technique and a computer program were developed for the prediction of the noise of propellers with advanced geometry. The blade upper and lower surfaces are described by a curvilinear coordinate system, which is also used to divide the blade surfaces into panels. Two different acoustic formulations in the time domain were used to improve the speed and efficiency of the noise calculations: an acoustic formulation with the Doppler-factor singularity for panels moving at subsonic speeds, and the collapsing-sphere formulation for panels moving at transonic or supersonic speeds. The second formulation involves a sphere which is centered at the observer position and whose radius decreases at the speed of sound; the acoustic equation consists of integrals over the curve of intersection of the sphere with the panels on the blade. Algorithms used in some parts of the computer program are discussed. Comparisons with measured acoustic data for two model high-speed propellers with advanced geometry are also presented.

  11. New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms

    PubMed Central

    Retsky, Michael

    2009-01-01

    A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287

  12. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    A detailed description of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC-88) on the use of the geostationary orbit (GEO) and the planning of space services utilizing it. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

  13. A Flexible Reservation Algorithm for Advance Network Provisioning

    SciTech Connect

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a specified length of time. However, users currently cannot inquire about bandwidth availability, nor are they offered alternative suggestions when reservation requests fail. In general, the number of reservation options grows exponentially with the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided total-volume and time constraints and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
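
    To make the earliest-completion idea concrete, the toy sketch below searches a piecewise-constant availability timeline on a single bottleneck link for the earliest window that can carry a requested volume at a constant rate. The step-timeline model is an assumed simplification; the actual algorithm searches paths through a time-dependent network.

```python
from typing import List, Optional, Tuple

def earliest_completion(timeline: List[Tuple[float, float]], horizon: float,
                        volume: float) -> Optional[Tuple[float, float, float]]:
    """timeline: sorted (start_time, available_bw) steps on a bottleneck link.
    Returns (start, end, rate) of the earliest-finishing constant-rate window."""
    best = None
    for i, (t0, _) in enumerate(timeline):
        bw = float("inf")
        for j in range(i, len(timeline)):
            bw = min(bw, timeline[j][1])       # sustainable rate over the span
            if bw <= 0:
                break
            t1 = timeline[j + 1][0] if j + 1 < len(timeline) else horizon
            need = volume / bw                 # duration at this constant rate
            if t0 + need <= t1:                # transfer fits before next step
                if best is None or t0 + need < best[1]:
                    best = (t0, t0 + need, bw)
                break                          # longer spans only finish later
    return best

# available bandwidth drops at t=5 and recovers at t=9 (arbitrary numbers)
print(earliest_completion([(0, 2.0), (5, 0.5), (9, 3.0)], horizon=30, volume=12))
```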

  14. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
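
    The monitor-and-revert pattern at the heart of run-time assurance fits in a few lines. Everything below (the toy plant, both controllers, and the envelope limit) is a made-up stand-in used only to illustrate the switching logic, not NASA's framework.

```python
# Minimal run-time assurance sketch: a complex (unverified) controller runs
# until a safety monitor trips, after which a certified fallback takes over.
def advanced_controller(state):       # high-performance, uncertified (assumed)
    return 2.5 * (1.0 - state)        # aggressive gain

def certified_controller(state):      # simple, verified fallback (assumed)
    return 0.5 * (1.0 - state)

def monitor_ok(state, command, limit=1.5):
    # envelope check: both the state and the command must stay in bounds
    return abs(state) < limit and abs(command) < limit

state, use_fallback = 0.0, False
for step in range(50):
    cmd = certified_controller(state) if use_fallback else advanced_controller(state)
    if not use_fallback and not monitor_ok(state, cmd):
        use_fallback = True                       # latch the reversion
        cmd = certified_controller(state)
    state += 0.1 * cmd                            # toy first-order plant
print(f"final state {state:.3f}, reverted: {use_fallback}")
```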

  15. Delicate visual artifacts of advanced digital video processing algorithms

    NASA Astrophysics Data System (ADS)

    Nicolas, Marina M.; Lebowsky, Fritz

    2005-03-01

    With the advent of digital TV, sophisticated video processing algorithms have been developed to improve the rendering of motion or colors. However, the perceived subjective quality of these new systems sometimes conflicts with the objective, measurable improvement we expect to obtain. In this presentation, we show examples where algorithms should visually improve the skin tone rendering of decoded pictures under normal conditions, but surprisingly fail when the quality of MPEG encoding drops below a just-noticeable threshold. In particular, we demonstrate that simple objective criteria used for the optimization, such as SAD, PSNR or histograms, sometimes fail, partly because they are defined on a global scale, ignoring local characteristics of the picture content. We then integrate a simple human visual model to measure potential artifacts with regard to spatial and temporal variations of the objects' characteristics. Tuning some of the model's parameters allows correlating the perceived quality with compression metrics of various encoders. We show the evolution of our reference parameters with respect to the compression ratios. Finally, using the output of the model, we can control the parameters of the skin tone algorithm to reach an improvement in overall system quality.
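
    The weakness of global criteria is easy to reproduce. In the sketch below (image sizes and error magnitudes are arbitrary), an error spread thinly over the whole image and an error concentrated in one conspicuous patch produce identical SAD scores, even though their local severity differs sharply.

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 128, (64, 64))           # keep headroom so nothing clips
spread = ref + rng.choice([-2, 2], ref.shape)  # +/-2 error on every pixel
local = ref.copy()
local[:8, :8] += 128                           # same total error, one patch
for name, img in [("spread", spread), ("local", local)]:
    # identical SAD, very different local (perceptual) severity
    print(f"{name}: SAD={sad(ref, img)}  PSNR={psnr(ref, img):.1f} dB")
```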

  16. Block sparse Cholesky algorithms on advanced uniprocessor computers

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
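
    For readers unfamiliar with the blocking idea, the dense right-looking sketch below shows the factor/solve/update block structure that makes level-3 performance possible; the paper's sparse multifrontal and left-looking methods are far more involved.

```python
import numpy as np

# Blocked dense Cholesky: factor a diagonal block, solve the panel below it,
# then update the trailing submatrix with a matrix-matrix product.
def blocked_cholesky(A, nb=32):
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])          # diagonal block
        if e < n:
            L11 = A[k:e, k:e]
            A[e:, k:e] = np.linalg.solve(L11, A[e:, k:e].T).T  # panel solve
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T             # trailing update
    return np.tril(A)

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)       # symmetric positive definite test matrix
L = blocked_cholesky(A)
print("max factorization error:", np.abs(L @ L.T - A).max())
```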

  17. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  18. Shock focusing flow field simulated by a high-resolution numerical algorithm

    NASA Astrophysics Data System (ADS)

    Jung, Y. G.; Chang, K. S.

    2012-11-01

    A shock-focusing concave reflector is a simple and effective tool for obtaining a high-pressure pulse wave near the physical focal point. In the past, many optical images were obtained through experimental studies. However, measurement of field variables is not easy because the phenomenon is of short duration and the magnitude of the shock waves varies from pulse to pulse due to poor reproducibility. Using a wave propagation algorithm and the Cartesian embedded boundary method, we have successfully obtained numerical schlieren images that resemble the experimental results. From the numerical results, various field variables, such as pressure, density and vorticity, become available for the better understanding and design of shock-focusing devices.

  19. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

    SciTech Connect

    Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

    2003-12-15

    We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

  20. Advanced entry guidance algorithm with landing footprint computation

    NASA Astrophysics Data System (ADS)

    Leavitt, James Aaron

    The design and performance evaluation of an entry guidance algorithm for future space transportation vehicles is presented. The algorithm performs two functions: on-board trajectory planning and trajectory tracking. The planned longitudinal path is followed by tracking drag acceleration, as is done by the Space Shuttle entry guidance. Unlike the Shuttle entry guidance, lateral path curvature is also planned and followed. A new trajectory planning function for the guidance algorithm is developed that is suitable for suborbital entry and that significantly enhances the overall performance of the algorithm for both orbital and suborbital entry. In comparison with the previous trajectory planner, the new planner produces trajectories that are easier to track, especially near the upper and lower drag boundaries and for suborbital entry. The new planner accomplishes this by matching the vehicle's initial flight path angle and bank angle, and by enforcing the full three-degree-of-freedom equations of motion with control derivative limits. Insights gained from trajectory optimization results contribute to the design of the new planner, giving it near-optimal downrange and crossrange capabilities. Planned trajectories and guidance simulation results are presented that demonstrate the improved performance. Based on the new planner, a method is developed for approximating the landing footprint for entry vehicles in near real-time, as would be needed for an on-board flight management system. The boundary of the footprint is constructed from the endpoints of extreme downrange and crossrange trajectories generated by the new trajectory planner. The footprint algorithm inherently possesses many of the qualities of the new planner, including quick execution, the ability to accurately approximate the vehicle's glide capabilities, and applicability to a wide range of entry conditions. Footprints can be generated for orbital and suborbital entry conditions using a pre

  1. Highly efficient numerical algorithm based on random trees for accelerating parallel Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Acebrón, Juan A.; Rodríguez-Rozas, Ángel

    2013-10-01

    An efficient numerical method based on a probabilistic representation for the Vlasov-Poisson system of equations in Fourier space has been derived. This has been done theoretically for arbitrary-dimensional problems, and particularized to one-dimensional problems for numerical purposes. The representation has been validated theoretically in the linear regime by comparing the solution obtained with the classical results of linear Landau damping. The numerical strategy requires generating suitable random trees combined with a Padé approximant for accurately approximating a given divergent series. Such series are obtained by summing the partial contributions to the solution coming from trees with an arbitrary number of branches. These contributions, coming in general from multi-dimensional definite integrals, are efficiently computed by a quasi-Monte Carlo method. It is shown how the accuracy of the method can be effectively increased by considering more terms of the series. The new representation was used successfully to develop a Probabilistic Domain Decomposition method suited for massively parallel computers, which improves the scalability found in classical methods. Finally, a few numerical examples based on classical phenomena such as nonlinear Landau damping and the two-stream instability are given, illustrating the remarkable performance of the algorithm when the results are compared with those obtained using a classical method.
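
    The Padé-summation step can be demonstrated on a classic divergent series. The example below uses Euler's series for a Stieltjes integral rather than the Vlasov-Poisson series itself: partial sums diverge, while a Padé approximant built from the same coefficients approaches the underlying integral.

```python
from math import factorial

import numpy as np
from scipy.integrate import quad
from scipy.interpolate import pade

x = 0.4
coeffs = [(-1) ** k * factorial(k) for k in range(12)]   # increasing powers
p, q = pade(coeffs, 6)                                   # [5/6] approximant
partial = sum(c * x ** k for k, c in enumerate(coeffs))
reference = quad(lambda t: np.exp(-t) / (1 + x * t), 0, np.inf)[0]
print("partial sum:", partial)          # already diverging at x = 0.4
print("Pade [5/6] :", p(x) / q(x))      # compare with the integral below
print("integral   :", reference)
```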

  2. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  3. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    SciTech Connect

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  4. Thermodynamically Consistent Physical Formulation and an Efficient Numerical Algorithm for Incompressible N-Phase Flows

    NASA Astrophysics Data System (ADS)

    Dong, Suchuan

    2015-11-01

    This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservations of mass/momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm has overcome the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.

  5. New Advances in the Study of the Proximal Point Algorithm

    NASA Astrophysics Data System (ADS)

    Moroşanu, Gheorghe

    2010-09-01

    Consider in a real Hilbert space H the inexact, Halpern-type, proximal point algorithm x_{n+1} = α_n u + (1 - α_n) J_{β_n} x_n + e_n, n = 0,1,…, (H-PPA) where u, x_0 ∈ H are given points, J_{β_n} = (I + β_n A)^{-1} is the resolvent of a given maximal monotone operator A, and (e_n) is the error sequence, under new assumptions on α_n ∈ (0,1) and β_n ∈ (0,1). Several strong convergence results for the H-PPA are presented under the general condition that the error sequence converges strongly to zero, thus improving Rockafellar's classical summability condition on (‖e_n‖) that has been used extensively so far for different versions of the proximal point algorithm. Our results extend and improve some recent ones. These results can be applied to approximate minimizers of convex functionals. Convergence rate estimates are established for a sequence approximating the minimum value of such a functional.
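
    As a concrete scalar instance (our choice, not taken from the paper), let A = ∂f for f(x) = |x| on H = R; the resolvent J_β = (I + βA)^{-1} is then soft-thresholding, and the H-PPA iterates converge to the unique minimizer x* = 0.

```python
import numpy as np

def resolvent(x, beta):                 # prox of beta*|.| (soft-threshold)
    return np.sign(x) * max(abs(x) - beta, 0.0)

u, x = 3.0, 5.0                         # anchor point u and start x_0 (assumed)
for n in range(1, 200):
    alpha = 1.0 / (n + 1)               # alpha_n -> 0
    beta = 0.5                          # beta_n in (0, 1)
    err = (-0.5) ** n * 1e-3            # error sequence with ||e_n|| -> 0
    x = alpha * u + (1 - alpha) * resolvent(x, beta) + err
print("x after 200 steps:", x)          # close to the unique minimizer 0
```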

  6. Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI.

    PubMed

    Vemuri, B C; Guo, Y; Lai, S H; Leonard, C M

    1997-09-01

    In this paper, we present new and fast numerical algorithms for shape recovery from brain MRI using multiresolution hybrid shape models. In this modeling framework, shapes are represented by a core rigid shape characterized by a superquadric function and a superimposed displacement function which is characterized by a membrane spline discretized using the finite-element method. Fitting the model to brain MRI data is cast as an energy minimization problem which is solved numerically. We present three new computational methods for model fitting to data. These methods involve novel mathematical derivations that lead to efficient numerical solutions of the model fitting problem. The first method involves using the nonlinear conjugate gradient technique with a diagonal Hessian preconditioner. The second method involves the nonlinear conjugate gradient in the outer loop for solving global parameters of the model and a preconditioned conjugate gradient scheme for solving the local parameters of the model. The third method involves the nonlinear conjugate gradient in the outer loop for solving the global parameters and a combination of the Schur complement formula and the alternating direction-implicit method for solving the local parameters of the model. We demonstrate the efficiency of our model fitting methods via experiments on several MR brain scans. PMID:9873915
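
    On a quadratic stand-in energy, the first fitting strategy above (nonlinear conjugate gradients with a diagonal Hessian preconditioner) reduces to the preconditioned CG recursion sketched below; the SPD test matrix is a placeholder for the actual model-fitting Hessian, not the spline model itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                 # SPD stand-in "Hessian"
b = rng.standard_normal(n)
diag = np.diag(A)                           # diagonal (Jacobi) preconditioner

x = np.zeros(n)
g = A @ x - b                               # gradient of E(x) = x'Ax/2 - b'x
z = g / diag                                # preconditioned gradient
d = -z
for k in range(200):
    t = -(g @ d) / (d @ (A @ d))            # exact line search (quadratic E)
    x = x + t * d
    g_new = A @ x - b
    if np.linalg.norm(g_new) < 1e-10:
        break
    z_new = g_new / diag
    beta = max(0.0, g_new @ (z_new - z) / (g @ z))   # Polak-Ribiere(+)
    d = -z_new + beta * d
    g, z = g_new, z_new
print("iterations:", k, " residual:", np.linalg.norm(A @ x - b))
```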

  7. An advanced image fusion algorithm based on wavelet transform: incorporation with PCA and morphological processing

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Essock, Edward A.; Hansen, Bruce C.

    2004-05-01

    There are numerous applications for image fusion, some of which include medical imaging, remote sensing, nighttime operations and multi-spectral imaging. In general, the discrete wavelet transform (DWT) and various pyramids (such as Laplacian, ratio, contrast, gradient and morphological pyramids) are the most common and effective methods. For quantitative evaluation of the quality of fused imagery, the root mean square error (RMSE) is the most suitable measure of quality if there is a "ground truth" image available; otherwise, the entropy, spatial frequency or image quality index of the input images and the fused images can be calculated and compared. Here, after analyzing the pyramids' performance with the four measures mentioned, an advanced wavelet transform (aDWT) method that incorporates principal component analysis (PCA) and morphological processing into a regular DWT fusion algorithm is presented. Specifically, at each scale of the wavelet-transformed images, a principal vector was derived from the two input images and then applied to the two images' approximation coefficients (i.e., they were fused by using the principal eigenvector). For the detail coefficients (i.e., three quarters of the coefficients), the larger absolute values were chosen and subjected to a neighborhood morphological processing procedure which served to verify the selected pixels by using a "filling" and "cleaning" operation (this operation filled or removed isolated pixels in a 3-by-3 local region). The fusion performance of the advanced DWT (aDWT) method proposed here was compared with six other common methods, and, based on the four quantitative measures, was found to perform the best when tested on the four input image types. Since the different image sources used here varied with respect to intensity, contrast, noise, and intrinsic characteristics, the aDWT is a promising image fusion procedure for inhomogeneous imagery.
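
    A one-level sketch of this recipe using PyWavelets is shown below. The PCA weighting of the approximation bands and the morphological verification of the max-selection mask follow the description above, but the wavelet choice, the single decomposition level, and the use of opening/closing as the fill/clean step are simplifying assumptions.

```python
import numpy as np
import pywt
from scipy import ndimage

def fuse(img_a, img_b, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)
    # PCA weights from the 2x2 covariance of the two approximation bands
    X = np.stack([cA1.ravel(), cA2.ravel()])
    w = np.linalg.eigh(np.cov(X))[1][:, -1]          # principal eigenvector
    w = np.abs(w) / np.abs(w).sum()
    cA = w[0] * cA1 + w[1] * cA2
    fused_details = []
    for d1, d2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)):
        mask = np.abs(d1) >= np.abs(d2)              # choose larger magnitude
        # "clean" then "fill" isolated picks in a small neighborhood
        mask = ndimage.binary_closing(ndimage.binary_opening(mask))
        fused_details.append(np.where(mask, d1, d2))
    return pywt.idwt2((cA, tuple(fused_details)), wavelet)

a = np.random.default_rng(0).random((128, 128))
b = np.random.default_rng(1).random((128, 128))
print(fuse(a, b).shape)
```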

  8. Performance of an advanced lump correction algorithm for gamma-ray assays of plutonium

    SciTech Connect

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1994-08-01

    The results of an experimental study to evaluate the performance of an advanced lump correction algorithm for gamma-ray assays of plutonium are presented. The algorithm is applied to correct segmented gamma scanner (SGS) and tomographic gamma scanner (TGS) assays of plutonium samples in 55-gal. drums containing heterogeneous matrices. The relative ability of the SGS and TGS to separate matrix and lump effects is examined, and a technique to detect gross heterogeneity in SGS assays is presented.

  9. Advances in contact algorithms and their application to tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.

    1988-01-01

    Currently used techniques for tire contact analysis are reviewed. Discussion focuses on the different techniques used in modeling frictional forces and the treatment of contact conditions. A status report is presented on a new computational strategy for the modeling and analysis of tires, including the solution of the contact problem. The key elements of the proposed strategy are: (1) use of semianalytic mixed finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) use of perturbed Lagrangian formulation for the determination of the contact area and pressure; and (3) application of multilevel iterative procedures and reduction techniques to generate the response of the tire. Numerical results are presented to demonstrate the effectiveness of a proposed procedure for generating the tire response associated with different Fourier harmonics.

  10. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

    The problem of environmental monitoring network optimization (MNO) is one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as the design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (the family of kriging models, conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion, which has advantages and drawbacks. In the present research we study the application of advanced techniques derived from statistical learning theory (SLT) - support vector machines (SVM) - and consider the optimization of monitoring networks for classification problems (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high-dimensional spaces. The SVM solution maximizes the decision boundary between classes and has good generalization properties for noisy data. The sparse solution of SVM is based on support vectors - the data points which contribute to the solution with nonzero weights. Fundamentally, MNO for classification problems can be considered as the task of selecting new measurement points which increase the quality of the spatial classification and reduce the testing error (the error on new independent measurements). In SLT this is a typical problem of active learning - the selection of new unlabelled points which efficiently reduce the testing error. A classical approach to active learning (margin sampling) is to sample the points closest to the classification boundary. This solution is suboptimal when points (or, generally, the dataset) are redundant for the same class. In the present research we propose and study two new advanced methods of active learning adapted to the solution of
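
    The classical margin-sampling baseline described above fits in a few lines with scikit-learn; the coordinates and labels below are toy placeholders for real monitoring data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_labelled = rng.uniform(0, 10, (40, 2))            # e.g. station coordinates
y = (X_labelled[:, 0] + X_labelled[:, 1] > 10).astype(int)  # toy "soil class"
candidates = rng.uniform(0, 10, (500, 2))           # potential new stations

# train on the labelled stations, then propose the candidates closest to the
# decision boundary as the next measurement locations (margin sampling)
svm = SVC(kernel="rbf", gamma="scale").fit(X_labelled, y)
margin = np.abs(svm.decision_function(candidates))  # distance-like score
proposal = candidates[np.argsort(margin)[:5]]       # 5 most ambiguous points
print(proposal)
```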

  11. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is larger than the viscous relaxation time. The small time step required for stability (< 2 Kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to respectively control both the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  12. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  13. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  14. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  15. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  16. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  17. A Straightforward Method for Advance Estimation of User Charges for Information in Numeric Databases.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo

    1986-01-01

    Describes a method for advance estimation of user charges for queries in relational data model-based numeric databases when charges are based on data retrieved. Use of this approach is demonstrated by sample queries to an imaginary marketing database. The principles and methods of this approach and its relevance are discussed. (MBR)

  18. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    NASA Astrophysics Data System (ADS)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using

  19. Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    NASA Astrophysics Data System (ADS)

    Kitaura, F. S.; Enßlin, T. A.

    2008-09-01

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift-distortion correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
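
    The operator-based implementation described above can be sketched in one dimension: with a Fourier-diagonal signal covariance S and a masking response R, the Wiener-filter system (S⁻¹ + Rᵀ N⁻¹ R) s = Rᵀ N⁻¹ d is solved by conjugate gradients using nothing but FFTs and masks. The power spectrum, window, and noise level below are all assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 256
k = np.fft.fftfreq(n)
P = 1.0 / (1e-3 + np.abs(k)) ** 2                 # assumed signal power spectrum
mask = (np.random.default_rng(4).random(n) > 0.3).astype(float)  # response R
noise_var = 0.01                                  # diagonal noise covariance N

def apply_Sinv(x):                                # S is diagonal in Fourier space
    return np.fft.ifft(np.fft.fft(x) / P).real

def apply_lhs(x):                                 # (S^-1 + R' N^-1 R) x
    return apply_Sinv(x) + mask * x / noise_var

rng = np.random.default_rng(5)
signal = np.fft.ifft(np.sqrt(P) * np.fft.fft(rng.standard_normal(n))).real
data = mask * (signal + np.sqrt(noise_var) * rng.standard_normal(n))

op = LinearOperator((n, n), matvec=apply_lhs, dtype=float)
s_map, info = cg(op, mask * data / noise_var)     # Wiener-filter solution
print("CG converged:", info == 0,
      "| rms misfit:", float(np.sqrt(np.mean((s_map - signal) ** 2))))
```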

  20. Advanced Numerical Imaging Procedure Accounting for Non-Ideal Effects in GPR Scenarios

    NASA Astrophysics Data System (ADS)

    Comite, Davide; Galli, Alessandro; Catapano, Ilaria; Soldovieri, Francesco

    2015-04-01

    advanced implementation have also been tested by introducing 'errors' on the knowledge of the background medium permittivity, by simulating the presence of one or more layers, and by choosing different models of the surface roughness. The impact of these issues on the performance of both the conventional procedure and the advanced one will be extensively highlighted and discussed at the conference. [1] G. Valerio et al., "GPR detectability of rocks in a Martian-like shallow subsoil: A numerical approach," Plan. Sp. Sci., vol. 62, pp. 31-40, 2012. [2] A. Galli et al., "3D imaging of buried dielectric targets with a tomographic microwave approach applied to GPR synthetic data," Int. J. Antennas Propag., art. ID 610389, 10 pp., 2013 [3] F. Soldovieri et al., "A linear inverse scattering algorithm for realistic GPR applications," Near Surface Geophysics, 5 (1), pp. 29-42, 2007.

  1. Numerical Investigation of a Cascaded Longitudinal Space-Charge Amplifier at the Fermilab's Advanced Superconducting Test Accelerator

    SciTech Connect

    Halavanau, A.; Piot, P.

    2015-06-01

    In a cascaded longitudinal space-charge amplifier (LSCA), initial density noise in a relativistic e-beam is amplified via the interplay of longitudinal space charge forces and properly located dispersive sections. This type of amplification process was shown to potentially result in large final density modulations [1] compatible with the production of broadband electromagnetic radiation. The technique was recently demonstrated in the optical domain [2]. In this paper we investigate, via numerical simulations, the performances of a cascaded LSCA beamline at the Fermilab’s Advanced Superconducting Test Accelerator (ASTA). We especially explore the properties of the produced broadband radiation. Our studies have been conducted with a grid-less three-dimensional space-charge algorithm.

  2. Study on sub-cycling algorithm for flexible multi-body system: stability analysis and numerical examples

    NASA Astrophysics Data System (ADS)

    Miao, J. C.; Zhu, P.; Shi, G. L.; Chen, G. L.

    2008-01-01

    Numerical stability is an important issue for any integration procedure. Since the sub-cycling algorithm was first presented by Belytschko et al. (Comput Methods Appl Mech Eng 17/18: 259-275, 1979), various integration procedures of this kind have been developed over the following two decades and their stability has been widely studied. However, how to apply sub-cycling to flexible multi-body dynamics (FMD) has received little investigation to date. A particular sub-cycling algorithm for FMD based on the central difference method was introduced in detail in part I (Miao et al. in Comp Mech doi: 10.1007/s00466-007-0183-9) of this paper. Adopting an integral approximation operator method, the stability analysis of the presented algorithm is transformed into a generalized eigenvalue problem, which is then solved and discussed. Numerical examples are presented to further verify the validity and efficiency of the algorithm.

  3. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and constraints of the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions of the multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm into the reaction transport simulator
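
    The harmonic-average statement above translates directly into code; the volume fractions and moduli below are arbitrary examples, not values from the study.

```python
# Upscaled drained bulk modulus of a MINC gridblock when the off-diagonal
# compressibility couplings vanish: the volume-fraction-weighted harmonic
# average of the continuum moduli, as stated above.
def drained_bulk_modulus(fractions, moduli):
    assert abs(sum(fractions) - 1.0) < 1e-12
    return 1.0 / sum(phi / k for phi, k in zip(fractions, moduli))

# e.g. a soft fracture continuum plus two stiffer matrix continua [Pa]
print(drained_bulk_modulus([0.05, 0.45, 0.50], [2.0e9, 20.0e9, 30.0e9]))
```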

  4. Springback Simulation: Impact of Some Advanced Constitutive Models and Numerical Parameters

    NASA Astrophysics Data System (ADS)

    Haddag, Badis; Balan, Tudor; Abed-Meraim, Farid

    2005-08-01

    The impact of material models on the numerical simulation of springback is investigated. The study is focused on the strain-path sensitivity of two hardening models. While both models predict the Bauschinger effect, their responses in the transient zone after a strain-path change are fairly different. Their respective predictions are compared in terms of sequential test response and of strip-drawing springback. For this purpose, an accurate and general time integration algorithm has been developed and implemented in the Abaqus code. The impact of several numerical parameters is also studied in order to assess the overall accuracy of the finite element prediction. For some test geometries, both material and numerical parameters are shown to influence the springback behavior to a large extent. Moreover, a general trend cannot always be extracted, thus justifying the need for finite element simulation of the stamping process.

  5. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  6. Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.

    PubMed

    Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A

    1989-01-01

    Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be H_A = 0.90 MPa, ν_s = 0.39 and k = 0.44 × 10⁻¹⁵ m⁴/N·s, respectively, and those for patellar groove cartilage to be H_A = 0.47 MPa, ν_s = 0.24 and k = 1.42 × 10⁻¹⁵ m⁴/N·s. One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much smaller than the values determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage from indentation experiments. PMID:2613721

  7. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
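
    A toy rendition of the grouping step helps fix ideas: given pairwise-compatible arc positions, enumerate mutually compatible groups, then greedily assign the largest groups first so every administration receives a PDA. The compatibility data below is invented, and the real NASARC heuristic is considerably richer.

```python
from itertools import combinations

arc = range(12)                                   # discrete arc positions
compatible = {                                    # (admin pair) -> positions
    ("A", "B"): set(range(0, 8)), ("A", "C"): set(range(4, 12)),
    ("B", "C"): set(range(5, 7)), ("A", "D"): set(),
    ("B", "D"): set(range(8, 12)), ("C", "D"): set(range(6, 12)),
}
admins = ["A", "B", "C", "D"]

def group_arc(group):                             # arc where the group coexists
    s = set(arc)
    for pair in combinations(sorted(group), 2):
        s &= compatible.get(pair, set())
    return s

# all feasible groups, enumerated largest-first for the greedy pass
groups = [(set(g), group_arc(g)) for r in (3, 2, 1)
          for g in combinations(admins, r) if group_arc(g)]
unassigned, plan = set(admins), []
for g, positions in groups:
    if g <= unassigned:                           # group still fully unserved
        plan.append((sorted(g), sorted(positions)))
        unassigned -= g
print(plan, "left over:", sorted(unassigned))
```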

  8. Left Ventricular Flow Analysis: Recent Advances in Numerical Methods and Applications in Cardiac Ultrasound

    PubMed Central

    Borazjani, Iman; Westerdale, John; McMahon, Eileen M.; Rajaraman, Prathish K.; Heys, Jeffrey J.

    2013-01-01

    The left ventricle (LV) pumps oxygenated blood from the lungs to the rest of the body through systemic circulation. The efficiency of such a pumping function is dependent on blood flow within the LV chamber. It is therefore crucial to accurately characterize LV hemodynamics. Improved understanding of LV hemodynamics is expected to provide important clinical diagnostic and prognostic information. We review the recent advances in numerical and experimental methods for characterizing LV flows and focus on analysis of intraventricular flow fields by echocardiographic particle image velocimetry (echo-PIV), due to its potential for broad and practical utility. Future research directions to advance patient-specific LV simulations include development of methods capable of resolving heart valves, higher temporal resolution, automated generation of three-dimensional (3D) geometry, and incorporating actual flow measurements into the numerical solution of the 3D cardiovascular fluid dynamics. PMID:23690874

  9. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  10. Detailed numerical simulation of shock-body interaction in 3D multicomponent flow using the RKDG numerical method and "DiamondTorre" GPU algorithm of implementation

    NASA Astrophysics Data System (ADS)

    Korneev, Boris; Levchenko, Vadim

    2016-02-01

    Interaction between a shock wave and an inhomogeneity in a fluid exhibits complicated behavior, including vortex and turbulence generation, mixing, and shock-wave scattering and reflection. In the present paper we deal with the numerical simulation of this process. The Euler equations of unsteady inviscid compressible three-dimensional flow are used within the four-equation model of multicomponent flow. These equations are discretized using the RKDG numerical method and implemented with the help of the DiamondTorre algorithm, yielding an efficient GPGPU solver with outstanding computing properties. With it we carry out several sets of numerical experiments on the shock-bubble interaction problem. The bubble deformation and mixture formation are observed.

  11. Algorithm of lithography advanced process control system for high-mix low-volume products

    NASA Astrophysics Data System (ADS)

    Kawamura, Eiichi

    2007-03-01

    We have proposed a new algorithm for a lithography advanced process control system for high-mix low-volume production. This algorithm works well for the first lot of a new device introduced into the production line, or the first lot of an existing device to be exposed with a newly introduced exposure tool. The algorithm consists of 1) searching for the most suitable trend among similar devices, using an attribute table and a look-up table that defines the search priority order, and 2) correcting for the differences between the two devices to decide optimum exposure conditions. The attribute table categorizes same layers across different devices and similar layers within a device. The look-up table describes the order of the search keys. To attain a cost-effective process control system, information useful for compensating the referenced trend is compiled into the database.
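
    The search-then-correct flow can be mocked up with ordinary dictionaries; the table contents, rule names and offset values below are all invented stand-ins for the fab's actual tables.

```python
attribute_table = {            # (device, layer) -> attribute category
    ("devA", "gate"): "poly-1", ("devB", "gate"): "poly-1",
    ("devA", "metal1"): "m1",   ("devC", "gate"): "poly-2",
}
priority = ["same_device_similar_layer", "same_layer_other_device"]
trend_db = {("devB", "gate"): {"dose": 31.2, "focus": -0.05}}   # history
offsets = {("devB", "devA"): {"dose": +0.4, "focus": 0.0}}      # known deltas

def estimate_conditions(device, layer):
    cat = attribute_table[(device, layer)]
    for rule in priority:      # walk the look-up table in priority order
        for (dev, lay), trend in trend_db.items():
            # the first rule would match trends of the same device; none exist
            # in this toy database, so the second rule fires instead
            same_cat = attribute_table.get((dev, lay)) == cat
            if rule == "same_layer_other_device" and lay == layer and same_cat:
                corr = offsets.get((dev, device), {})
                return {k: v + corr.get(k, 0.0) for k, v in trend.items()}
    return None                # fall back to a default recipe

print(estimate_conditions("devA", "gate"))   # -> dose 31.6, focus -0.05
```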

  12. Advances in Analytical and Numerical Dispersion Modeling of Pollutants Releasing from an Area-source

    NASA Astrophysics Data System (ADS)

    Nimmatoori, Praneeth

    The air quality near agricultural activities such as tilling, plowing, harvesting, and manure application is of main concern because these activities release fine particulate matter into the atmosphere. Such releases are modeled as area sources in air quality modeling research. None of the currently available dispersion models relates and incorporates physical characteristics and meteorological conditions for modeling the dispersion and deposition of particulates emitted from such area sources. This knowledge gap was addressed by developing advanced analytical and numerical methods for modeling the dispersion of particulate matter. The development, application, and evaluation of the new dispersion modeling methods are discussed in detail in this dissertation. In the analytical modeling, a ground-level area-source analytical dispersion model known as particulate matter deposition -- PMD -- was developed for predicting the concentrations of different particle sizes. Both the particle dynamics (particle physical characteristics) and the meteorological conditions, which have a significant effect on the dispersion of particulates, were related and incorporated in the PMD model using formulations of particle gravitational settling and dry deposition velocities. The modeled particle size concentrations of the PMD model were evaluated statistically after applying it to particulates released from a biosolid-applied agricultural field. The evaluation of the PMD model using the statistical criteria confirmed the effective and successful inclusion of dry deposition theory for modeling particulate matter concentrations. A comprehensive review of analytical area-source dispersion models, which do not account for dry deposition and treat pollutants as gases, was conducted and identified three models -- the Shear, the Parker, and the Smith. A statistical evaluation of these dispersion models was conducted after applying them to two different field data sets and the statistical results concluded that

  13. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

    NASA Technical Reports Server (NTRS)

    Spratlin, Kenneth Milton

    1987-01-01

    An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired, so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The motivation for and design of the guidance algorithm are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented, along with the achievable operational footprint for expected worst-case dispersions. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
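
    As a rough illustration of the predictor-corrector idea (not the thesis algorithm), the sketch below integrates an invented point-mass deceleration model to predict downrange distance for a given bank command, then applies a secant correction until the predicted miss is within tolerance:

```python
# Generic numeric predictor-corrector guidance loop on a toy problem (the
# deceleration model and every constant are invented): predict downrange
# distance by integrating the dynamics under the current bank command, then
# correct the command with a secant iteration to null the predicted miss.
import numpy as np

def predict_range(bank_cmd, v0=7500.0, dt=0.1):
    """Predictor: integrate a fake point-mass model until 'landing' (v slow)."""
    x, v = 0.0, v0
    while v > 300.0:
        drag = 4e-5 * v**2                          # invented drag deceleration
        lift = 2e-5 * v**2 * np.cos(bank_cmd)       # invented range-stretching term
        v -= (drag - 0.1 * lift) * dt
        x += v * dt
    return x

def guide(target_range, cmd=0.5, tol=100.0):
    """Corrector: secant iteration on the bank command until the miss is small."""
    prev_cmd = cmd + 0.1
    prev_miss = predict_range(prev_cmd) - target_range
    for _ in range(50):
        miss = predict_range(cmd) - target_range
        if abs(miss) < tol:
            break
        cmd, prev_cmd, prev_miss = (
            cmd - miss * (cmd - prev_cmd) / (miss - prev_miss), cmd, miss)
    return cmd

print(guide(target_range=8.0e4))   # bank command (rad) that flies ~80 km
```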

  14. Utilization of advanced clutter suppression algorithms for improved standoff detection and identification of radionuclide threats

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Shokhirev, Kirill; Mulhall, Phil; Payne, David; Harris, Bernard

    2014-05-01

    Technology development efforts seek to increase the capability of detection systems in the low signal-to-noise (SNR) regimes encountered in both portal and urban detection applications. We have recently demonstrated significant performance enhancement in existing Advanced Spectroscopic Portals (ASP), Standoff Radiation Detection Systems (SORDS), and handheld isotope identifiers through the use of new advanced detection and identification algorithms. The Poisson Clutter Split (PCS) algorithm is a novel approach for radiological background estimation that improves the detection and discrimination capability of medium resolution detectors. The algorithm processes energy spectra and performs clutter suppression, yielding de-noised gamma-ray spectra that enable significant enhancements in the detection and identification of low-activity threats with spectral target recognition algorithms. The performance is achievable at the short integration times (0.5-1 second) necessary for operation in a high-throughput and dynamic environment. PCS has been integrated with ASP, SORDS, and RIID units and evaluated in field trials. We present a quantitative analysis of algorithm performance against data collected by a range of systems in several cluttered environments (urban and containerized) with embedded check sources. We show that the algorithm achieves a high probability of detection/identification with low false alarm rates under low SNR regimes. For example, utilizing only 4 out of the 12 NaI detectors currently available within an ASP unit, PCS processing demonstrated a probability of detection and identification above 90% at a constant false alarm rate (CFAR) of 1 in 1000 occupancies against weak-activity (7-8 μCi) and shielded sources traveling through the portal at 30 mph. This vehicle speed is a factor of 6 higher than was previously possible and results in a significant increase in system throughput and overall performance.

  15. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions present in the point model. Extracting and utilizing higher order moments (quads and pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which can correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons produced within an item are created with the same energy distribution. This report discusses the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
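
    A heavily simplified illustration of where the higher order moments come from (ignoring dead time and the triggered-gate subtleties of real multiplicity electronics, which are the report's actual subject): divide the pulse train into fixed coincidence gates, histogram the counts per gate, and form reduced factorial moments, whose orders beyond the third correspond to the quads and pents mentioned above.

```python
# Simplified gate-and-count illustration (not the report's dead-time-corrected
# algorithms): factorial moments of the neutron multiplicity per fixed gate.
import numpy as np
from math import comb

def factorial_moments(timestamps, gate_width, max_order=5):
    """Reduced factorial moments m_r = <C(n, r)> of the counts n per gate."""
    gates = np.floor(np.asarray(timestamps) / gate_width).astype(int)
    counts = np.bincount(gates)          # neutron multiplicity in each gate
    hist = np.bincount(counts)           # how many gates saw exactly n counts
    total = hist.sum()
    return [sum(h * comb(n, r) for n, h in enumerate(hist)) / total
            for r in range(1, max_order + 1)]

# Synthetic uncorrelated (Poisson-like) pulse train, 1000 s at ~50 counts/s;
# real fission data would show excess correlated doubles/triples/quads/pents.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, size=50_000))
print(factorial_moments(t, gate_width=512e-6))
```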

  16. Numerical Study of Three-Dimensional Flows Using Unfactored Upwind-Relaxation Sweeping Algorithm

    NASA Astrophysics Data System (ADS)

    Zha, G.-C.; Bilgen, E.

    1996-05-01

    The linear stability analysis of the unfactored upwind relaxation-sweeping (URS) algorithm for 3D flow field calculations has been carried out, and it is shown that the URS algorithm is unconditionally stable. The algorithm is independent of the choice of global sweeping direction; however, choosing the direction with a relatively low variable gradient as the global sweeping direction results in a higher degree of stability. Three-dimensional compressible Euler equations are solved using the implicit URS algorithm to study internal flows in a non-axisymmetric nozzle with a circular-to-rectangular transition duct, as well as complex shock wave structures in a 3D channel flow. The efficiency and robustness of the URS algorithm have been demonstrated.

  17. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  18. Numerical study of Alfvén eigenmodes in the Experimental Advanced Superconducting Tokamak

    SciTech Connect

    Hu, Youjun; Li, Guoqiang; Yang, Wenjun; Zhou, Deng; Ren, Qilong; Gorelenkov, N. N.; Cai, Huishan

    2014-05-15

    Alfvén eigenmodes in up-down asymmetric tokamak equilibria are studied by a new magnetohydrodynamic eigenvalue code. The code is verified against the NOVA code for the Solov'ev equilibrium and then used to study Alfvén eigenmodes in an up-down asymmetric equilibrium of the Experimental Advanced Superconducting Tokamak. The frequency and mode structure of toroidicity-induced Alfvén eigenmodes are calculated. It is demonstrated numerically that up-down asymmetry induces phase variation in the eigenfunction across the major radius on the midplane.

  19. Simulation studies of the impact of advanced observing systems on numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Kalnay, E.; Susskind, J.; Reuter, D.; Baker, W. E.; Halem, M.

    1984-01-01

    To study the potential impact of advanced passive sounders and lidar temperature, pressure, humidity, and wind observing systems on large-scale numerical weather prediction, a series of realistic simulation studies is conducted jointly by the European Centre for Medium-Range Weather Forecasts, the National Meteorological Center, and the Goddard Laboratory for Atmospheric Sciences. The project attempts to avoid the unrealistic character of earlier simulation studies. The previous simulation studies and real-data impact tests are reviewed, and the design of the current simulation system is described. Consideration is given to the simulation of observations from space-based sounding systems.

  20. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
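
    Since the procedural cloud work was left unfinished, the sketch below shows one standard approach such an algorithm could take (an assumption on our part, not the IGOAL design): several octaves of bilinearly interpolated value noise summed into fractal Brownian motion and softly thresholded into a cloud-density texture.

```python
# One common procedural-cloud recipe: sum octaves of smoothed value noise
# (fractal Brownian motion), then soft-threshold into a cloud density map.
import numpy as np

def value_noise(shape, cells, rng):
    """Bilinearly interpolated random lattice values -> one noise octave."""
    lattice = rng.random((cells + 1, cells + 1))
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    y *= cells / shape[0]; x *= cells / shape[1]
    y0, x0 = y.astype(int), x.astype(int)
    fy, fx = y - y0, x - x0
    fy, fx = fy * fy * (3 - 2 * fy), fx * fx * (3 - 2 * fx)  # smoothstep fade
    v00 = lattice[y0, x0];     v01 = lattice[y0, x0 + 1]
    v10 = lattice[y0 + 1, x0]; v11 = lattice[y0 + 1, x0 + 1]
    return (v00 * (1 - fx) + v01 * fx) * (1 - fy) + (v10 * (1 - fx) + v11 * fx) * fy

def cloud_texture(size=256, octaves=5, seed=7):
    """Fractal Brownian motion: halve amplitude, double frequency per octave."""
    rng = np.random.default_rng(seed)
    fbm = sum(0.5**o * value_noise((size, size), 4 * 2**o, rng) for o in range(octaves))
    fbm /= sum(0.5**o for o in range(octaves))
    return np.clip((fbm - 0.45) / 0.25, 0.0, 1.0)   # soft threshold -> density

tex = cloud_texture()
print(tex.shape, float(tex.min()), float(tex.max()))
```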

  1. Evaluation of an Area-Based matching algorithm with advanced shape models

    NASA Astrophysics Data System (ADS)

    Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.

    2014-04-01

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new algorithms for image matching (e.g., semi-global matching) that yield superior results with lower processing times, especially because they usually produce smooth and continuous surfaces, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and automating the process. Important changes have been made to the correlation algorithm, while maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.
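
    The core LSM iteration can be sketched in a stripped-down, translation-only form (the DM implementation estimates richer shape functions and radiometric parameters): linearize the intensity residual around the current shift and solve the resulting small least-squares problem repeatedly.

```python
# Translation-only least squares matching sketch (Gauss-Newton): solve the
# 2-parameter normal equations for the shift aligning a patch to a template.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as im_shift

def lsm_translation(template, search, iters=20):
    """Estimate the (dy, dx) shift that warps `search` onto `template`."""
    p = np.zeros(2)
    for _ in range(iters):
        warped = im_shift(search, -p, order=1, mode="nearest")
        gy, gx = np.gradient(warped)               # linearization gradients
        r = (template - warped).ravel()            # intensity residual
        A = np.stack([gy.ravel(), gx.ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(A, r, rcond=None) # least-squares update
        p += dp
        if np.linalg.norm(dp) < 1e-4:
            break
    return p

rng = np.random.default_rng(1)
img = gaussian_filter(rng.random((64, 64)), 3.0)   # smooth synthetic texture
moved = im_shift(img, (1.7, -2.3), order=1, mode="nearest")
print(lsm_translation(img, moved))                 # expect approx. (1.7, -2.3)
```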

  2. NUMERICAL ALGORITHMS AT NON-ZERO CHEMICAL POTENTIAL. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 19

    SciTech Connect

    BLUM,T.

    1999-09-14

    The RIKEN BNL Research Center hosted its 19th workshop April 27 through May 1, 1999. The topic was Numerical Algorithms at Non-Zero Chemical Potential. QCD at a non-zero chemical potential (non-zero density) poses a long-standing unsolved challenge for lattice gauge theory; indeed, it is the primary unresolved issue in the fundamental formulation of lattice gauge theory. The chemical potential renders conventional lattice actions complex, practically excluding the usual Monte Carlo techniques, which rely on a positive definite measure for the partition function. This "sign" problem appears in a wide range of physical systems, ranging from strongly coupled electronic systems to QCD. The lack of a viable numerical technique at non-zero density is particularly acute since new exotic "color superconducting" phases of quark matter have recently been predicted in model calculations. A first-principles confirmation of the phase diagram is desirable since experimental verification is not expected soon. At the workshop several proposals for new algorithms were made: cluster algorithms, direct simulation of Grassmann variables, and a bosonization of the fermion determinant. All generated considerable discussion and seem worthy of continued investigation. Several interesting results using conventional algorithms were also presented: condensates in four-fermion models, SU(2) gauge theory in fundamental and adjoint representations, and lessons learned from strong coupling, non-zero temperature, and heavy quarks applied to non-zero density simulations.

  3. Application of two oriented partial differential equation filtering models on speckle fringes with poor quality and their numerically fast algorithms.

    PubMed

    Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng

    2013-03-20

    In this paper, we are concerned with denoising of experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns with poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single oriented PDE filter and the double oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns of very poor quality, and compare them with other efficient filtering methods, including the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have been shown to be efficient denoising methods in previous comparative analyses in published papers. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. To remedy the main shortcoming of the two oriented PDE models, their computational cost, we develop numerically fast algorithms for them based on a Gauss-Seidel strategy. The proposed numerical algorithms accelerate the convergence greatly and perform significantly better in terms of computational efficiency. Our numerically fast algorithms extend automatically to some other PDE filtering models. PMID:23518722
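
    The Gauss-Seidel idea can be shown on the simplest relative of these filters, plain isotropic diffusion (the oriented models add direction-dependent coefficients): an implicit step (I - tau*Laplacian)u = f is solved by in-place sweeps that reuse freshly updated neighbors, which is what accelerates convergence compared with many small explicit steps.

```python
# Gauss-Seidel sweeps for one implicit isotropic diffusion step,
# (I - tau*Laplacian) u = f, as a stand-in for the paper's oriented PDE models.
import numpy as np

def gauss_seidel_diffusion(f, tau=2.0, sweeps=50):
    u = f.copy()
    for _ in range(sweeps):
        # in-place sweep: already-updated neighbors are used immediately
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = (f[i, j] + tau * (u[i - 1, j] + u[i + 1, j]
                                            + u[i, j - 1] + u[i, j + 1])) / (1 + 4 * tau)
    return u

rng = np.random.default_rng(2)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.sin(np.linspace(0, np.pi, 64)))
noisy = clean + 0.3 * rng.standard_normal((64, 64))
den = gauss_seidel_diffusion(noisy)
print("error std before/after:", float((noisy - clean).std()), float((den - clean).std()))
```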

  4. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  5. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  6. A review on recent advances in the numerical simulation for coalbed-methane-recovery process

    SciTech Connect

    Wei, X.R.; Wang, G.X.; Massarotto, P.; Golding, S.D.; Rudolph, V.

    2007-12-15

    The recent advances in numerical simulation for primary coalbed methane (CBM) recovery and enhanced coalbed methane recovery (ECBMR) processes are reviewed, primarily focusing on the progress that has occurred since the late 1980s. Two major issues regarding the numerical modeling are discussed in this review: first, multicomponent gas transport in in-situ bulk coal and, second, changes of coal properties during methane (CH₄) production. For the former issue, a detailed review of more recent advances in modeling gas and water transport within a coal matrix is presented. Various factors influencing gas diffusion through the coal matrix are highlighted as well, such as pore structure, concentration and pressure, and water effects. An ongoing bottleneck in evaluating the total mass transport rate is developing a reasonable representation of the multiscale pore space that considers coal type and rank. Moreover, few efforts have been concerned with modeling water-flow behavior in the coal matrix and its effects on CH₄ production and on the exchange of carbon dioxide (CO₂) and CH₄. As for the second issue, theoretical coupled fluid-flow and geomechanical models have been proposed to describe the evolution of the pore structure during CH₄ production, instead of traditional empirical equations. However, there is currently no effective coupled model for engineering applications. Finally, perspectives on developing suitable simulation models for CBM production and for predicting CO₂-sequestration ECBMR are suggested.

  7. Advances in methods and algorithms in a modern quantum chemistry program package.

    PubMed

    Shao, Yihan; Molnar, Laszlo Fusti; Jung, Yousung; Kussmann, Jörg; Ochsenfeld, Christian; Brown, Shawn T; Gilbert, Andrew T B; Slipchenko, Lyudmila V; Levchenko, Sergey V; O'Neill, Darragh P; DiStasio, Robert A; Lochan, Rohini C; Wang, Tao; Beran, Gregory J O; Besley, Nicholas A; Herbert, John M; Lin, Ching Yeh; Van Voorhis, Troy; Chien, Siu Hung; Sodt, Alex; Steele, Ryan P; Rassolov, Vitaly A; Maslen, Paul E; Korambath, Prakashan P; Adamson, Ross D; Austin, Brian; Baker, Jon; Byrd, Edward F C; Dachsel, Holger; Doerksen, Robert J; Dreuw, Andreas; Dunietz, Barry D; Dutoi, Anthony D; Furlani, Thomas R; Gwaltney, Steven R; Heyden, Andreas; Hirata, So; Hsu, Chao-Ping; Kedziora, Gary; Khalliulin, Rustam Z; Klunzinger, Phil; Lee, Aaron M; Lee, Michael S; Liang, Wanzhen; Lotan, Itay; Nair, Nikhil; Peters, Baron; Proynov, Emil I; Pieniazek, Piotr A; Rhee, Young Min; Ritchie, Jim; Rosta, Edina; Sherrill, C David; Simmonett, Andrew C; Subotnik, Joseph E; Woodcock, H Lee; Zhang, Weimin; Bell, Alexis T; Chakraborty, Arup K; Chipman, Daniel M; Keil, Frerich J; Warshel, Arieh; Hehre, Warren J; Schaefer, Henry F; Kong, Jing; Krylov, Anna I; Gill, Peter M W; Head-Gordon, Martin

    2006-07-21

    Advances in theory and algorithms for electronic structure calculations must be incorporated into program packages to enable them to become routinely used by the broader chemical community. This work reviews advances made over the past five years or so that constitute the major improvements contained in a new release of the Q-Chem quantum chemistry package, together with illustrative timings and applications. Specific developments discussed include fast methods for density functional theory calculations, linear scaling evaluation of energies, NMR chemical shifts and electric properties, fast auxiliary basis function methods for correlated energies and gradients, equation-of-motion coupled cluster methods for ground and excited states, geminal wavefunctions, embedding methods and techniques for exploring potential energy surfaces. PMID:16902710

  8. An advanced numerical model for phase change problems in complicated geometries

    NASA Astrophysics Data System (ADS)

    Khashan, Saud Abdel-Aziz

    1998-11-01

    An advanced fixed-grid, enthalpy-formulation-based finite volume numerical method is developed to solve phase change problems in complicated geometries. The numerical method is based on a general non-orthogonal grid structure and a colocated arrangement of variables, with second-order discretizations and interpolations. The convergence rate is considerably accelerated by switching off the velocity in the solidified region in an implicit way. This switching-off technique has strong compatibility with SIMPLE-like methods. For all test cases conducted in this study, the rate of convergence using the new treatment exceeds that of the other enthalpy-formulation-based methods, with fewer numerical stability constraints, when used in convection-diffusion phase change problems. For better performance on vector computers, the incomplete LU decomposition (ILU) matrix solver is partially vectorized, raising the Mflops (million floating point operations per second) count from 60 to over 300. Water freezing in orthogonal and non-orthogonal geometries is studied under the effect of density inversion. All thermo-physical properties of the water are treated as temperature-dependent (no Boussinesq approximation). The results show a profound effect of density inversion on the flow and energy fields and on the local as well as the overall freezing rate.
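
    A bare-bones 1D, conduction-only version of the fixed-grid enthalpy idea is sketched below (the dissertation's solver is multidimensional and convective on non-orthogonal grids; the property values here are generic for water/ice): the enthalpy is marched explicitly and temperature is recovered from the enthalpy-temperature relation, so the phase front never needs explicit tracking.

```python
# 1D fixed-grid enthalpy method for a freezing (Stefan) problem, conduction only.
import numpy as np

L, c, k, rho = 334e3, 4200.0, 0.6, 1000.0   # latent heat, c_p, conductivity, density
Tm = 0.0                                     # melting temperature (deg C)

def temperature(H):
    """Invert the enthalpy-temperature relation (latent jump of width L)."""
    T = np.where(H < 0.0, H / c, Tm)         # solid branch
    return np.where(H > L, (H - L) / c, T)   # liquid branch; in between, T = Tm

n, dx = 100, 1e-3
dt = 0.2 * rho * c * dx**2 / k               # well inside explicit stability limit
H = np.full(n, L + c * 5.0)                  # liquid water at +5 deg C
H[0] = c * -20.0                             # cold fixed boundary drives freezing
for _ in range(20000):
    T = temperature(H)
    H[1:-1] += dt * k / (rho * dx**2) * (T[2:] - 2 * T[1:-1] + T[:-2])
print("frozen cells:", int((temperature(H) < Tm).sum()))
```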

  9. Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.

    2003-01-01

    A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent, supersonic, MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.

  10. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    SciTech Connect

    Balman, Mehmet; Kosar, Tevfik

    2010-05-20

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next-generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high-performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
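
    A toy version of the admission test at the heart of such a scheduler might look as follows (the data structures and numbers are invented, not the paper's design): track committed bandwidth per time slot on a link and accept a transfer only if some window before the user's deadline can carry the requested volume within capacity.

```python
# Toy advance-reservation admission test over hourly slots on a single link.
CAPACITY = 10.0                      # link capacity, Gbit/s
committed = [0.0] * 24               # already-reserved Gbit/s per hourly slot

def try_reserve(volume_gbit, earliest, deadline):
    """Earliest start first, then shortest window at that start."""
    for start in range(earliest, deadline):
        for end in range(start + 1, deadline + 1):
            rate = volume_gbit / ((end - start) * 3600.0)   # Gbit/s needed
            if all(committed[t] + rate <= CAPACITY for t in range(start, end)):
                for t in range(start, end):
                    committed[t] += rate                     # confirm reservation
                return (start, end, rate)
    return None                      # reject: completion cannot be guaranteed

print(try_reserve(180_000.0, earliest=2, deadline=8))   # fits: 10 Gbit/s, hours 2-7
print(try_reserve(400_000.0, earliest=2, deadline=8))   # rejected: exceeds capacity
```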

  11. Structure of the Gabor matrix and efficient numerical algorithms for discrete Gabor expansions

    NASA Astrophysics Data System (ADS)

    Qiu, Sigang; Feichtinger, Hans G.

    1994-09-01

    The standard way to obtain suitable coefficients for the (non-orthogonal) Gabor expansion of a general signal, for a given Gabor atom g and a pair of lattice constants in the (discrete) time/frequency plane, requires computing the dual Gabor window function g̃ first. In this paper, we present an explicit description of the sparsity and the block and banded structure of the Gabor frame matrix G. On this basis, efficient algorithms are developed for computing g̃ by solving the linear equation g̃ ∗ G = g with the conjugate-gradient method. Using the dual Gabor wavelet, a fast Gabor reconstruction algorithm with very low computational complexity is proposed.
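
    For a small signal length the computation can be sketched directly (the paper's contribution is exploiting the sparse, block and banded structure for efficiency; the dense version below only shows the principle): build the frame operator, whose frequency sum already makes it sparse, and solve for the dual window with a textbook conjugate-gradient iteration.

```python
# Dense sketch: dual Gabor window via conjugate gradients on the frame operator.
import numpy as np

def frame_operator(g, a, b):
    """S = sum of outer products of time-frequency shifted atoms (time step a,
    frequency step b in DFT bins). Only entries with (i - j) divisible by
    N // b survive the frequency sum, giving the sparsity the paper describes."""
    N = len(g)
    S = np.zeros((N, N), dtype=complex)
    n = np.arange(N)
    for kk in range(N // a):
        for ll in range(N // b):
            atom = np.roll(g, kk * a) * np.exp(2j * np.pi * ll * b * n / N)
            S += np.outer(atom, atom.conj())
    return S

def cg_hermitian(S, rhs, tol=1e-12, maxit=500):
    """Textbook CG for a Hermitian positive definite system S x = rhs."""
    x = np.zeros_like(rhs, dtype=complex)
    r = rhs - S @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(maxit):
        Sp = S @ p
        alpha = rs / np.vdot(p, Sp).real
        x += alpha * p
        r -= alpha * Sp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

N, a, b = 48, 4, 4                                       # redundancy N/(a*b) = 3
g = np.exp(-0.5 * ((np.arange(N) - N / 2) / 6.0) ** 2)   # Gaussian atom
gd = cg_hermitian(frame_operator(g, a, b), g.astype(complex))
print("dual window real up to rounding:", np.abs(gd.imag).max() < 1e-8)
```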

  12. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand-challenge-class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  13. Multicycle Optimization of Advanced Gas-Cooled Reactor Loading Patterns Using Genetic Algorithms

    SciTech Connect

    Ziver, A. Kemal; Carter, Jonathan N.; Pain, Christopher C.; Oliveira, Cassiano R.E. de; Goddard, Antony J. H.; Overton, Richard S.

    2003-02-15

    A genetic algorithm (GA)-based optimizer (GAOPT) has been developed for in-core fuel management of advanced gas-cooled reactors (AGRs) at HINKLEY B and HARTLEPOOL, which employ on-load and off-load refueling, respectively. The optimizer has been linked to the reactor analysis code PANTHER for the automated evaluation of loading patterns in a two-dimensional geometry, which is collapsed from the three-dimensional reactor model. GAOPT uses a directed stochastic (Monte Carlo) algorithm to generate initial population members, within predetermined constraints, for use in GAs, which apply the standard genetic operators: selection by tournament, crossover, and mutation. The GAOPT is able to generate and optimize loading patterns for successive reactor cycles (multicycle) within acceptable CPU times even on single-processor systems. The algorithm allows radial shuffling of fuel assemblies in a multicycle refueling optimization, which is constructed to aid long-term core management planning decisions. This paper presents the application of the GA-based optimization to two AGR stations, which apply different in-core management operational rules. Results obtained from the testing of GAOPT are discussed.
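
    The genetic operators named above are standard and easy to sketch. The skeleton below uses a permutation encoding as a stand-in for a loading pattern, with a made-up placeholder fitness where a real optimizer would invoke the reactor code (PANTHER):

```python
# GA skeleton with tournament selection, order crossover, and swap mutation
# on a permutation encoding (a stand-in for a fuel loading pattern).
import random

random.seed(0)
N_ASSEMBLIES = 20

def fitness(pattern):
    """Placeholder objective; real use would score the pattern with PANTHER."""
    return -sum(abs(pattern[i] - pattern[i - 1]) for i in range(1, len(pattern)))

def tournament(pop, k=3):
    """Selection by tournament: best of k random individuals."""
    return max(random.sample(pop, k), key=fitness)

def order_crossover(p1, p2):
    """OX: keep a slice of p1, fill the rest in p2's order (permutation-safe)."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child[i:j]]
    child[:i], child[j:] = rest[:i], rest[i:]
    return child

def mutate(pattern, rate=0.2):
    """Swap two fuel positions with small probability (radial shuffle)."""
    if random.random() < rate:
        i, j = random.sample(range(len(pattern)), 2)
        pattern[i], pattern[j] = pattern[j], pattern[i]
    return pattern

pop = [random.sample(range(N_ASSEMBLIES), N_ASSEMBLIES) for _ in range(50)]
for gen in range(100):
    pop = [mutate(order_crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]
print("best fitness:", fitness(max(pop, key=fitness)))
```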

  14. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    The performance of a telescope with a segmented primary mirror depends strongly on how well the primary mirror segments can be phased. Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. One process for phasing primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS), essentially a signal fitting and processing operation and an elegant method for coarse phasing of segmented mirrors. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical, and DFS technology can be used to co-phase the ACFs. DFS accuracy depends on careful calibration of the system as well as on other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce sensitivity to calibration errors by determining the optimal fringe extraction line; applying an angular extraction-line dithering procedure and combining it with an error function, while minimizing the phase term of the fitted signal, defines in essence the ADFS algorithm.

  15. A numerical algorithm suggested by problems of transport in periodic media - The matrix case.

    NASA Technical Reports Server (NTRS)

    Allen, R. C., Jr.; Burgmeier, J. W.; Mundorff, P.; Wing, G. M.

    1972-01-01

    Extension of Allen and Wing's (1970) previous work on problems of transport in periodic media to the matrix case. A method in the form of a complete set of equations is presented that may be used without any further analytical work by investigators interested in computing solutions to problems of the type the method is designed to handle. All the formulas have been checked out numerically, and their effectiveness is demonstrated by numerical examples.

  16. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    NASA Astrophysics Data System (ADS)

    Bor, E.; Turduev, M.; Kurt, H.

    2016-08-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
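
    The optimization engine behind such a design is the classic differential evolution loop; a minimal DE/rand/1/bin sketch follows (an assumption about the exact variant, since the paper couples the algorithm to FDTD simulations and scores the focal spot, for which a simple test function stands in here):

```python
# Minimal DE/rand/1/bin: mutation a + F*(b - c), binomial crossover, greedy selection.
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, F=0.6, CR=0.9,
                           iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            tc = objective(trial)
            if tc <= cost[i]:                           # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# Stand-in objective: sphere function (a real run would score scatterer
# positions by the simulated beam's FWHM and side-lobe level).
x, c = differential_evolution(lambda x: float(np.sum(x**2)),
                              (np.full(8, -5.0), np.full(8, 5.0)))
print(c)
```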

  17. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light.

    PubMed

    Bor, E; Turduev, M; Kurt, H

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  18. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    PubMed Central

    Bor, E.; Turduev, M.; Kurt, H.

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  19. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-03-01

    A new algorithm, Acuros® XB Advanced Dose Calculation, has been introduced by Varian Medical Systems in the Eclipse planning system for photon dose calculation in external radiotherapy. Acuros XB is based on the solution of the linear Boltzmann transport equation (LBTE). The LBTE describes the macroscopic behaviour of radiation particles as they travel through and interact with matter. The implementation of Acuros XB in Eclipse has not been assessed; therefore, it is necessary to perform these pre-clinical validation tests to determine its accuracy. This paper summarizes the results of comparisons of Acuros XB calculations against measurements and calculations performed with a previously validated dose calculation algorithm, the Anisotropic Analytical Algorithm (AAA). The tasks addressed in this paper are limited to the fundamental characterization of Acuros XB in water for simple geometries. Validation was carried out for four different beams: 6 and 15 MV beams from a Varian Clinac 2100 iX, and 6 and 10 MV 'flattening filter free' (FFF) beams from a TrueBeam linear accelerator. The TrueBeam FFF are new beams recently introduced in clinical practice on general purpose linear accelerators and have not been previously reported on. Results indicate that Acuros XB accurately reproduces measured and calculated (with AAA) data and only small deviations were observed for all the investigated quantities. In general, the overall degree of accuracy for Acuros XB in simple geometries can be stated to be within 1% for open beams and within 2% for mechanical wedges. The basic validation of the Acuros XB algorithm was therefore considered satisfactory for both conventional photon beams as well as for FFF beams of new generation linacs such as the Varian TrueBeam.

  20. Numerical approach for the voloxidation process of an advanced spent fuel conditioning process (ACP)

    SciTech Connect

    Park, Byung Heung; Jeong, Sang Mun; Seo, Chung-Seok

    2007-07-01

    A voloxidation process is adopted as the first step of an advanced spent fuel conditioning process in order to prepare the spent fuel (SF) oxide for reduction in the following electrolytic reduction process. A semi-batch type voloxidizer was devised to transform an SF pellet into powder. In this work, a simple reactor model was developed as a numerical approach for correlating the gas-phase flow rate with the operation time. Assuming that the solid and gas phases are homogeneous in the reactor, an oxidation reaction rate was introduced into a mass balance equation. The resulting equation can describe the change of the outlet oxygen concentration, including the case in which the gas flow is not sufficient to sustain the reaction at its maximum rate. (authors)
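
    The kind of lumped, homogeneous mass balance described above can be sketched as follows (every parameter value is invented for illustration, and the kinetics are reduced to a first-order law): oxygen is consumed at the kinetic rate unless the gas flow cannot supply that much, in which case the reaction becomes flow-limited, and the outlet concentration follows from the balance.

```python
# Lumped semi-batch mass balance sketch: oxidation of UO2 powder by flowing air,
# with the oxygen consumption capped by either kinetics or gas-phase supply.
y_in = 0.21            # inlet O2 mole fraction (air)
Q = 2.0e-4             # gas molar flow rate, mol/s (invented)
unreacted = 10.0       # mol UO2 initially (invented)
k = 5.0e-5             # first-order rate constant, 1/s (invented)
nu = 1.0 / 3.0         # mol O2 per mol UO2 (3 UO2 + O2 -> U3O8)

dt, t = 1.0, 0.0
while unreacted > 1e-3:
    kinetic_rate = k * unreacted * nu             # O2 demand from kinetics, mol/s
    supply_rate = Q * y_in                        # O2 the gas stream delivers
    o2_consumed = min(kinetic_rate, supply_rate)  # flow-limited if supply < demand
    y_out = (Q * y_in - o2_consumed) / Q          # homogeneous-reactor outlet fraction
    unreacted -= o2_consumed / nu * dt
    t += dt
print(f"operation time ~ {t / 3600:.1f} h, final outlet O2 fraction = {y_out:.3f}")
```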

  1. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception of a product and typically consist of a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaged in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as the values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  2. Finite-element algorithm for radiative transfer in vertically inhomogeneous media: numerical scheme and applications.

    PubMed

    Kisselev, V B; Roberti, L; Perona, G

    1995-12-20

    The recently developed finite-element method for solution of the radiative transfer equation has been extended to compute the full azimuthal dependence of the radiance in a vertically inhomogeneous plane-parallel medium. The physical processes that are included in the algorithm are multiple scattering and bottom boundary bidirectional reflectivity. The incident radiation is a parallel flux on the top boundary that is characteristic for illumination of the atmosphere by the Sun in the UV, visible, and near-infrared regions of the electromagnetic spectrum. The theoretical basis is presented together with a number of applications to realistic atmospheres. The method is shown to be accurate even with a low number of grid points for most of the considered situations. The FORTRAN code for this algorithm is developed and is available for applications. PMID:21068966

  3. Finite-element algorithm for radiative transfer in vertically inhomogeneous media: numerical scheme and applications

    NASA Astrophysics Data System (ADS)

    Kisselev, Viatcheslav B.; Roberti, Laura; Perona, Giovanni

    1995-12-01

    The recently developed finite-element method for solution of the radiative transfer equation has been extended to compute the full azimuthal dependence of the radiance in a vertically inhomogeneous plane-parallel medium. The physical processes that are included in the algorithm are multiple scattering and bottom boundary bidirectional reflectivity. The incident radiation is a parallel flux on the top boundary that is characteristic for illumination of the atmosphere by the Sun in the UV, visible, and near-infrared regions of the electromagnetic spectrum. The theoretical basis is presented together with a number of applications to realistic atmospheres. The method is shown to be accurate even with a low number of grid points for most of the considered situations. The fortran code for this algorithm is developed and is available for applications.

  4. Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms

    NASA Astrophysics Data System (ADS)

    Gorobets, A. V.

    2015-04-01

    A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining several types of parallelism: shared and distributed memory, and both multiple and single instruction streams operating on multiple data flows (MIMD and SIMD).

  5. Numerical solutions of the reaction diffusion system by using exponential cubic B-spline collocation algorithms

    NASA Astrophysics Data System (ADS)

    Ersoy, Ozlem; Dag, Idris

    2015-12-01

    The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential cubic B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, whose solution is carried out by way of the Thomas algorithm. The present methods are tested on both linear and nonlinear problems. The results are documented and compared with some earlier studies using the L∞ and relative error norms for the respective problems.
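
    The Thomas algorithm referenced above is the standard O(n) forward-elimination/back-substitution recursion for tridiagonal systems such as those produced by B-spline collocation at each time level; a textbook version is sketched below for reference.

```python
# Thomas algorithm: O(n) solver for tridiagonal systems, checked against numpy.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve.
n = 6
a = np.r_[0.0, np.full(n - 1, -1.0)]         # a[0] unused
b = np.full(n, 4.0)
c = np.r_[np.full(n - 1, -1.0), 0.0]         # c[-1] unused
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```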

  6. Scanning of wind turbine upwind conditions: numerical algorithm and first applications

    NASA Astrophysics Data System (ADS)

    Calaf, Marc; Cortina, Gerard; Sharma, Varun; Parlange, Marc B.

    2014-11-01

    Wind turbines still obtain in-situ meteorological information by means of traditional wind vanes and cup anemometers installed at the turbine's nacelle, right behind the blades. This has two important drawbacks: 1) turbine misalignment with the mean wind direction is common and causes energy losses; and 2) near-blade monitoring leaves no time to readjust the wind turbine to incoming turbulence gusts. A solution is to install wind lidar devices on the turbine's nacelle. This technique is currently under development as an alternative to traditional in-situ wind anemometry because it can measure the wind vector at substantial distances upwind. However, at what upwind distance should it interrogate the atmosphere? A new flexible wind turbine algorithm for large eddy simulations of wind farms that allows answering this question will be presented. The new wind turbine algorithm corrects the turbines' yaw misalignment with the changing wind in a timely manner. The upwind scanning flexibility of the algorithm also allows tracking of the wind vector and turbulent kinetic energy as they approach the wind turbine's rotor blades. Results will illustrate the spatiotemporal evolution of the wind vector and the turbulent kinetic energy as the incoming flow approaches the wind turbine under different atmospheric stability conditions. Results will also show that the available atmospheric wind power is larger during daytime periods, at the cost of an increased variance.

  7. The role of numerical simulation for the development of an advanced HIFU system

    NASA Astrophysics Data System (ADS)

    Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro

    2014-10-01

    High-intensity focused ultrasound (HIFU) has been used clinically and is under clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment instead of magnetic resonance imaging in current HIFU systems. A HIFU beam imaging for monitoring the HIFU beam and a localized motion imaging for treatment validation of tissue are introduced briefly as the real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring as well as the improvement of the safety and efficacy of treatment in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body in consideration of the elasticity of tissue, and was validated by comparison with in vitro experiments in which the ultrasound emitted from the phased-array transducer propagates through the acrylic plate acting as a bone phantom. As the result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with that in the experimental results. Therefore, the HIFU simulator accurately reproduces the ultrasound propagation through the medium whose shape and physical properties are well known. In addition, it is experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can contribute to design of transducers and treatment planning.

  8. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  9. On the modeling of equilibrium twin interfaces in a single-crystalline magnetic shape memory alloy sample. II: numerical algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Steinmann, Paul

    2016-05-01

    This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.

  10. A novel feedback algorithm for simulating controlled dynamics and confinement in the advanced reversed-field pinch

    SciTech Connect

    Dahlin, J.-E.; Scheffel, J.

    2005-06-15

    In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities. Thus the scaling of energy confinement time with plasma current and density is improved substantially as compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing mode stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a formation, being related to persisting resistive g modes, appears close to the reversal surface.

  11. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    During the last years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched the CleanSeaNet (CSN) project, a pan-European satellite-based oil monitoring service, in 2007. EDISOFT, a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors, and the development of advanced image processing, segmentation, and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, such as wind information, together with image and geometry analysis techniques. The synergy between these different objectives (R&D versus operational) has allowed EDISOFT to develop oil spill detection software that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST

  12. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  13. An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization

    PubMed Central

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating them. The HS/FA method is verified on various benchmarks. In the experiments, HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137

  14. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.

  15. Coupled thermal-mechanical-transformation numerical simulation of hot stamping with a static explicit algorithm

    NASA Astrophysics Data System (ADS)

    Hu, P.; Shi, D. Y.; Ying, L.; Shen, G. Z.; Chang, Y.; Liu, W. Q.

    2013-05-01

    A coupled thermal-mechanical-transformation theoretical model for hot stamping and the rheological behavior of high-strength steel at elevated temperatures were obtained through non-isothermal and isothermal tensile tests, respectively, in this work. The static explicit finite element equations for hot stamping were derived based on the coupled thermal-mechanical-transformation constitutive laws and nonlinear, large-deformation analysis. Based on these equations, the hot stamping module of KMAS (King Mesh Analysis System) was developed for the numerical simulation of sheet metal forming at elevated temperatures. The hot stamping simulation of a typical B-pillar conducted with the KMAS software was then compared to experiment in terms of temperature distribution, thickness distribution, and martensite fraction. The good agreement between numerical simulation and experiment confirms that the multi-field coupled constitutive laws and the KMAS software can predict the hot stamping process accurately.

  16. X-Ray Dose Reduction in Abdominal Computed Tomography Using Advanced Iterative Reconstruction Algorithms

    PubMed Central

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    Objective This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. Methods CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. Results At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9%, respectively, when compared with FBP. Conclusions Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively. PMID:24664174
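
    Contrast-to-noise ratio, as measured here, is just the ROI mean difference divided by the image noise. A minimal computation with invented HU values:

    ```python
    import numpy as np

    roi_lesion = np.array([48.0, 52.0, 50.0, 49.0, 51.0])       # illustrative HU samples
    roi_background = np.array([20.0, 22.0, 18.0, 21.0, 19.0])
    noise = roi_background.std(ddof=1)        # image noise from the background ROI
    cnr = abs(roi_lesion.mean() - roi_background.mean()) / noise
    print(f"CNR = {cnr:.1f}")
    ```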

  17. World Wide Web interface for advanced SPECT reconstruction algorithms implemented on a remote massively parallel computer.

    PubMed

    Formiconi, A R; Passeri, A; Guelfi, M R; Masoni, M; Pupi, A; Meldolesi, U; Malfetti, P; Calori, L; Guidazzoli, A

    1997-11-01

    Data from Single Photon Emission Computed Tomography (SPECT) studies are blurred by inevitable physical phenomena occurring during data acquisition. These errors may be compensated by means of reconstruction algorithms which take into account accurate physical models of the data acquisition procedure. Unfortunately, this approach involves high memory requirements as well as a high computational burden which cannot be afforded by the computer systems of SPECT acquisition devices. In this work the possibility of accessing High Performance Computing and Networking (HPCN) resources through a World Wide Web interface for the advanced reconstruction of SPECT data in a clinical environment was investigated. An iterative algorithm with an accurate model of the variable system response was ported on the Multiple Instruction Multiple Data (MIMD) parallel architecture of a Cray T3D massively parallel computer. The system was accessible even from low cost PC-based workstations through standard TCP/IP networking. A speedup factor of 148 was predicted by the benchmarks run on the Cray T3D. A complete brain study of 30 (64 x 64) slices was reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, which corresponds to an actual speedup factor of 135. The technique was extended to a more accurate 3D modeling of the system response for a true 3D reconstruction of SPECT data; the reconstruction time of the same data set with this more accurate model was 5 min. This work demonstrates the possibility of exploiting remote HPCN resources from hospital sites by means of low cost workstations using standard communication protocols and a user-friendly WWW interface without particular problems for routine use. PMID:9506406
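
    At its core, the reconstruction loop is a handful of conjugate-gradient iterations against an accurate system model. The sketch below shows the generic pattern with CGLS (CG applied to the normal equations) on a random stand-in for the projection matrix; it is not the authors' ported code.

    ```python
    import numpy as np

    def cgls(A, b, iters=10):
        """CG on the normal equations A^T A x = A^T b."""
        x = np.zeros(A.shape[1])
        r = b - A @ x
        s = A.T @ r
        p = s.copy()
        gamma = s @ s
        for _ in range(iters):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x

    rng = np.random.default_rng(0)
    A = rng.random((90, 64))                 # stand-in for a 90-projection system model
    x_true = rng.random(64)
    b = A @ x_true + 0.01 * rng.standard_normal(90)
    x = cgls(A, b, iters=10)                 # ten iterations, as in the reported study
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```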

  18. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluation of most off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces reduces asymptotically to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.
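
    The sparsification step can be illustrated with a simpler orthogonal wavelet: transform a smooth kernel matrix into an orthonormal Haar basis (a stand-in for the Coiflets used here), threshold the small entries, and check that the sparse operator still reproduces the original matrix action.

    ```python
    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar transform for n a power of two (recursive construction)."""
        if n == 1:
            return np.array([[1.0]])
        h = haar_matrix(n // 2)
        top = np.kron(h, [1.0, 1.0])
        bot = np.kron(np.eye(n // 2), [1.0, -1.0])
        return np.vstack([top, bot]) / np.sqrt(2.0)

    n = 256
    x = np.linspace(0.0, 1.0, n)
    K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth model kernel
    W = haar_matrix(n)
    Kw = W @ K @ W.T                                    # standard 2-D wavelet transform
    Ks = np.where(np.abs(Kw) > 1e-4 * np.abs(Kw).max(), Kw, 0.0)   # threshold
    print("kept fraction:", np.count_nonzero(Ks) / n**2)

    v = np.random.default_rng(0).standard_normal(n)
    err = np.linalg.norm(W.T @ (Ks @ (W @ v)) - K @ v) / np.linalg.norm(K @ v)
    print("relative action error:", err)                # small despite heavy sparsity
    ```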

  19. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer - I. Algorithms and numerical methods

    NASA Astrophysics Data System (ADS)

    Harries, Tim J.

    2015-04-01

    We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding, we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
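
    The packet-splitting idea can be shown on a toy 1-D purely absorbing slab (not the paper's radiation-hydrodynamics scheme): packets that survive to a splitting surface are divided into lighter children, so more samples reach the optically thick far side while the transmission estimator stays unbiased.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tau, n_packets = 8.0, 20000          # slab optical depth, launched packets
    split_depth, n_split = tau / 2, 8    # splitting surface, children per split

    def transmit(weight, depth):
        """Propagate one packet from `depth`; return transmitted weight."""
        step = rng.exponential()                     # optical depth to absorption
        if depth + step < split_depth:
            return 0.0                               # absorbed in the front half
        if depth < split_depth:
            # Reached the splitting surface: continue as lighter children
            # (memorylessness of the exponential keeps this exact).
            return sum(transmit(weight / n_split, split_depth)
                       for _ in range(n_split))
        return weight if depth + step >= tau else 0.0

    est = sum(transmit(1.0, 0.0) for _ in range(n_packets)) / n_packets
    print(est, np.exp(-tau))             # MC estimate vs the analytic e^(-tau)
    ```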

  20. A block matching-based registration algorithm for localization of locally advanced lung tumors

    PubMed Central

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm3), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left
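
    A much-reduced 2-D sketch of the block-matching pattern (the paper works in 3-D, with multiresolution search, consistency-based displacement selection, and a Procrustes solve): exhaustive translation search per block, then a median across block displacements as the consensus rigid shift. The synthetic image and shift are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))                       # planning image (synthetic)
    mov = np.roll(ref, (3, -2), axis=(0, 1))           # on-treatment image: shifted copy

    bs, win = 8, 6                                     # block size, search window
    disps = []
    for i in range(16, 112, 16):
        for j in range(16, 112, 16):
            block = ref[i:i + bs, j:j + bs]
            best, best_d = np.inf, (0, 0)
            for di in range(-win, win + 1):            # exhaustive translation search
                for dj in range(-win, win + 1):
                    cand = mov[i + di:i + di + bs, j + dj:j + dj + bs]
                    ssd = float(np.sum((block - cand) ** 2))
                    if ssd < best:
                        best, best_d = ssd, (di, dj)
            disps.append(best_d)

    # The median across blocks regularizes outliers; here it recovers the applied
    # shift and stands in for the global rigid (translation-only) transform.
    print(np.median(np.array(disps), axis=0))          # ~ [ 3. -2.]
    ```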

  1. A block matching-based registration algorithm for localization of locally advanced lung tumors

    SciTech Connect

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm3), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0

  2. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.
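
    The analytical-redundancy idea can be sketched in a few lines: compare each sensor against a model-based estimate and declare a failure when the residual exceeds a noise-scaled threshold, then accommodate by substituting the estimate. All signal values below are hypothetical, not F100 data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 10.0, 0.01)
    true_speed = 100.0 + 5.0 * np.sin(0.5 * t)           # hypothetical shaft speed (%)
    model = true_speed + rng.normal(0.0, 0.3, t.size)    # analytical model estimate
    sensor = true_speed + rng.normal(0.0, 0.3, t.size)
    sensor[600:] += 4.0                                  # inject a bias failure at t = 6 s

    residual = sensor - model
    threshold = 6.0 * 0.3 * np.sqrt(2.0)                 # ~6 sigma of the residual noise
    failed = np.abs(residual) > threshold                # detection/isolation flag
    accommodated = np.where(failed, model, sensor)       # accommodation: use the model
    print("first detection at t =", t[np.argmax(failed)])
    ```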

  3. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Astrophysics Data System (ADS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  4. Full-scale engine demonstration of an advanced sensor failure detection, isolation, and accommodation algorithm - Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  5. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was

  6. Numerical simulation of steady and unsteady viscous flow in turbomachinery using pressure based algorithm

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, B.; Ho, Y.; Basson, A.

    1993-07-01

    The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects, in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The value of this artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, and the upstream rotor wake is specified from the experimental data. The results show that the stator potential effects have appreciable influence on the upstream rotor wake
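
    The role of the artificial dissipation can be demonstrated on a toy problem, assuming 1-D periodic advection rather than the turbomachinery equations: a central-difference scheme with broadband noise in the initial data slowly blows up, while a small fourth-difference dissipation term stabilizes it at the cost of slight smearing.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, c, cfl = 200, 1.0, 0.4
    dx = 1.0 / n
    dt = cfl * dx / c
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u0 = np.exp(-200.0 * (x - 0.5)**2) + 0.01 * rng.standard_normal(n)  # pulse + noise

    def rhs(u, eps):
        dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)      # central advection
        d4 = (np.roll(u, -2) - 4.0 * np.roll(u, -1) + 6.0 * u
              - 4.0 * np.roll(u, 1) + np.roll(u, 2))              # fourth difference
        return -c * dudx - (eps / dt) * d4    # eps acts as a per-step damping coefficient

    for eps in (0.0, 0.01):
        u = u0.copy()
        for _ in range(2000):
            k1 = rhs(u, eps)
            k2 = rhs(u + dt * k1, eps)
            u += 0.5 * dt * (k1 + k2)         # two-stage Runge-Kutta (Heun) step
        print(f"eps = {eps}: max|u| = {np.abs(u).max():.2f}")
    # eps = 0 lets the high-frequency noise grow without bound; eps = 0.01 damps it
    # while leaving the smooth pulse nearly intact.
    ```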

  7. Numerical simulation of steady and unsteady viscous flow in turbomachinery using pressure based algorithm

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Ho, Y.; Basson, A.

    1993-01-01

    The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects, in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The value of this artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, and the upstream rotor wake is specified from the experimental data. The results show that the stator potential effects have appreciable influence on the upstream rotor wake

  8. Numerical solution of the Richards equation based catchment runoff model with dd-adaptivity algorithm

    NASA Astrophysics Data System (ADS)

    Kuraz, Michal

    2016-06-01

    This paper presents a pseudo-deterministic catchment runoff model based on the Richards equation [1], the governing equation for subsurface flow. The subsurface flow in a catchment is described here by two-dimensional variably saturated flow (unsaturated and saturated). The governing equation is the Richards equation with a slight modification of the time derivative term, as considered e.g. by Neuman [2]. The nonlinear nature of this problem appears in the unsaturated zone only; however, the delineation of the saturated zone boundary is itself a nonlinear, computationally expensive issue. The simple one-dimensional Boussinesq equation was used here as a rough estimator of the saturated zone boundary. With this estimate, the dd-adaptivity algorithm (see Kuraz et al. [4, 5, 6]) can always start with an optimal subdomain split, making it possible to avoid solving huge systems of linear equations at the initial iteration level of our Richards equation based runoff model.
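
    A hedged sketch of the kind of cheap estimator referred to here: an explicit finite-difference solve of a linearized 1-D Boussinesq equation for the water-table profile, whose result could seed the subdomain split. All parameter values are hypothetical.

    ```python
    import numpy as np

    # Linearized 1-D Boussinesq equation S*h_t = K*D*h_xx as a rough water-table
    # estimator (hypothetical hillslope parameters).
    K, S, D = 1e-4, 0.2, 2.0          # conductivity [m/s], storativity [-], mean depth [m]
    L, n = 100.0, 101                 # hillslope length [m], grid points
    dx = L / (n - 1)
    a = K * D / S                     # effective diffusivity
    dt = 0.4 * dx * dx / a            # respect the explicit stability limit
    h = np.full(n, 1.0)               # initial water-table elevation [m]
    h[0] = 2.0                        # fixed head at the stream boundary

    for _ in range(2000):
        h[1:-1] += a * dt / dx**2 * (h[2:] - 2.0 * h[1:-1] + h[:-2])
        h[-1] = h[-2]                 # no-flow boundary at the divide

    # The resulting profile marks the estimated saturated-zone boundary that
    # seeds the subdomain split.
    print(h[::10].round(3))
    ```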

  9. An efficient algorithm for numerical computations of continuous densities of states

    NASA Astrophysics Data System (ADS)

    Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.

    2016-06-01

    In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the dependence of the results on the algorithm parameters is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling of the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results agree perfectly with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which has so far been out of reach even on supercomputers with importance sampling approaches due to the strong metastabilities that develop at the pseudo-critical coupling of the system, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed
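
    For contrast with the LLR approach analysed here, the sketch below implements the classic (discrete, histogram-based) Wang-Landau algorithm on a small 2-D Ising model; it shows the family's shared idea of updating with respect to an iteratively refined density of states. Unoptimized pure Python; expect a minute or two of runtime.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 8
    spins = rng.choice([-1, 1], size=(L, L))
    N = L * L

    def total_energy(s):
        return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

    E_levels = np.arange(-2 * N, 2 * N + 1, 4)     # allowed energies (periodic model)
    index = {int(E): i for i, E in enumerate(E_levels)}
    lnG = np.zeros(E_levels.size)                  # running estimate of ln g(E)
    H = np.zeros(E_levels.size)                    # visit histogram
    lnf, E = 1.0, total_energy(spins)

    while lnf > 1e-3:
        for _ in range(20000):
            i, j = rng.integers(L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = int(2 * spins[i, j] * nb)
            dlng = lnG[index[E]] - lnG[index[E + dE]]
            if dlng >= 0.0 or rng.random() < np.exp(dlng):   # min(1, g(E)/g(E'))
                spins[i, j] *= -1
                E += dE
            lnG[index[E]] += lnf
            H[index[E]] += 1
        visited = H[H > 0]
        if visited.min() > 0.8 * visited.mean():   # rough flatness test on visited levels
            H[:] = 0
            lnf *= 0.5                             # refine the density-of-states estimate
    print("ln g(E) spans", lnG.max() - lnG.min())
    ```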

  10. Numerical Evaluation of Fluid Mixing Phenomena in Boiling Water Reactor Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Takase, Kazuyuki

    Thermal-hydraulic design of the current boiling water reactor (BWR) is performed with subchannel analysis codes that incorporate correlations based on empirical results, including actual-size tests. For the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, an actual-size test of an embodiment of its design is likewise required to confirm or modify such correlations. In this situation, development of a method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because these tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface tracking method, TPFIT, was developed to calculate the detailed information of the two-phase flow. In this paper, we first verify the TPFIT code by comparing it with existing 2-channel air-water mixing experimental results. Secondly, the TPFIT code is applied to the simulation of steam-water two-phase flow in a model of two subchannels of current BWR and FLWR rod bundles. Fluid mixing was observed at the gap between the subchannels. The existing two-phase flow correlation for fluid mixing is evaluated using the detailed numerical simulation data. The data indicate that the pressure difference between the fluid channels is responsible for the fluid mixing, and thus the effects of the time-averaged pressure difference and its fluctuations must be incorporated into the two-phase flow correlation for fluid mixing. When the inlet quality ratio of the subchannels is relatively large, the evaluation precision of the existing two-phase flow correlations for fluid mixing is found to be relatively low.

  11. In-Situ Assays Using a New Advanced Mathematical Algorithm - 12400

    SciTech Connect

    Oginni, B.M.; Bronson, F.L.; Field, M.B.; Lamontagne, J.; LeBlanc, P.J.; Morris, K.E.; Mueller, W.F.; Atrashkevich, V.

    2012-07-01

    Current mathematical efficiency modeling software for in-situ counting, such as the commercially available In-Situ Object Calibration Software (ISOCS), typically allows the description of measurement geometries via a list of well-defined templates which describe regular objects, such as boxes, cylinders, or spheres. While for many situations these regular objects are sufficient to describe the measurement conditions, there are occasions on which a more detailed model is desired. We have developed a new all-purpose geometry template that extends the flexibility of the current ISOCS templates. This new template still utilizes the same advanced mathematical algorithms as the current templates, but allows the extension to a multitude of shapes and objects that can be placed at any location and even combined. In addition, detectors can be placed anywhere and aimed at any location within the measurement scene. Several applications of this algorithm to in-situ waste assay measurements, as well as validations of this template using Monte Carlo calculations and experimental measurements, are studied. Presented in this paper is a new template of the mathematical algorithms for evaluating efficiencies. This new template combines all the advantages of ISOCS: it allows the use of very complex geometries, it allows stacking of geometries on one another in the same measurement scene, and it allows the detector to be placed anywhere in the measurement scene, pointing in any direction. We have shown that the template compares well with the previous ISOCS software within the limit of convergence of the code, and also compares well with MCNPX and measured data within the joint uncertainties of the code and the data. The new template agrees with ISOCS to within 1.5% at all energies. It agrees with MCNPX to within 10% at all energies, and for most geometries within 5%. It finally agrees with measured data to within 10%. This mathematical algorithm can now be

  12. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains, containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The contemporary solution of velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure is reported in the paper, for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.
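
    The heart of each time step is the projection that enforces incompressibility. Assuming a fully periodic domain, the projection can be written compactly in Fourier space; the paper itself uses finite volumes and parallel direct solvers, not FFTs, so this is only the idea in miniature.

    ```python
    import numpy as np

    n = 64
    k = np.fft.fftfreq(n, d=1.0 / n) * 2.0 * np.pi      # wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                      # guard the mean mode

    rng = np.random.default_rng(0)
    u = rng.standard_normal((n, n))                     # provisional velocity u*
    v = rng.standard_normal((n, n))

    # Projection step: solve the pressure Poisson equation lap(p) = div(u*)
    # spectrally and subtract grad(p), leaving a divergence-free field.
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh
    p_h = div_h / (-k2)
    uh -= 1j * kx * p_h
    vh -= 1j * ky * p_h
    u, v = np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

    # Check: the spectral divergence of the projected field vanishes.
    div = np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)).real
    print(np.abs(div).max())                            # ~ machine precision
    ```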

  13. Survey and Implementation on DSP of Algorithms of Robot Paths Generation and of Numeric Control for Mobile Robot

    NASA Astrophysics Data System (ADS)

    Bouallegue, Kais; Chaari, Abdessattar

    In this study, we propose a numeric strategy that permits the generation of paths of any shape for scheduling the trajectories of a car-like mobile robot, where the planned motions are continuous sequences in the robot's configuration space. These paths are programmed so as to obtain various types of closed or open trajectories. We are interested in controlling the motion of the robot from an initial position to a final position while optimizing the energy consumed in its alternating circular motion on both sides of the segment joining these two points. We present a new method based on a numeric approach derived from the kinematic equations of the robot. This new technique of numeric, adaptive and dynamic robot control is implemented on a DSP21065L of the SHARC family. The algorithm drives the robot from an initial departure position to a final arrival position in the absence of obstacles.
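
    A minimal sketch of kinematics-based path generation, assuming the standard car-like (bicycle) model with a hypothetical wheelbase and steering programme; a sinusoidal steering law yields alternating circular arcs on either side of the straight segment, as described above.

    ```python
    import numpy as np

    L, dt = 0.5, 0.01        # wheelbase [m] and time step (hypothetical values)
    x, y, theta = 0.0, 0.0, 0.0
    path = []
    for step in range(2000):
        t = step * dt
        v = 0.5                            # programmed forward speed [m/s]
        phi = 0.4 * np.cos(0.5 * t)        # programmed steering angle [rad]
        x += v * np.cos(theta) * dt        # forward-Euler car-like kinematics
        y += v * np.sin(theta) * dt
        theta += v * np.tan(phi) / L * dt
        path.append((x, y))
    path = np.array(path)
    print(path[-1], np.abs(path[:, 1]).max())   # net advance in x, bounded sway in y
    ```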

  14. Advances in numerical solutions to integral equations in liquid state theory

    NASA Astrophysics Data System (ADS)

    Howard, Jesse J.

    Solvent effects play a vital role in the accurate description of the free energy profile for solution phase chemical and structural processes. The inclusion of solvent effects in any meaningful theoretical model, however, has proven to be a formidable task. Generally, methods involving Poisson-Boltzmann (PB) theory and molecular dynamics (MD) simulations are used, but they either fail to accurately describe the solvent effects or require an exhaustive computational effort to overcome sampling problems. An alternative to these methods is the integral equations (IEs) of liquid state theory, which have become more widely applicable due to recent advancements in the theory of interaction site fluids and in the numerical methods used to solve the equations. In this work a new numerical method is developed based on a Newton-type scheme coupled with Picard/MDIIS routines. To extend the range of these numerical methods to large-scale data systems, the size of the Jacobian is reduced using basis functions, and the Newton steps are calculated using a GMRes solver. The method is then applied to calculate solutions to the 3D reference interaction site model (RISM) IEs of statistical mechanics, which are derived from first principles, for a solute model of a pair of parallel graphene plates at various separations in pure water. The 3D IEs are then extended to electrostatic models using an exact treatment of the long-range Coulomb interactions for negatively charged walls and DNA duplexes in aqueous electrolyte solutions, to calculate the density profiles and solution thermodynamics. It is found that the 3D-IEs provide a qualitative description of the density distributions of the solvent species when compared to MD results, but at much reduced computational effort in comparison to MD simulations. The thermodynamics of the solvated systems are also qualitatively reproduced by the IE results. The findings of this work show the IEs to be a valuable tool for the study and prediction of
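
    The Newton/Krylov combination can be exercised on a classic stand-in for such integral equations, the Chandrasekhar H-equation, using SciPy's newton_krylov (a Newton-GMRES solver). This illustrates the solver style only, not the authors' 3D-RISM code.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n, c = 100, 0.9
    mu = (np.arange(n) + 0.5) / n            # midpoint quadrature nodes on (0, 1)
    A = mu[:, None] / (mu[:, None] + mu[None, :])

    def residual(h):
        # Chandrasekhar H-equation: H = (1 - (c/2) * int mu H(nu) / (mu + nu) dnu)^(-1)
        return h - 1.0 / (1.0 - (c / (2.0 * n)) * (A @ h))

    h = newton_krylov(residual, np.ones(n), f_tol=1e-8)
    print(h[0], h[-1])                       # smooth, monotone solution on (0, 1)
    ```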

  15. A critical evaluation of numerical algorithms and flow physics in complex supersonic flows

    NASA Astrophysics Data System (ADS)

    Aradag, Selin

    In this research, two complex supersonic flows are selected for Navier-Stokes CFD simulations. The first test case is "Supersonic Flow over an Open Rectangular Cavity". Open cavity flow fields are remarkably complicated, with internal and external regions that are coupled via self-sustained shear layer oscillations. Supersonic flow past a cavity has numerous applications in store carriage and release. Internal carriage of stores, which can be modeled using a cavity configuration, is used on supersonic aircraft in order to reduce radar cross section, aerodynamic drag and aerodynamic heating. Supersonic, turbulent, three-dimensional unsteady flow past an open rectangular cavity is simulated to understand the physics and three-dimensional nature of the cavity flow oscillations. The influences of numerical parameters such as the numerical flux scheme, computation time and flux limiter on the computed flow are determined. Two-dimensional simulations are also performed for comparison purposes. The next test case is "The Computational Design of the Boeing/AFOSR Mach 6 Wind Tunnel". Due to huge differences between geometrical scales, this problem is both challenging and computationally intensive. It is believed that most of the experimental data obtained from conventional ground testing facilities are not reliable due to high levels of noise associated with the acoustic fluctuations from the turbulent boundary layers on the wind tunnel walls. Therefore, it is very important to have quiet testing facilities for hypersonic flow research. The Boeing/AFOSR Mach 6 wind tunnel at Purdue University has been designed as a quiet tunnel, for which the noise level is an order of magnitude lower than that in conventional wind tunnels. However, quiet flow is achieved in the Purdue Mach 6 tunnel only at low Reynolds numbers. Early transition of the nozzle wall boundary layer has been identified as the cause of the test section noise. Separation bubbles on the bleed lip and associated

  16. A hybrid numerical technique for predicting the aerodynamic and acoustic fields of advanced turboprops

    NASA Technical Reports Server (NTRS)

    Homicz, G. F.; Moselle, J. R.

    1985-01-01

    A hybrid numerical procedure is presented for the prediction of the aerodynamic and acoustic performance of advanced turboprops. A hybrid scheme is proposed which in principle leads to a consistent simultaneous prediction of both fields. In the inner flow a finite difference method, the Approximate-Factorization Alternating-Direction-Implicit (ADI) scheme, is used to solve the nonlinear Euler equations. In the outer flow the linearized acoustic equations are solved via a Boundary-Integral Equation (BIE) method. The two solutions are iteratively matched across a fictitious interface in the flow so as to maintain continuity. At convergence the resulting aerodynamic load prediction will automatically satisfy the appropriate free-field boundary conditions at the edge of the finite difference grid, while the acoustic predictions will reflect the back-reaction of the radiated field on the magnitude of the loading source terms, as well as refractive effects in the inner flow. The equations and logic needed to match the two solutions are developed and the computer program implementing the procedure is described. Unfortunately, no converged solutions were obtained, due to unexpectedly large running times. The reasons for this are discussed and several means to alleviate the situation are suggested.

  17. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficiency in the use of computational resources, it is very common in numerical modeling for hydro-engineering to apply regular linearization techniques to the nonlinear partial differential equations obtained in environmental flow studies. Sometimes this simplification is accompanied by the omission of the nonlinear terms involved in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, a traditional and widely used water quality model such as QUAL2K preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and computer power enhancement. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here is based on two genetic algorithms used in a nested way in order to find two different types of solution sets. The first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled by using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of a non-linear equation system proved to serve as a flexible tool for handling the intrinsic non-linearity that emerges from the interactions occurring between the multiple variables involved in water quality studies. However, because there is a strong data limitation in
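
    A toy rendition of the nested arrangement (the system, variables and rate constants are all invented): an inner real-coded GA drives the residual of a small nonlinear system to zero for given parameters, while an outer GA solves the inverse problem by calling the inner GA inside its fitness function. Expect tens of seconds at these population sizes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ga_minimize(obj, lo, hi, pop=30, gens=40):
        """Tiny real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
        dim = lo.size
        X = rng.uniform(lo, hi, (pop, dim))
        F = np.array([obj(x) for x in X])
        for _ in range(gens):
            kids = np.empty_like(X)
            for k in range(pop):
                i, j = rng.integers(pop, size=2)
                a = X[i] if F[i] < F[j] else X[j]            # tournament parent 1
                i, j = rng.integers(pop, size=2)
                b = X[i] if F[i] < F[j] else X[j]            # tournament parent 2
                w = rng.random(dim)
                child = w * a + (1.0 - w) * b                # blend crossover
                child += rng.normal(0.0, 0.05, dim) * (hi - lo)
                kids[k] = np.clip(child, lo, hi)
            Fk = np.array([obj(x) for x in kids])
            allX, allF = np.vstack([X, kids]), np.concatenate([F, Fk])
            keep = np.argsort(allF)[:pop]                    # elitist survival
            X, F = allX[keep], allF[keep]
        return X[np.argmin(F)]

    def residual(x, k):
        """Invented nonlinear 'water quality' balance for two coupled variables."""
        return np.array([k[0] * (1.0 - x[0]) - x[0] * x[1] ** 2,
                         k[1] * (0.5 - x[1]) + x[0] * x[1] - x[1] ** 2])

    def solve_system(k):
        # Inner GA: concentrations satisfying the nonlinear system for given parameters.
        return ga_minimize(lambda x: float(np.sum(residual(x, k) ** 2)),
                           np.zeros(2), 2.0 * np.ones(2))

    k_true = np.array([0.8, 0.3])
    x_obs = solve_system(k_true)              # synthetic 'measured' concentrations

    # Outer GA: the inverse problem for the parameters; every fitness evaluation
    # invokes the inner GA (the nested arrangement described in the record).
    k_fit = ga_minimize(lambda k: float(np.sum((solve_system(k) - x_obs) ** 2)),
                        np.zeros(2), np.ones(2), pop=20, gens=30)
    print(k_true, k_fit.round(3))             # roughly recovers the true parameters
    ```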

  18. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-10-01

    In this paper, we focus on the influence of various parameters on the results of the niching genetic algorithm inversion procedure, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-waveform integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields a model with faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema found in the inversion. We also compare the results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still attain a high precision, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.
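
    A compact fitness-sharing sketch of why the critical separation (sharing) radius matters: on a five-peak test function, the number of distinct optima the final population occupies depends strongly on the chosen radius. The function and parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        return np.sin(5.0 * np.pi * x) ** 2          # five equal peaks on [0, 1]

    def niching_ga(radius, pop=60, gens=80):
        X = rng.random(pop)
        for _ in range(gens):
            fit = f(X)
            # Fitness sharing: divide raw fitness by the niche count within `radius`.
            dist = np.abs(X[:, None] - X[None, :])
            niche = np.sum(np.clip(1.0 - dist / radius, 0.0, None), axis=1)
            p = fit / niche
            p = p / p.sum()
            X = X[rng.choice(pop, size=pop, p=p)]    # roulette selection on shared fitness
            X = np.clip(X + rng.normal(0.0, 0.01, pop), 0.0, 1.0)   # mutation
        on_peak = X[f(X) > 0.9]                      # individuals close to some optimum
        return len(np.unique(np.round((on_peak - 0.1) / 0.2)))

    for radius in (0.02, 0.1, 0.5):
        print(radius, niching_ga(radius), "peaks occupied")
    ```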

  19. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-09-01

    In this paper, we focus on the influence of various parameters on the results of the niching genetic algorithm inversion procedure, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-waveform integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields a model with faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema found in the inversion. We also compare the results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still attain a high precision, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.

  20. Fast algorithms for transport models

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles, and the demonstration of their effectiveness both in a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).

  1. Real-space, mean-field algorithm to numerically calculate long-range interactions

    NASA Astrophysics Data System (ADS)

    Cadilhe, A.; Costa, B. V.

    2016-02-01

    Long-range interactions are known to be difficult to treat in statistical mechanics models. There are some approaches that introduce a cutoff in the interactions or make use of reaction field approaches. However, those treatments suffer from limited applicability, in particular close to phase transitions. The use of open boundary conditions allows the sum of the long-range interactions over the entire system to be done; however, this approach demands a sum over all degrees of freedom in the system, which makes a numerical treatment prohibitive. Techniques like the Ewald summation or fast multipole expansion account for the exact interactions but are still limited to a few thousand particles. In this paper we introduce a novel mean-field approach to treat long-range interactions. The method is based on the division of the system into cells. In the inner cell, which contains the particle in sight, the 'local' interactions are computed exactly; the 'far' contributions are computed, for each of the remaining cells, as the average over the particles inside that cell interacting with the particle in sight. Using this approach, the large- and small-cell limits are exact. At a fixed cell size, the method also becomes exact in the limit of large lattices. We have applied the procedure to the two-dimensional anisotropic dipolar Heisenberg model. A detailed comparison between our method, the exact calculation and the cutoff radius approximation was carried out. Our results show that the cutoff-cell approach outperforms any cutoff radius approach, as it maintains the long-range memory present in these interactions, contrary to the cutoff radius approximation. In addition, we calculated the critical temperature and the critical behavior of the specific heat of the anisotropic Heisenberg model using our method. The results are in excellent agreement with extensive Monte Carlo simulations using Ewald summation.
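
    A hedged numerical sketch of the cell idea on an invented 2-D scalar-spin system with 1/r^3 couplings: the inner cell is summed exactly, while in far cells each spin is replaced by its cell average; the result is compared against the exact sum and a plain cutoff-radius estimate, which typically fares worse because it discards the long-range tail entirely.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, cell, rc = 32, 4, 4.0
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    # Structured 'spins' (stripes plus noise) so far regions have nonzero mean.
    s = np.sign(np.sin(2.0 * np.pi * jj / n) + 0.5 * rng.standard_normal((n, n)))
    i0, j0 = n // 2, n // 2                           # tagged particle
    r = np.hypot(ii - i0, jj - j0)
    r[i0, j0] = np.inf                                # exclude self-interaction

    pair = s * s[i0, j0] / r**3                       # couplings to the tagged site
    exact = pair.sum()
    cutoff = pair[r <= rc].sum()                      # plain cutoff-radius estimate

    approx = 0.0
    for ci in range(0, n, cell):
        for cj in range(0, n, cell):
            blk = (slice(ci, ci + cell), slice(cj, cj + cell))
            if ci <= i0 < ci + cell and cj <= j0 < cj + cell:
                approx += pair[blk].sum()             # inner cell: exact
            else:
                # far cell: each spin replaced by the cell average (mean-field)
                approx += (s[blk].mean() * s[i0, j0] / r[blk]**3).sum()

    print(f"exact {exact:.4f}  cutoff {cutoff:.4f}  cutoff-cell {approx:.4f}")
    ```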

  2. Recent advances in theoretical and numerical studies of wire array Z-pinch in the IAPCM

    SciTech Connect

    Ding, Ning; Zhang, Yang; Xiao, Delong; Wu, Jiming; Huang, Jun; Yin, Li; Sun, Shunkai; Xue, Chuang; Dai, Zihuan; Ning, Cheng; Shu, Xiaojian; Wang, Jianguo; Li, Hua

    2014-12-15

    The fast Z-pinch has produced the most powerful X-ray radiation source in the laboratory and also shows the possibility of driving inertial confinement fusion (ICF). Recent advances in wire-array Z-pinch research at the Institute of Applied Physics and Computational Mathematics are presented in this paper. A typical wire-array Z-pinch process has three phases: wire plasma formation and ablation; implosion and MRT instability development; and stagnation and radiation. A mass injection model with an azimuthal modulation coefficient is used to describe the wire initiation, and the dynamics of the ablated plasmas of wire-array Z-pinches in (r, θ) geometry is numerically studied. For the implosion phase, a two-dimensional (r, z) three-temperature radiation MHD code, MARED, has been developed to investigate the development of the Magneto-Rayleigh-Taylor (MRT) instability. We also analyze the implosion modes of nested wire-arrays and find that the inner wire-array is hardly affected before the impact of the outer wire-array. When the plasma, accelerated to high speed in the implosion stage, stagnates on the axis, abundant X-ray radiation is produced. The energy spectrum of the radiation and the production mechanism are investigated. The computed X-ray pulse shows reasonable agreement with the experimental result. We also suggest that using alloyed wire-arrays can increase the multi-keV K-shell yield by decreasing the opacity of the K-shell lines. In addition, we use a detailed circuit model to study the energy coupling between the generator and the Z-pinch implosion. Recently, we have been concentrating on the problems of Z-pinch driven ICF, such as dynamic hohlraum and capsule implosions. Our numerical investigations of the interaction of wire-array Z-pinches with foam convertors show qualitative agreement with experimental results on the “Qiangguang I” facility. An integrated two-dimensional simulation of a dynamic hohlraum driven capsule implosion provides us with the physical insights of wire

  3. Recent advances in theoretical and numerical studies of wire array Z-pinch in the IAPCM

    NASA Astrophysics Data System (ADS)

    Ding, Ning; Zhang, Yang; Xiao, Delong; Wu, Jiming; Huang, Jun; Yin, Li; Sun, Shunkai; Xue, Chuang; Dai, Zihuan; Ning, Cheng; Shu, Xiaojian; Wang, Jianguo; Li, Hua

    2014-12-01

    The fast Z-pinch has produced the most powerful X-ray radiation source in the laboratory and also shows the possibility of driving inertial confinement fusion (ICF). Recent advances in wire-array Z-pinch research at the Institute of Applied Physics and Computational Mathematics are presented in this paper. A typical wire-array Z-pinch process has three phases: wire plasma formation and ablation; implosion and MRT instability development; and stagnation and radiation. A mass injection model with an azimuthal modulation coefficient is used to describe the wire initiation, and the dynamics of the ablated plasmas of wire-array Z-pinches in (r, θ) geometry is numerically studied. For the implosion phase, a two-dimensional (r, z) three-temperature radiation MHD code, MARED, has been developed to investigate the development of the Magneto-Rayleigh-Taylor (MRT) instability. We also analyze the implosion modes of nested wire-arrays and find that the inner wire-array is hardly affected before the impact of the outer wire-array. When the plasma, accelerated to high speed in the implosion stage, stagnates on the axis, abundant X-ray radiation is produced. The energy spectrum of the radiation and the production mechanism are investigated. The computed X-ray pulse shows reasonable agreement with the experimental result. We also suggest that using alloyed wire-arrays can increase the multi-keV K-shell yield by decreasing the opacity of the K-shell lines. In addition, we use a detailed circuit model to study the energy coupling between the generator and the Z-pinch implosion. Recently, we have been concentrating on the problems of Z-pinch driven ICF, such as dynamic hohlraum and capsule implosions. Our numerical investigations of the interaction of wire-array Z-pinches with foam convertors show qualitative agreement with experimental results on the "Qiangguang I" facility. An integrated two-dimensional simulation of a dynamic hohlraum driven capsule implosion provides us with the physical insights of wire

  4. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computers having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been
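
    The backbone of any such neighbor search is a cell list. The plain-NumPy sketch below builds one with a sort (unlike the paper's deterministic, sort- and atomics-free CUDA construction) and gathers neighbors from the 27 surrounding cells under the periodic minimum-image convention.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    box, rc, n = 10.0, 1.0, 2000
    pos = rng.uniform(0.0, box, (n, 3))

    ncell = int(box / rc)                         # cells no smaller than the cutoff
    cell3 = np.floor(pos / box * ncell).astype(int) % ncell
    cell_id = (cell3[:, 0] * ncell + cell3[:, 1]) * ncell + cell3[:, 2]
    order = np.argsort(cell_id, kind="stable")    # sort-based ordering (see note above)
    pos_s, cell_s = pos[order], cell_id[order]
    start = np.searchsorted(cell_s, np.arange(ncell**3))
    end = np.searchsorted(cell_s, np.arange(ncell**3), side="right")

    def neighbours(i):
        """Indices (into the sorted arrays) within rc of particle i, via 27 cells."""
        c = np.floor(pos_s[i] / box * ncell).astype(int)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    cc = (c + [dx, dy, dz]) % ncell
                    cid = (cc[0] * ncell + cc[1]) * ncell + cc[2]
                    for j in range(start[cid], end[cid]):
                        d = pos_s[j] - pos_s[i]
                        d -= box * np.round(d / box)      # periodic minimum image
                        if j != i and d @ d < rc * rc:
                            out.append(j)
        return out

    print(len(neighbours(0)))                     # ~8 at this density
    ```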

  5. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (maximum benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent feature of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost

  6. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Mahmoud, Imbaby I.; El Tokhy, Mohamed S.

    2015-02-01

    X-ray spectroscopy is widely used for in-situ sample analysis, so drawing and assessing X-ray spectra with high accuracy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector cooled with liquid nitrogen is used for signal extraction. The resolution of the ADC is 12 bits and its sampling rate is 5 MHz. Several algorithms are implemented; they were run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. These algorithms cover signal preprocessing, signal separation and recovery, and spectrum drawing, and statistical measurements are used for their evaluation. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT) and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods; however, the CWT takes much longer to execute. Moreover, three different algorithms that allow correction of X-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is presented. The obtained results confirm that the second-derivative peak search algorithm as well as the extrema algorithm have very small errors in comparison with the 1D non-derivative peak search algorithm; however, the second-derivative peak search algorithm takes much longer to execute. Therefore, the extrema algorithm gives better results than the other algorithms. It has the advantage of recovering and
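
    The second-derivative peak search lends itself to a short illustration on a synthetic spectrum (line positions, widths, and thresholds all hypothetical): after smoothing, overlapped Gaussian lines appear as separate negative dips in the discrete second derivative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ch = np.arange(1024)
    spectrum = (800 * np.exp(-0.5 * ((ch - 480) / 8.0) ** 2)     # two overlapped lines
                + 500 * np.exp(-0.5 * ((ch - 505) / 8.0) ** 2)
                + rng.poisson(20, ch.size))                      # flat noisy background

    kernel = np.ones(15) / 15.0
    smooth = np.convolve(spectrum, kernel, mode="same")          # box smoothing
    d2 = np.pad(np.diff(smooth, n=2), 1)                         # second derivative,
                                                                 # re-aligned to channels
    # Peaks show up as pronounced local minima of the second derivative.
    dips = [i for i in range(1, d2.size - 1)
            if d2[i] < d2[i - 1] and d2[i] <= d2[i + 1] and d2[i] < -3.0]
    print(dips)    # channels near 480 and 505, resolved despite the overlap
    ```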

  7. Thin film subsurface environments; Advanced X-ray spectroscopies and a novel Bayesian inference modeling algorithm

    NASA Astrophysics Data System (ADS)

    Church, Jonathan R.

    New condensed matter metrologies are being used to probe ever smaller length scales. In support of the diverse field of materials research, synchrotron-based spectroscopies provide sub-micron spatial resolution and a breadth of photon wavelengths for scientific studies. For electronic materials, the thinnest layers in a complementary metal-oxide-semiconductor (CMOS) device have been reduced to just a few nanometers. This raises concerns about layer uniformity, complete surface coverage, and interfacial quality. Deposition processes like chemical vapor deposition (CVD) and atomic layer deposition (ALD) have been shown to deposit high-quality films at the requisite thicknesses. However, new materials beget new chemistries and, unfortunately, unwanted side-reactions and by-products. CVD/ALD tools and chemical precursors provided by our collaborators at Air Liquide utilized these new chemistries, and films were deposited for which novel spectroscopic characterization methods were used. The second portion of the thesis focuses on fading and decomposing paint pigments in iconic artworks; efforts have been directed towards understanding the micro-environments causing degradation. Hard X-ray photoelectron spectroscopy (HAXPES) and variable kinetic energy X-ray photoelectron spectroscopy (VKE-XPS) are advanced XPS techniques capable of elucidating both chemical environments and electronic band structures in sub-surface regions of electronic materials. HAXPES has been used to study the electronic band structure in a typical CMOS structure; it will be shown that unexpected band alignments are associated with the presence of electronic charges near a buried interface. Additionally, a computational modeling algorithm, Bayes-Sim, was developed to reconstruct compositional depth profiles (CDP) using VKE-XPS data sets; a subset algorithm also reconstructs CDPs from angle-resolved XPS data. Reconstructed CDPs produced by Bayes-Sim were most strongly correlated to the real

  8. Clinical accuracy of a continuous glucose monitoring system with an advanced algorithm.

    PubMed

    Bailey, Timothy S; Chang, Anna; Christiansen, Mark

    2015-03-01

    We assessed the performance of a modified Dexcom G4 Platinum system with an advanced algorithm, in comparison with frequent venous samples measured on a laboratory reference (YSI) during a clinic session and in comparison with self-monitored blood glucose (SMBG) during home use. Fifty-one subjects with diabetes were enrolled in a prospective multicenter study. Subjects wore one sensor for 7 days and participated in one 12-hour in-clinic session on day 1, 4, or 7 to collect YSI reference venous glucose every 15 minutes and capillary SMBG tests every 30 minutes. Carbohydrate consumption and insulin dosing and timing were manipulated to obtain data in the low and high glucose ranges. In comparison with the laboratory reference method (n = 2,263), the system provided mean and median absolute relative differences (ARD) of 9.0% and 7.0%, respectively. The mean absolute difference for CGM was 6.4 mg/dL when the YSI values were within the hypoglycemia range (≤ 70 mg/dL). The percentage of readings in the clinically accurate Clarke error grid A zone was 92.4%, and in the benign-error B zone 7.1%. The majority of sensors (73%) had an aggregated MARD in reference to YSI of ≤ 10%. The MARD of CGM versus SMBG for home use was 11.3%. The study showed that the point and rate accuracy, clinical accuracy, reliability, and consistency over the duration of wear and across glycemic ranges were superior to current commercial real-time CGM systems. The performance of this CGM approaches that of a self-monitoring blood glucose meter in a real-use environment. PMID:25370149
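
    For readers unfamiliar with the headline metric, the sketch below shows how the mean and median absolute relative difference (ARD/MARD) are conventionally computed from paired CGM and reference readings; the numbers are illustrative, not study data.

      import numpy as np

      def absolute_relative_difference(cgm, reference):
          # Per-pair ARD (%) between CGM readings and reference (e.g. YSI) values.
          cgm = np.asarray(cgm, dtype=float)
          reference = np.asarray(reference, dtype=float)
          return 100.0 * np.abs(cgm - reference) / reference

      # Illustrative paired readings in mg/dL, not data from the study.
      cgm = [102, 68, 155, 240, 89]
      ysi = [110, 72, 150, 225, 95]
      ard = absolute_relative_difference(cgm, ysi)
      print(f"MARD = {ard.mean():.1f}%, median ARD = {np.median(ard):.1f}%")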

  9. Advanced material modelling in numerical simulation of primary acetabular press-fit cup stability.

    PubMed

    Souffrant, R; Zietz, C; Fritsche, A; Kluess, D; Mittelmeier, W; Bader, R

    2012-01-01

    Primary stability of artificial acetabular cups, used for total hip arthroplasty, is required for subsequent osteointegration and good long-term clinical results of the implant. Although closed-cell polymer foams represent an adequate bone substitute in experimental studies investigating primary stability, correct numerical modelling of this material depends on the parameter selection. Material parameters necessary for crushable-foam plasticity behaviour were derived from numerical simulations matched with experimental tests of the polymethacrylimide raw material. Experimental primary stability tests of acetabular press-fit cups, consisting of static shell assembly followed by pull-out and lever-out testing, were subsequently simulated using finite element analysis. The identified and optimised parameters allowed accurate numerical reproduction of the raw-material tests. The correlation between the experimental tests and the numerical simulation of primary implant stability depended on the value of the interference fit. However, the validated material model provides the opportunity for subsequent parametric numerical studies. PMID:22817471

  10. Multi-objective optimization of combined Brayton and inverse Brayton cycles using advanced optimization algorithms

    NASA Astrophysics Data System (ADS)

    Venkata Rao, R.; Patel, Vivek

    2012-08-01

    This study explores the use of teaching-learning-based optimization (TLBO) and artificial bee colony (ABC) algorithms for determining the optimum operating conditions of combined Brayton and inverse Brayton cycles. Maximization of the thermal efficiency and specific work of the system are considered as the objective functions and are treated simultaneously for multi-objective optimization. The upper cycle pressure ratio and the bottom cycle expansion pressure of the system are considered as design variables. An application example is presented to demonstrate the effectiveness and accuracy of the proposed algorithms. The optimization results are validated by comparing them with those obtained using the genetic algorithm (GA) and particle swarm optimization (PSO) on the same example, and the proposed algorithms yield improved results. The effect of varying the algorithm parameters on convergence and on the fitness values of the objective functions is also reported.
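
    TLBO itself is well documented in the literature; the sketch below is a minimal single-objective Python version (teacher phase plus learner phase) applied to a stand-in objective, since the paper's actual thermodynamic objective functions are not reproduced here. All hyperparameters and the test function are assumptions.

      import numpy as np

      def tlbo_minimize(f, bounds, pop=20, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          X = rng.uniform(lo, hi, size=(pop, len(lo)))
          F = np.apply_along_axis(f, 1, X)
          for _ in range(iters):
              # Teacher phase: move the class toward the best learner.
              teacher = X[F.argmin()]
              TF = rng.integers(1, 3)              # teaching factor, 1 or 2
              Xn = np.clip(X + rng.random(X.shape) * (teacher - TF * X.mean(axis=0)),
                           lo, hi)
              Fn = np.apply_along_axis(f, 1, Xn)
              better = Fn < F
              X[better], F[better] = Xn[better], Fn[better]
              # Learner phase: each learner learns from a random peer.
              for i in range(pop):
                  j = rng.integers(pop)
                  if j == i:
                      continue
                  step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
                  xi = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
                  fi = f(xi)
                  if fi < F[i]:
                      X[i], F[i] = xi, fi
          return X[F.argmin()], F.min()

      # Hypothetical stand-in objective over two design variables, e.g. a
      # weighted negative of efficiency and specific work.
      best_x, best_f = tlbo_minimize(lambda x: (x[0] - 8) ** 2 + (x[1] - 2) ** 2,
                                     bounds=[(1, 20), (0.5, 5)])
      print(best_x, best_f)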

  11. Free Radical Addition Polymerization Kinetics without Steady-State Approximations: A Numerical Analysis for the Polymer, Physical, or Advanced Organic Chemistry Course

    ERIC Educational Resources Information Center

    Iler, H. Darrell; Brown, Amber; Landis, Amanda; Schimke, Greg; Peters, George

    2014-01-01

    A numerical analysis of the free radical addition polymerization system is described that provides those teaching polymer, physical, or advanced organic chemistry courses the opportunity to introduce students to numerical methods in the context of a simple but mathematically stiff chemical kinetic system. Numerical analysis can lead students to an…
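
    A minimal stiff-kinetics sketch in the spirit of the exercise, assuming a lumped initiator/radical/monomer scheme with hypothetical rate constants (the article's actual mechanism and values are not reproduced here); the implicit BDF integrator handles the stiffness that defeats naive explicit methods.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Free-radical polymerization without the steady-state approximation:
      # initiator decomposition, propagation, and termination integrated
      # directly. Rate constants are illustrative round numbers chosen to
      # make the system stiff.
      kd, kp, kt, f_eff = 1e-5, 1e2, 1e7, 0.5  # s^-1, L/mol/s, L/mol/s, efficiency

      def rates(t, y):
          I, R, M = y                      # initiator, total radicals, monomer
          return [-kd * I,
                  2 * f_eff * kd * I - 2 * kt * R ** 2,
                  -kp * M * R]

      sol = solve_ivp(rates, (0, 3600), [1e-2, 0.0, 1.0], method="BDF",
                      rtol=1e-8, atol=1e-12)
      print(f"monomer conversion after 1 h: {1 - sol.y[2, -1]:.3f}")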

  12. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    SciTech Connect

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  13. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  14. Advanced numerical methods for three dimensional two-phase flow calculations

    SciTech Connect

    Toumi, I.; Caruge, D.

    1997-07-01

    This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These are finite volume methods based on Approximate Riemann Solver concepts, which define the convective fluxes in terms of mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve the gas dynamics equations. Since the two-fluid model is hyperbolic, this numerical method is well suited to the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss improvements made to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for pressurised water reactors, concerning upper plenum calculations and a steady-state flow in the core with rod-bow effect evaluation, are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate, non-oscillating solutions for two-phase flow calculations.
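
    As background on the Roe-type flux idea the abstract builds on, the sketch below applies it to the inviscid Burgers equation, where the Roe-averaged wave speed is simply the mean of the neighbouring states; the paper's two-fluid solver is far more involved.

      import numpy as np

      def roe_flux_burgers(uL, uR):
          # Roe flux for u_t + (u^2/2)_x = 0: upwind the interface flux using
          # the Roe-averaged wave speed a = (uL + uR) / 2.
          fL, fR = 0.5 * uL ** 2, 0.5 * uR ** 2
          a = 0.5 * (uL + uR)
          return np.where(a >= 0, fL, fR)

      def step(u, dx, dt):
          # First-order finite-volume update on a periodic grid.
          flux = roe_flux_burgers(u, np.roll(u, -1))
          return u - dt / dx * (flux - np.roll(flux, 1))

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u = 1.0 + 0.5 * np.sin(2 * np.pi * x)       # smooth data that shocks up
      dx, dt = x[1] - x[0], 0.002                 # CFL ~ 0.6
      for _ in range(200):
          u = step(u, dx, dt)
      print(u.min(), u.max())                     # stable, non-oscillating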

  15. Simulation of rice plant temperatures using the UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA)

    NASA Astrophysics Data System (ADS)

    Maruyama, A.; Pyles, D.; Paw U, K.

    2009-12-01

    The thermal environment in a plant canopy affects growth processes such as flowering and ripening. High temperatures often cause grain sterility and poor filling in cereal crops and reduce production in tropical and temperate regions. With global warming predicted, these effects have become a major concern worldwide. In this study, we observed plant body temperature profiles in a rice canopy and simulated them using a higher-order closure micrometeorological model to understand the relationship between plant temperatures and atmospheric conditions. Experiments were conducted in a rice paddy during the 2007 summer season in a warm temperate climate in Japan. Leaf temperatures at three different heights (0.3, 0.5, 0.7 m) and panicle temperatures at 0.9 m were measured using fine thermocouples. The UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) was used to calculate plant body temperature profiles in the canopy. ACASA is based on radiation transfer, higher-order closure of the turbulence equations for mass and heat exchange, and detailed plant physiological parameterization of the canopy-atmosphere-soil system. Water temperature was almost constant at 21-23 C throughout the summer because of continuous irrigation; therefore, a larger difference between the air temperature at 2 m and the water temperature was found during the daytime. Observed leaf/panicle temperatures were lower near the water surface and higher in the upper layers of the canopy. The temperature difference between 0.3 m and 0.9 m was around 3-4 C during the daytime and around 1-2 C at night. The ACASA calculations reproduced these trends in the plant temperature profile well. However, the simulated relationship between plant and air temperature in the canopy differed somewhat from the observations: observed leaf/panicle temperatures were almost the same as the air temperature, whereas the simulated air temperature was 0.5-1.5 C higher than the plant temperatures during both daytime and nighttime.

  16. Application of artificial neural network coupled with genetic algorithm and simulated annealing to solve groundwater inflow problem to an advancing open pit mine

    NASA Astrophysics Data System (ADS)

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest

    2016-05-01

    In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling an artificial neural network (ANN) with genetic algorithms (ANN-GA) and with simulated annealing (ANN-SA) were utilised. The ratio of the depth of pit penetration in the aquifer to the aquifer thickness, the ratio of the pit bottom radius to its top radius, the inverse of the pit advance time, and the ratio of the HH in the observation wells to the distance of the wells from the centre of the pit were used as inputs to the networks. To achieve the objective, two hybrid models, ANN-GA and ANN-SA, with a 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of the two earlier models, two new models were developed to predict the HH in the observation wells over the period of the mining process. The accuracy and reliability of the models are verified against field data, the results of a numerical finite element model using SEEP/W, the outputs of simple ANNs, and some well-known analytical solutions. Results predicted by the hybrid methods are closer to the field data than the outputs of the analytical and simple ANN models. The results show that, despite using fewer and simpler parameters, the ANN-GA and, to some extent, the ANN-SA models can compete with the numerical models.
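
    One common way to couple an ANN with a GA, sketched below under stated assumptions, is to let the GA evolve the flattened weight vector of a fixed 4-5-3-1 network (the layout cited in the abstract); the selection, crossover, and mutation settings are illustrative, not the authors' configuration.

      import numpy as np

      rng = np.random.default_rng(1)

      def forward(w, x, sizes=(4, 5, 3, 1)):
          # Tiny feedforward net with the 4-5-3-1 layout; w is a flat vector
          # holding all weights and biases.
          i = 0
          for n_in, n_out in zip(sizes[:-1], sizes[1:]):
              W = w[i:i + n_in * n_out].reshape(n_in, n_out); i += n_in * n_out
              b = w[i:i + n_out]; i += n_out
              x = np.tanh(x @ W + b)
          return x

      def ga_train(X, y, n_weights, pop=60, gens=200, sigma=0.1):
          # Minimal GA: truncation selection, blend crossover, Gaussian
          # mutation, elitism. Fitness is the network's mean squared error.
          P = rng.normal(0.0, 1.0, (pop, n_weights))
          def mse(w):
              return np.mean((forward(w, X).ravel() - y) ** 2)
          for _ in range(gens):
              fit = np.array([mse(w) for w in P])
              elite = P[np.argsort(fit)[:pop // 2]]
              moms = elite[rng.integers(len(elite), size=pop)]
              dads = elite[rng.integers(len(elite), size=pop)]
              alpha = rng.random((pop, 1))
              P = alpha * moms + (1 - alpha) * dads   # blend crossover
              P += rng.normal(0.0, sigma, P.shape)    # Gaussian mutation
              P[0] = elite[0]                         # keep the best so far
          return min(P, key=mse)

      # Synthetic regression stand-in: 4 inputs, 1 bounded output.
      X = rng.random((100, 4))
      y = np.tanh(X.sum(axis=1) - 2.0)
      n_weights = 4 * 5 + 5 + 5 * 3 + 3 + 3 * 1 + 1   # = 47 for 4-5-3-1
      w = ga_train(X, y, n_weights)
      print(np.mean((forward(w, X).ravel() - y) ** 2))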

  17. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), from the tumour volume, the tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated on 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications in radiation oncology.
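
    A toy version of the selection idea, under assumed features and with only two stand-in segmentation methods (ATLAAS uses nine): one regression tree per method predicts its DSC from the tumour characteristics, and the method with the highest prediction is chosen.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(0)

      # Synthetic stand-in for the training set: features are tumour volume,
      # peak-to-background SUV ratio, and a texture metric; targets are the
      # DSC each method achieved on scans with known true contours.
      X = rng.random((100, 3)) * [50.0, 10.0, 1.0]
      dsc_per_method = {
          "thresholding": 0.6 + 0.3 * (X[:, 1] > 4) + rng.normal(0, 0.02, 100),
          "region_grow":  0.6 + 0.3 * (X[:, 0] > 25) + rng.normal(0, 0.02, 100),
      }

      models = {name: DecisionTreeRegressor(max_depth=3).fit(X, dsc)
                for name, dsc in dsc_per_method.items()}

      def select_method(features):
          # Pick the method whose predicted DSC is highest for this tumour.
          preds = {name: m.predict([features])[0] for name, m in models.items()}
          return max(preds, key=preds.get)

      print(select_method([30.0, 2.0, 0.5]))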

  18. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), from the tumour volume, the tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated on 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications in radiation oncology. PMID:27273293

  19. Recent advances in numerical simulation and control of asymmetric flows around slender bodies

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Wong, Tin-Chee; Sharaf, Hazem H.; Liu, C. H.

    1992-01-01

    The problems of asymmetric flow around slender bodies and its control are formulated using the unsteady, compressible, thin-layer or full Navier-Stokes equations, which are solved using an implicit, flux-difference splitting, finite-volume scheme. The problem is numerically simulated for both locally-conical and three-dimensional flows. The numerical applications include studies of the effects of relative incidence, Mach number and Reynolds number on the flow asymmetry. For the control of flow asymmetry, the numerical simulations cover passive and active control methods. For passive control, the effectiveness of vertical fins placed in the leeward plane of geometric symmetry and of side strakes with different orientations is studied. For active control, the effectiveness of normal and tangential flow injection, surface heating, and a combination of these methods is studied.

  20. A review of recent advances in numerical simulations of microscale fuel processor for hydrogen production

    NASA Astrophysics Data System (ADS)

    Holladay, J. D.; Wang, Y.

    2015-05-01

    Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer's small size, numerical simulations are critical for understanding the heat and mass transfer phenomena occurring in these systems and for guiding further improvements. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilize methanol as the fuel due to methanol's low reforming temperature and high conversion, although there are several methane-fueled systems. Increased computational power and more complex codes have led to improved accuracy of the numerical simulations. Initial models focused on the reformer, while more recently the simulations have begun to include other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next generation of systems. The systems reviewed include plate reactors, microchannel reactors, and annulus reactors, for both wash-coated and packed-bed systems.

  1. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGESBeta

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  2. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
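
    The following sketch illustrates the subcycling idea only, not the authors' high-order integrator or collision detection: a slow variable takes one global step while a fast (stiff) variable is advanced through many smaller substeps over the same interval. The dynamics and step sizes are made up.

      import numpy as np

      def rhs_slow(y_slow, y_fast, t):
          return -0.1 * y_slow          # slow dynamics
      def rhs_fast(y_fast, y_slow, t):
          return -50.0 * y_fast         # fast/stiff dynamics (e.g. close nodes)

      def step_subcycled(y_slow, y_fast, t, dt, n_sub=20):
          # One explicit Euler step for the slow group; n_sub substeps of
          # size dt/n_sub for the fast group across the same interval.
          y_slow_new = y_slow + dt * rhs_slow(y_slow, y_fast, t)
          h = dt / n_sub
          for k in range(n_sub):
              y_fast = y_fast + h * rhs_fast(y_fast, y_slow, t + k * h)
          return y_slow_new, y_fast

      y_s, y_f, t, dt = 1.0, 1.0, 0.0, 0.02
      for _ in range(100):
          y_s, y_f = step_subcycled(y_s, y_f, t, dt)
          t += dt
      print(y_s, y_f)   # compare with exp(-0.1 * 2) and exp(-50 * 2)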

  3. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1984-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation. Previously announced in STAR as N84-13140

  4. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  5. The parallelization of an advancing-front, all-quadrilateral meshing algorithm for adaptive analysis

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Cairncross, R.A.

    1995-11-01

    The ability to perform effective adaptive analysis has become a critical issue in the area of physical simulation. Of the multiple technologies required to realize a parallel adaptive analysis capability, automatic mesh generation is an enabling technology, filling a critical need in the appropriate discretization of a problem domain. The paving algorithm's unique ability to generate a function-following quadrilateral grid is a substantial advantage in Sandia's pursuit of a modified h-method adaptive capability. This characteristic, combined with a strong transitioning ability, allows the paving algorithm to place elements where an error function indicates more mesh resolution is needed. Although the original paving algorithm is highly serial, a two-stage approach has been designed to parallelize the algorithm while retaining the desirable qualities of the serial algorithm. The authors' approach also allows the subdomain decomposition used by the meshing code to be shared with the finite element physics code, eliminating the need for data transfer across the processors between the analysis and remeshing steps. In addition, the meshed subdomains are adjusted with a dynamic load balancer to improve the original decomposition and maintain load efficiency each time the mesh has been regenerated. This initial parallel implementation assumes an approach of restarting the physics problem from time zero at each iteration, with a refined mesh adapting to the previous iteration's objective function. The remeshing tools are being developed to enable real-time remeshing and geometry regeneration. Progress on the redesign of the paving algorithm for parallel operation is discussed, including extensions allowing adaptive control and geometry regeneration.

  6. Accurate Prediction of Advanced Liver Fibrosis Using the Decision Tree Learning Algorithm in Chronic Hepatitis C Egyptian Patients

    PubMed Central

    Hashem, Somaya; Esmat, Gamal; Elakel, Wafaa; Habashy, Shahira; Abdel Raouf, Safaa; Darweesh, Samar; Soliman, Mohamad; Elhefnawi, Mohamed; El-Adawy, Mohamed; ElHefnawi, Mahmoud

    2016-01-01

    Background/Aim. In step with the worldwide prevalence of chronic hepatitis C, the use of noninvasive methods as an alternative to biopsy for staging chronic liver disease is increasing significantly. The aim of this study is to combine serum biomarkers and clinical information to develop a classification model that can predict advanced liver fibrosis. Methods. 39,567 patients with chronic hepatitis C were included and randomly divided into two separate sets. Liver fibrosis was assessed via the METAVIR score; patients were categorized as mild to moderate (F0–F2) or advanced (F3–F4) fibrosis stages. Two models were developed using the alternating decision tree algorithm. Model 1 uses six parameters, while model 2 uses four, which are similar to the FIB-4 features except for alpha-fetoprotein instead of alanine aminotransferase. Sensitivity and the receiver operating characteristic curve were used to evaluate the performance of the proposed models. Results. The best model achieved an 86.2% negative predictive value and 0.78 ROC with 84.8% accuracy, which is better than FIB-4. Conclusions. The risk of advanced liver fibrosis due to chronic hepatitis C can be predicted with high accuracy using the decision tree learning algorithm, which could be used to reduce the need for liver biopsy. PMID:26880886

  7. Accurate Prediction of Advanced Liver Fibrosis Using the Decision Tree Learning Algorithm in Chronic Hepatitis C Egyptian Patients.

    PubMed

    Hashem, Somaya; Esmat, Gamal; Elakel, Wafaa; Habashy, Shahira; Abdel Raouf, Safaa; Darweesh, Samar; Soliman, Mohamad; Elhefnawi, Mohamed; El-Adawy, Mohamed; ElHefnawi, Mahmoud

    2016-01-01

    Background/Aim. In step with the worldwide prevalence of chronic hepatitis C, the use of noninvasive methods as an alternative to biopsy for staging chronic liver disease is increasing significantly. The aim of this study is to combine serum biomarkers and clinical information to develop a classification model that can predict advanced liver fibrosis. Methods. 39,567 patients with chronic hepatitis C were included and randomly divided into two separate sets. Liver fibrosis was assessed via the METAVIR score; patients were categorized as mild to moderate (F0-F2) or advanced (F3-F4) fibrosis stages. Two models were developed using the alternating decision tree algorithm. Model 1 uses six parameters, while model 2 uses four, which are similar to the FIB-4 features except for alpha-fetoprotein instead of alanine aminotransferase. Sensitivity and the receiver operating characteristic curve were used to evaluate the performance of the proposed models. Results. The best model achieved an 86.2% negative predictive value and 0.78 ROC with 84.8% accuracy, which is better than FIB-4. Conclusions. The risk of advanced liver fibrosis due to chronic hepatitis C can be predicted with high accuracy using the decision tree learning algorithm, which could be used to reduce the need for liver biopsy. PMID:26880886

  8. Numerical assessment of radiation binary targeted therapy for HER-2 positive breast cancers: advanced calculations and radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Sztejnberg Gonçalves-Carralves, Manuel L.; Jevremovic, Tatjana

    2007-07-01

    In our previous publication (Mundy et al 2006 Phys. Med. Biol. 51 1377) we described the theoretical assessment of our novel approach to radiation binary targeted therapy for HER-2 positive breast cancers and summarized future directions in this area of research. In this paper we advance the numerical analysis to show the detailed radiation dose distribution for various neutron sources in combination with the required boron concentration and the allowed radiation skin doses. We once again demonstrate the feasibility of the concept and will use these data and conclusions to begin the experimental verifications.

  9. Development of advanced WTA (Weapon Target Assignment) algorithms for parallel processing. Final report

    SciTech Connect

    Castanon, D.A.

    1989-10-01

    The objective of weapon-target assignment (WTA) in a ballistic missile defense (BMD) system is to determine how defensive weapons should be assigned to boosters and reentry vehicles in order to maximize the survival of assets belonging to the U.S. and allied countries. The implied optimization problem requires consideration of a large number of potential weapon-target assignments in order to select the most effective combination. The resulting WTA optimization problems are among the most complex encountered in mathematical programming. Indeed, simple versions of the WTA problem have been shown to be NP-complete, implying that the computations required to achieve optimal solutions grow exponentially with the number of weapons and targets considered. The computational complexity of the WTA problem has motivated the development of heuristic algorithms that are not altogether satisfactory for use in Strategic Defense Systems (SDS). Some special cases of the WTA problem are not NP-complete and can be solved using standard optimization algorithms such as linear programming and maximum-marginal-return algorithms; these algorithms have low computational requirements and therefore have been adopted as heuristics for solving more general WTA problems. However, experimental studies have demonstrated that these heuristic algorithms lead to significantly suboptimal solutions for certain scenarios.
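
    The maximum-marginal-return heuristic mentioned above admits a compact sketch: each weapon is assigned, greedily, to the target whose expected surviving value it reduces the most. The kill probabilities and target values below are purely illustrative.

      import numpy as np

      def greedy_mmr(values, pkill):
          # Maximum-marginal-return heuristic: assign weapons one at a time
          # to the target with the largest expected value destroyed.
          survival = np.ones_like(values, dtype=float)  # product of (1 - p) so far
          assignment = []
          for k in range(pkill.shape[0]):               # one row per weapon
              marginal = values * survival * pkill[k]   # expected value destroyed
              j = int(marginal.argmax())
              assignment.append(j)
              survival[j] *= 1.0 - pkill[k, j]
          return assignment, float(np.sum(values * survival))

      # Illustrative scenario: 4 weapons, 3 targets.
      values = np.array([10.0, 6.0, 3.0])
      pkill = np.array([[0.7, 0.5, 0.3],
                        [0.7, 0.5, 0.3],
                        [0.4, 0.6, 0.2],
                        [0.4, 0.6, 0.2]])
      assign, remaining = greedy_mmr(values, pkill)
      print(assign, remaining)   # assignment list, expected surviving value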

  10. A review of recent advances of numerical simulations of microscale fuel processors for hydrogen production

    SciTech Connect

    Holladay, Jamelyn D.; Wang, Yong

    2015-05-01

    Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer's small size, numerical simulations are critical for understanding the heat and mass transfer phenomena occurring in the systems. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilized methanol as the fuel due to methanol's low reforming temperature and high conversion, although there are several methane-fueled systems. As computational power has decreased in cost and increased in availability, the codes have increased in complexity and accuracy. Initial models focused on the reformer, while more recently the simulations began including other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next generation systems. The systems reviewed included plate reactors, microchannel reactors, and annulus reactors, for both wash-coated and packed-bed systems.

  11. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
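
    SciPy does not expose the Bareiss variant, but its Levinson-type Toeplitz solver illustrates the fast-solver class whose stability is at issue; the sketch below compares its residual against a general O(n^3) solve on a well-conditioned symmetric positive definite Toeplitz system. The test matrix is an assumption chosen for convenience.

      import numpy as np
      from scipy.linalg import solve_toeplitz, toeplitz

      # Symmetric positive definite Toeplitz system, defined by its first
      # column (an exponentially decaying AR(1)-style autocorrelation).
      n = 500
      col = 0.5 ** np.arange(n)
      b = np.random.default_rng(0).normal(size=n)

      x_fast = solve_toeplitz(col, b)             # O(n^2) Levinson-type solver
      x_ref = np.linalg.solve(toeplitz(col), b)   # O(n^3) general solver

      T = toeplitz(col)
      print("fast residual:", np.linalg.norm(T @ x_fast - b))
      print("ref  residual:", np.linalg.norm(T @ x_ref - b))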

  12. [Adequacy of clinical interventions in patients with advanced and complex disease. Proposal of a decision making algorithm].

    PubMed

    Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A

    2015-01-01

    Decision making for patients with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add further harm to that caused by the disease itself. The adequacy of clinical interventions consists of offering only those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient, and of performing only those permitted by the patient or their representative. In this article, the use of an algorithm is proposed to help health professionals in this decision-making process. PMID:25666087

  13. Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chih; Hsiao, Shen-Fu

    In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with the proposed CSE method achieves a significant area improvement compared with previous designs.
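
    A greatly simplified sketch of bit-level CSE, assuming each output is a plain XOR of input signals: repeatedly extract the most frequently shared pair into a new intermediate signal, which saves one XOR gate per extra occurrence. The expressions below are hypothetical, not the actual AES sub-functions.

      from collections import Counter
      from itertools import combinations

      def greedy_cse(exprs):
          # Each expression is the set of signals XORed together. Extract the
          # most common pair (a, b) into a new signal tN = a ^ b until no pair
          # is shared by at least two expressions.
          exprs = [set(e) for e in exprs]
          new_signals, n = {}, 0
          while True:
              pairs = Counter(p for e in exprs
                              for p in combinations(sorted(e), 2))
              if not pairs or pairs.most_common(1)[0][1] < 2:
                  break
              (a, b), _ = pairs.most_common(1)[0]
              name = f"t{n}"; n += 1
              new_signals[name] = (a, b)
              for e in exprs:
                  if a in e and b in e:
                      e -= {a, b}; e.add(name)
          return new_signals, exprs

      # Hypothetical XOR outputs, e.g. rows of a MixColumns-like matrix.
      outs = [{"x0", "x1", "x2"}, {"x1", "x2", "x3"}, {"x0", "x1", "x2", "x3"}]
      print(greedy_cse(outs))   # shared factors extracted, 7 XORs become 4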

  14. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

    Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with piecewise-linear, least-squares reconstruction, Roe flux-difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation on a sequence of successively finer meshes. Then the accuracy of the Roe flux-difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.

  15. Advanced Techniques for Seismic Protection of Historical Buildings: Experimental and Numerical Approach

    SciTech Connect

    Mazzolani, Federico M.

    2008-07-08

    The seismic protection of historical and monumental buildings, dating from the ancient age up to the 20th century, is attracting greater and greater interest, above all in the Euro-Mediterranean area, whose cultural heritage is strongly susceptible to severe damage or even collapse due to earthquakes. The cultural importance of historical and monumental constructions limits, in many cases, the possibility of upgrading them from the seismic point of view, due to the fear of using intervention techniques that could have detrimental effects on their cultural value. Consequently, great interest is growing in the development of sustainable methodologies for the use of Reversible Mixed Technologies (RMTs) in the seismic protection of existing constructions. RMTs, in fact, are conceived to exploit the peculiarities of innovative materials and special devices, and they allow ease of removal when necessary. This paper deals with the experimental and numerical studies, framed within the EC PROHITECH research project, on the application of RMTs to the historical and monumental constructions mainly belonging to the cultural heritage of the Euro-Mediterranean area. The experimental tests and numerical analyses are carried out at five different levels: full-scale models, large-scale models, sub-systems, devices, and materials and elements.

  16. Numerical Study on Crossflow Printed Circuit Heat Exchanger for Advanced Small Modular Reactors

    SciTech Connect

    Yoon, Su-Jong; Sabharwall, Piyush; Kim, Eung-Soo

    2014-03-01

    Various fluids such as water, gases (helium), molten salts (FLiNaK, FLiBe) and liquid metals (sodium) are used as coolants of advanced small modular reactors (SMRs). The printed circuit heat exchanger (PCHE) has been adopted as the intermediate and/or secondary heat exchanger of SMR systems because it is compact and effective. The size and cost of a PCHE depend on the coolant type of each SMR. In this study, a crossflow PCHE analysis code for advanced small modular reactors has been developed for the thermal design and cost estimation of the heat exchanger. The analytical solution of a single-pass crossflow heat exchanger model with both fluids unmixed was employed to calculate the two-dimensional temperature profile of a crossflow PCHE. The analytical solution was implemented simply using built-in functions of the MATLAB program. The effect of fluid property uncertainty on the calculation results was evaluated. In addition, the effect of heat transfer correlations on the calculated temperature profile was analyzed by taking into account possible combinations of primary and secondary coolants in SMR systems. The size and cost of the heat exchanger were evaluated for the given temperature requirement of each SMR.
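
    The paper derives a full two-dimensional temperature profile; as a simpler illustration of crossflow behaviour with both fluids unmixed, the sketch below uses the standard effectiveness-NTU approximation with made-up conductance and capacity rates (and Python in place of the MATLAB implementation mentioned).

      import numpy as np

      def effectiveness_crossflow_unmixed(ntu, cr):
          # Standard approximation for a single-pass crossflow exchanger with
          # both fluids unmixed (see e.g. Incropera); exact only in limits.
          return 1.0 - np.exp((ntu ** 0.22 / cr)
                              * (np.exp(-cr * ntu ** 0.78) - 1.0))

      # Illustrative operating point, not data from the study.
      UA = 5000.0                      # overall conductance, W/K
      C_hot, C_cold = 2000.0, 3000.0   # capacity rates m_dot * cp, W/K
      C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
      ntu, cr = UA / C_min, C_min / C_max

      eps = effectiveness_crossflow_unmixed(ntu, cr)
      T_hot_in, T_cold_in = 550.0, 300.0             # K
      q = eps * C_min * (T_hot_in - T_cold_in)       # actual heat duty, W
      print(f"effectiveness = {eps:.3f}, duty = {q / 1e3:.1f} kW")
      print(f"hot outlet  = {T_hot_in - q / C_hot:.1f} K")
      print(f"cold outlet = {T_cold_in + q / C_cold:.1f} K")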

  17. Numerical modelling of the groundwater inflow to an advancing open pit mine: Kolahdarvazeh pit, Central Iran.

    PubMed

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Aslani, Soheyla; Baafi, Ernest

    2014-12-01

    The groundwater inflow into a mine during its life and after ceasing operations is one of the most important concerns of the mining industry. This paper presents a hydrogeological assessment of the Irankuh Zn-Pb mine, 20 km south of Esfahan and 1 km northeast of Abnil in west-Central Iran. During mine excavation, the upper impervious bed of a confined aquifer was broken and high-pressure water flowed into the open pit mine associated with the Kolahdarvazeh deposit. The inflow rates were 6.7 and 1.4 m(3)/s at the maximum and minimum, respectively. The permeability, storage coefficient, thickness and initial head of the fully saturated confined aquifer were 3.5 × 10(-4) m/s, 0.2, 30 m and 60 m, respectively. The hydraulic heads as a function of time were monitored at four observation wells in the vicinity of the pit over 19 weeks, and at an observation well near a test well over 21 h. In addition, by measuring the rate of pumping out from the pit sump at a constant head (usually equal to the height of the pit floor), the real inflow rates to the pit were monitored. The main innovations of this work were to compare numerical modelling, using a finite element software called SEEP/W, with the actual inflow data, and to extend the applicability of the numerical model. The model was further used to estimate the hydraulic heads at the observation wells around the pit over 19 weeks during mining operations. Data from a pump-out test and observation wells were used for model calibration and verification. To evaluate the model efficiency, the modelling results for inflow quantity and hydraulic heads were compared with those from analytical solutions as well as the field data. The mean percent error relative to field data for the inflow quantity was 0.108. It varied between 1.16 and 1.46 for the hydraulic head predictions, which is much lower than the mean percent errors resulting from the analytical solutions (from 1.8 to 5…

  18. Advanced numerical modeling and hybridization techniques for third-generation infrared detector pixel arrays

    NASA Astrophysics Data System (ADS)

    Schuster, Jonathan

    Infrared (IR) detectors are well established as a vital sensor technology for military, defense and commercial applications. Due to the expense and effort required to fabricate pixel arrays, it is imperative to develop numerical simulation models to perform predictive device simulations which assess device characteristics and design considerations. Towards this end, we have developed a robust three-dimensional (3D) numerical simulation model for IR detector pixel arrays. We used the finite-difference time-domain technique to compute the optical characteristics including the reflectance and the carrier generation rate in the device. Subsequently, we employ the finite element method to solve the drift-diffusion equations to compute the electrical characteristics including the I(V) characteristics, quantum efficiency, crosstalk and modulation transfer function. We use our 3D numerical model to study a new class of detector based on the nBn-architecture. This detector is a unipolar unity-gain barrier device consisting of a narrow-gap absorber layer, a wide-gap barrier layer, and a narrow-gap collector layer. We use our model to study the underlying physics of these devices and to explain the anomalously long lateral collection lengths for photocarriers measured experimentally. Next, we investigate the crosstalk in HgCdTe photovoltaic pixel arrays employing a photon-trapping (PT) structure realized with a periodic array of pillars intended to provide broadband operation. The PT region drastically reduces the crosstalk; making the use of the PT structures not only useful to obtain broadband operation, but also desirable for reducing crosstalk, especially in small pitch detector arrays. Then, the power and flexibility of the nBn architecture is coupled with a PT structure to engineer spectrally filtering detectors. Last, we developed a technique to reduce the cost of large-format, high performance HgCdTe detectors by nondestructively screen-testing detector arrays prior

  19. Evaluation of Temperature Gradient in Advanced Automated Directional Solidification Furnace (AADSF) by Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1996-01-01

    A numerical model of heat transfer combining conduction, radiation and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot- and cold-zone set-point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reversed usage of the hot and cold zones was simulated to aid the choice of the proper orientation of the crystal/melt interface with respect to the residual acceleration vector, without actually changing the furnace location on board the orbiter. It appears that an additional booster heater would be extremely helpful for ensuring the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages and disadvantages of a symmetrical furnace design (i.e., with similar lengths of the hot and cold zones).

  20. Numerical simulation of the reactive flow in advanced (HSR) combustors using KIVA-2

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1991-01-01

    Recent work has been done with the goal of establishing ultralow-emission aircraft gas turbine combustors. A significant portion of the effort is the development of three-dimensional computational combustor models. The KIVA-II computer code, which is based on the Implicit Continuous Eulerian Difference mesh Arbitrary Lagrangian Eulerian (ICED-ALE) numerical scheme, is one of the codes selected by NASA to achieve these goals. This report involves a simulation of jet injection through slanted slots within the Rich burn/Quick quench/Lean burn (RQL) baseline experimental rig. The RQL combustor distinguishes three regions of combustion. This work specifically focuses on modeling the quick-quench mixer region, in which secondary injection air is introduced radially through 12 equally spaced slots around the mixer circumference. Steady-state solutions are achieved with modifications to the KIVA-II program. Work currently underway will evaluate thermal mixing as a function of injection air velocity and the angle of inclination of the slots.

  1. Numerical simulation of fine blanking process using fully coupled advanced constitutive equations with ductile damage

    NASA Astrophysics Data System (ADS)

    Labergere, C.; Saanouni, K.; Benafia, S.; Galmiche, J.; Sulaiman, H.

    2013-05-01

    This paper presents the modelling and adaptive numerical simulation of the fine blanking process. Thermodynamically-consistent constitutive equations, strongly coupled with ductile damage, together with specific boundary conditions (a particular control of the forces on the blank holder and counterpunch), are presented. This model is implemented into ABAQUS/EXPLICIT via the Vumat user subroutine and connected with an adaptive 2D remeshing procedure. The material parameters are identified for the steel S600MC using experimental tensile tests conducted up to final fracture. A parametric study examines the sensitivity of the punch force and of the fracture surface topology (convex zone, sheared zone, fracture zone and burr) to the process parameters (die radius, die/punch clearance).

  2. Recent advances in methods for numerical solution of O.D.E. initial value problems

    NASA Technical Reports Server (NTRS)

    Bui, T. D.; Oppenheim, A. K.; Pratt, D. T.

    1984-01-01

    In the mathematical modeling of physical systems, it is often necessary to solve an initial value problem (IVP), consisting of a system of ordinary differential equations (ODE). A typical program produces approximate solutions at certain mesh points. Almost all existing codes try to control the local truncation error, while the user is really interested in controlling the true or global error. The present investigation provides a review of recent advances regarding the solution of the IVP, giving particular attention to stiff systems. Stiff phenomena are customarily defined in terms of the eigenvalues of the Jacobian. There are, however, some difficulties connected with this approach. It is pointed out that an estimate of the Lipschitz constant proves to be a very practical way to determine the stiffness of a problem.
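
    A crude sketch of the Lipschitz-constant idea mentioned above: estimate L from a finite-difference Jacobian and call the problem stiff on an interval when L times the interval length is much larger than one. The test problem is a textbook stiff ODE, not one from the review.

      import numpy as np

      def stiffness_estimate(f, y, t, t_span, eps=1e-6):
          # L ~ ||J(t, y)||_2 from a one-sided finite-difference Jacobian;
          # the stiffness ratio is L * (interval length).
          y = np.asarray(y, dtype=float)
          f0 = np.asarray(f(t, y))
          J = np.empty((len(f0), len(y)))
          for i in range(len(y)):
              dy = np.zeros_like(y)
              dy[i] = eps * max(1.0, abs(y[i]))
              J[:, i] = (np.asarray(f(t, y + dy)) - f0) / dy[i]
          L = np.linalg.norm(J, 2)
          return L, L * (t_span[1] - t_span[0])

      # Classic stiff test problem: y' = -1000 (y - cos t).
      f = lambda t, y: np.array([-1000.0 * (y[0] - np.cos(t))])
      L, ratio = stiffness_estimate(f, [1.0], 0.0, (0.0, 1.0))
      print(f"Lipschitz estimate {L:.0f}, stiffness ratio {ratio:.0f}")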

  3. Image analysis algorithms for the advanced radiographic capability (ARC) grating tilt sensor at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Roberts, Randy S.; Bliss, Erlan S.; Rushford, Michael C.; Halpin, John M.; Awwal, Abdul A. S.; Leach, Richard R.

    2014-09-01

    The Advanced Radiographic Capability (ARC) at the National Ignition Facility (NIF) is a laser system designed to produce a sequence of short pulses used to backlight imploding fuel capsules. Laser pulses from a short-pulse oscillator are dispersed in wavelength into long, low-power pulses, injected into the NIF main laser for amplification, and then compressed into high-power pulses before being directed into the NIF target chamber. In the target chamber, the laser pulses hit targets which produce the x-rays used to backlight imploding fuel capsules. Compression of the ARC laser pulses is accomplished with a set of precision-surveyed optical gratings mounted inside vacuum vessels. The tilt of each grating is monitored by a measurement system consisting of a laser diode, camera and crosshair, all mounted in a pedestal outside the vacuum vessel, and a mirror mounted on the back of the grating inside the vacuum vessel. The crosshair is mounted in front of the camera, and a diffraction pattern is formed when it is illuminated with the laser diode beam reflected from the mirror. This diffraction pattern contains information related to relative movements between the grating and the pedestal. Image analysis algorithms have been developed to determine these relative movements. In this paper we elaborate on the features of the diffraction pattern and describe the image analysis algorithms used to monitor grating tilt changes. Experimental results are provided which indicate the high degree of sensitivity achieved by the tilt sensor and image analysis algorithms.
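
    The paper's algorithms are not reproduced here; as a generic illustration of extracting an image shift from a pattern, the sketch below uses FFT phase correlation on a synthetic image, which is one standard way to track translations of a diffraction pattern.

      import numpy as np

      def phase_correlation_shift(img, ref):
          # Estimate the integer-pixel translation of img relative to ref from
          # the peak of the phase-correlation surface (FFT-based).
          cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
          corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
          dy, dx = np.unravel_index(corr.argmax(), corr.shape)
          # Map wrap-around indices to signed shifts.
          if dy > img.shape[0] // 2:
              dy -= img.shape[0]
          if dx > img.shape[1] // 2:
              dx -= img.shape[1]
          return dy, dx

      # Synthetic stand-in for a crosshair diffraction pattern, shifted
      # by (3, -5) pixels relative to the reference frame.
      rng = np.random.default_rng(0)
      ref = rng.random((128, 128))
      img = np.roll(ref, (3, -5), axis=(0, 1))
      print(phase_correlation_shift(img, ref))   # -> (3, -5)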

  4. Science-Based Approach for Advancing Marine and Hydrokinetic Energy: Integrating Numerical Simulations with Experiments

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.

    2011-12-01

    The field of MHK energy is still in its infancy, lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site-specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed, ultimately affecting sediment transport and suspension processes in the wake of these structures. Such effects, however, remain largely unexplored today. In this work a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial-flow hydrokinetic turbine mounted on the bed of a rectangular open channel on a grid with nearly 200 million nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied…

  5. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

    PubMed Central

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-01-01

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  6. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations.

    PubMed

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-08-13

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  7. Recent Advancements In The Numerical Simulation Of Non-Equilibrium Flows With Application To Monatomic Gases

    NASA Astrophysics Data System (ADS)

    Kapper, M. G.; Cambier, J.-L.; Bultel, A.; Magin, T. E.

    2011-05-01

    This paper summarizes our current efforts in developing numerical methods for the study of non-equilibrium, high-enthalpy plasma. We describe the general approach used in the model development, some of the problems to be solved, and benchmarks showing current capabilities. In particular, we review the recent development of a collisional-radiative model coupled with a single-fluid, two-temperature convection model for the transport of shock-heated argon, along with extensions to krypton and xenon. The model is used in a systematic approach to examine the effects of the collision cross sections on the shock structure, including the relaxation layer and subsequent radiative-cooling regime. We review recent results obtained and comparisons with previous experimental results obtained at the University of Toronto's Institute of Aerospace Studies (UTIAS) and the Australian National University (ANU), which serve as benchmarks for the model. We also show results when unsteady and multi-dimensional effects are included, highlighting the importance of coupling between convective transport and kinetic processes in nonequilibrium flows. We then look at extending the model to both nozzle and external flows to study expansion regimes.

  8. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    NASA Technical Reports Server (NTRS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which limit accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19, and MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, offering much greater temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and additionally three channels in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual occurrence, and the diurnal cycle. Independent comparisons are made with similar data sets derived from other…
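
    A toy version of a brightness-temperature threshold screen, with placeholder channels and thresholds (the study's actual channel set and threshold values are not reproduced here):

      import numpy as np

      def hail_flag(tb89, tb54, tb89_max=230.0, surface_min=248.0):
          # Deep convection depresses the high-frequency channel via
          # scattering by large ice/hail, while a mid-tropospheric channel
          # screens out cold surfaces. Both thresholds are placeholders.
          return (tb89 < tb89_max) & (tb54 > surface_min)

      tb89 = np.array([180.0, 250.0, 210.0])  # hypothetical 89-GHz Tb, K
      tb54 = np.array([255.0, 255.0, 240.0])  # hypothetical 54-GHz Tb, K
      print(hail_flag(tb89, tb54))            # -> [ True False False]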

  9. Numerical Simulations of Optical Turbulence Using an Advanced Atmospheric Prediction Model: Implications for Adaptive Optics Design

    NASA Astrophysics Data System (ADS)

    Alliss, R.

    2014-09-01

    Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure parameter Cn2, which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn2 using empirical models, Cn2 can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn2 climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed on a multi-node Linux cluster with Intel processors. The WRF model is configured to run at 1 km horizontal resolution and centered on the Mauna Loa Observatory (MLO) on the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification
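
    The two diagnostic steps named above can be sketched as follows, assuming the standard optical relation n - 1 ~ 79e-6 P/T^2 and the usual Fried integral for zenith propagation; the constant-Cn2 profile is a toy illustration, not WRF output:

        import numpy as np

        def cn2_from_ct2(P_hPa, T_K, CT2):
            """Refractive-index structure parameter from the temperature
            structure parameter CT2, via n - 1 ~ 79e-6 * P / T^2."""
            return (79e-6 * P_hPa / T_K**2) ** 2 * CT2

        def fried_r0(cn2, dz, wavelength=0.5e-6):
            """Fried coherence length for zenith propagation:
            r0 = [0.423 k^2 * integral(Cn2 dz)]^(-3/5)."""
            k = 2.0 * np.pi / wavelength
            return (0.423 * k**2 * np.sum(cn2) * dz) ** (-0.6)

        # Toy profile: constant Cn2 of 1e-16 m^(-2/3) over a 10-km path.
        cn2 = np.full(400, 1e-16)
        print(f"r0 = {fried_r0(cn2, dz=25.0):.3f} m")   # roughly 0.08 m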

  10. Advancing Satellite-Based Flood Prediction in Complex Terrain Using High-Resolution Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Nikolopoulos, E. I.; Bartsotas, N. S.

    2015-12-01

    Floods constitute one of the most significant and frequent natural hazards in mountainous regions. Satellite-based precipitation products offer in many cases the only available source of quantitative precipitation estimates (QPE). However, satellite-based QPE over complex terrain suffer from significant biases that limit their applicability for hydrologic modeling. In this work we investigate the potential of a new correction procedure, which involves the use of high-resolution numerical weather prediction (NWP) model simulations to adjust satellite QPE. Adjustment is based on matching the probability density functions of the satellite and NWP (used as reference) precipitation distributions. The impact of the correction procedure on simulating the hydrologic response is examined for 15 storm events that generated floods over the mountainous Upper Adige region of Northern Italy. Atmospheric simulations were performed at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS). The proposed error correction procedure was then applied to the widely used TRMM 3B42 satellite precipitation product, and the evaluation of the correction was based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge per 70 km2) available in the study area. Satellite QPE, before and after correction, are used to simulate flood response using ARFFS (Adige River Flood Forecasting System), a semi-distributed hydrologic model, which is used for operational flood forecasting in the region. Results showed that the bias in satellite QPE before correction was significant and had a tremendous impact on the simulated flood peak; however, the correction procedure was able to reduce the bias in QPE and therefore considerably improve the simulated flood hydrograph.
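
    A minimal sketch of the pdf-matching adjustment, assuming a simple empirical quantile mapping of satellite values onto the NWP reference distribution (the paper's exact matching procedure may differ in detail):

        import numpy as np

        def quantile_match(satellite, reference):
            """Adjust satellite QPE so its empirical CDF matches the
            reference (here, NWP) precipitation distribution."""
            sat = np.asarray(satellite, dtype=float)
            ref = np.asarray(reference, dtype=float)
            # Empirical non-exceedance probability of each satellite value.
            probs = np.argsort(np.argsort(sat)) / (len(sat) - 1.0)
            # Map each probability onto the reference quantile function.
            return np.quantile(ref, probs)

        rng = np.random.default_rng(0)
        nwp = rng.gamma(2.0, 5.0, 1000)        # reference distribution
        sat = 0.6 * rng.gamma(2.0, 5.0, 1000)  # biased satellite estimates
        corrected = quantile_match(sat, nwp)
        print(sat.mean(), corrected.mean(), nwp.mean())  # bias is removed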

  11. LDSS-P: an advanced algorithm to extract functional short motifs associated with coordinated gene expression

    PubMed Central

    Ichida, Hiroyuki; Long, Sharon R.

    2016-01-01

    Identifying functional elements in promoter sequences is a major goal in computational and experimental genome biology. Here, we describe an algorithm, Local Distribution of Short Sequences for Prokaryotes (LDSS-P), to identify conserved short motifs located at specific positions in the promoters of co-expressed prokaryotic genes. As a test case, we applied this algorithm to a symbiotic nitrogen-fixing bacterium, Sinorhizobium meliloti. The LDSS-P profiles that overlap with the 5′ section of the extracytoplasmic function RNA polymerase sigma factor RpoE2 consensus sequences displayed a sharp peak between -34 and -32 relative to transcription start site (TSS) positions. The corresponding genes overlap significantly with RpoE2 targets identified from previous experiments. We further identified several groups of genes that are co-regulated with characterized marker genes. Our data indicate that in S. meliloti, and possibly in other Rhizobiaceae species, the master cell cycle regulator CtrA may recognize an expanded motif (AACCAT), which is positionally shifted from the previously reported CtrA consensus sequence in Caulobacter crescentus. Bacterial one-hybrid experiments showed that base substitutions in the expanded motif either increase or decrease binding by CtrA. These results show the effectiveness of LDSS-P as a method to delineate functional promoter elements. PMID:27190233
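
    The core positional bookkeeping behind an LDSS-style analysis can be sketched as below: tally where each k-mer occurs relative to the TSS across a promoter set, and look for sharp positional peaks. This is a simplified illustration of the idea, not the published LDSS-P scoring; the two promoter strings are made-up examples.

        from collections import defaultdict

        def positional_motif_profiles(promoters, k=6):
            """For every k-mer, record its occurrence positions relative to
            the TSS (taken as the end of each promoter string)."""
            profiles = defaultdict(lambda: defaultdict(int))
            for seq in promoters:
                L = len(seq)
                for i in range(L - k + 1):
                    kmer = seq[i:i + k]
                    profiles[kmer][i - L] += 1  # negative = upstream of TSS
            return profiles

        promoters = ["ATGCAACCATTTGACA", "GGAACCATCCTTGACA"]
        prof = positional_motif_profiles(promoters)
        print(dict(prof["AACCAT"]))  # upstream positions where AACCAT occurs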

  12. LDSS-P: an advanced algorithm to extract functional short motifs associated with coordinated gene expression.

    PubMed

    Ichida, Hiroyuki; Long, Sharon R

    2016-06-20

    Identifying functional elements in promoter sequences is a major goal in computational and experimental genome biology. Here, we describe an algorithm, Local Distribution of Short Sequences for Prokaryotes (LDSS-P), to identify conserved short motifs located at specific positions in the promoters of co-expressed prokaryotic genes. As a test case, we applied this algorithm to a symbiotic nitrogen-fixing bacterium, Sinorhizobium meliloti. The LDSS-P profiles that overlap with the 5' section of the extracytoplasmic function RNA polymerase sigma factor RpoE2 consensus sequences displayed a sharp peak between -34 and -32 relative to transcription start site (TSS) positions. The corresponding genes overlap significantly with RpoE2 targets identified from previous experiments. We further identified several groups of genes that are co-regulated with characterized marker genes. Our data indicate that in S. meliloti, and possibly in other Rhizobiaceae species, the master cell cycle regulator CtrA may recognize an expanded motif (AACCAT), which is positionally shifted from the previously reported CtrA consensus sequence in Caulobacter crescentus. Bacterial one-hybrid experiments showed that base substitutions in the expanded motif either increase or decrease binding by CtrA. These results show the effectiveness of LDSS-P as a method to delineate functional promoter elements. PMID:27190233

  13. Advancing Efficient All-Electron Electronic Structure Methods Based on Numeric Atom-Centered Orbitals for Energy Related Materials

    NASA Astrophysics Data System (ADS)

    Blum, Volker

    This talk describes recent advances in a general, efficient, accurate all-electron electronic structure theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark-quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid-functional-based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.

  14. A Numerical Approach to Ion Channel Modelling Using Whole-Cell Voltage-Clamp Recordings and a Genetic Algorithm

    PubMed Central

    Gurkiewicz, Meron; Korngreen, Alon

    2007-01-01

    The activity of trans-membrane proteins such as ion channels is the essence of neuronal transmission. The currently most accurate method for determining ion channel kinetic mechanisms is single-channel recording and analysis. Yet, the limitations and complexities in interpreting single-channel recordings discourage many physiologists from using them. Here we show that a genetic search algorithm in combination with a gradient descent algorithm can be used to fit whole-cell voltage-clamp data to kinetic models with a high degree of accuracy. Previously, ion channel stimulation traces were analyzed one at a time, the results of these analyses being combined to produce a picture of channel kinetics. Here the entire set of traces from all stimulation protocols is analyzed simultaneously. The algorithm was initially tested on simulated current traces produced by several Hodgkin-Huxley-like and Markov chain models of voltage-gated potassium and sodium channels. Currents were also produced by simulating levels of noise expected from actual patch recordings. Finally, the algorithm was used for finding the kinetic parameters of several voltage-gated sodium and potassium channel models by matching its results to data recorded from layer 5 pyramidal neurons of the rat cortex in the nucleated outside-out patch configuration. The minimization scheme gives electrophysiologists a tool for reproducing and simulating voltage-gated ion channel kinetics at the cellular level. PMID:17784781
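
    A minimal sketch of the hybrid strategy on a toy steady-state activation curve, standing in for the full Markov-model fit: a genetic search narrows the parameter region, then a gradient step polishes the estimate. The model form, noise level, and GA settings are illustrative assumptions, not the authors' configuration.

        import numpy as np

        rng = np.random.default_rng(1)

        def model_current(params, v):
            """Toy steady-state I-V curve of a voltage-gated channel:
            I = (V - E_rev) / (1 + exp(-(V - Vh)/k)), with E_rev = -90 mV
            and unit maximal conductance (assumed values)."""
            vh, k = params
            return (v + 90.0) / (1.0 + np.exp(-(v - vh) / k))

        v = np.arange(-80.0, 41.0, 10.0)
        target = model_current((-25.0, 8.0), v) + rng.normal(0.0, 0.2, v.size)

        def cost(p):
            return np.mean((model_current(p, v) - target) ** 2)

        # Genetic search: elitist selection plus Gaussian mutation.
        pop = rng.uniform([-60.0, 2.0], [10.0, 20.0], size=(60, 2))
        for _ in range(100):
            elite = pop[np.argsort([cost(p) for p in pop])[:10]]
            pop = np.repeat(elite, 6, axis=0) + rng.normal(0.0, 0.5, (60, 2))
            pop[:, 1] = np.clip(pop[:, 1], 0.5, None)  # keep slope positive
        best = pop[np.argmin([cost(p) for p in pop])]

        # Gradient polish with a central-difference gradient.
        for _ in range(200):
            g = np.array([(cost(best + e) - cost(best - e)) / 2e-4
                          for e in 1e-4 * np.eye(2)])
            best = best - 0.02 * g
        print(best)  # close to the true (Vh, k) = (-25, 8)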

  15. Numerical Viscous Flow Analysis of an Advanced Semispan Diamond-Wing Model at High-Lift Conditions

    NASA Technical Reports Server (NTRS)

    Ghaffari, F.; Biedron, R. T.; Luckring, J. M.

    2002-01-01

    Turbulent Navier-Stokes computational results are presented for an advanced diamond wing semispan model at low-speed, high-lift conditions. The numerical results are obtained in support of a wind-tunnel test that was conducted in the National Transonic Facility (NTF) at the NASA Langley Research Center. The model incorporated a generic fuselage and was mounted on the tunnel sidewall using a constant-width standoff. The analyses include: (1) the numerical simulation of the empty-tunnel flow characteristics of the NTF; (2) the semispan high-lift model with the standoff in the tunnel environment; (3) the semispan high-lift model with the standoff and viscous sidewall in free air; and (4) the semispan high-lift model without the standoff in free air. The computations were performed at conditions that correspond to a nominal approach and landing configuration. The wing surface pressure distributions computed for the model both in the tunnel and in free air agreed well with the corresponding experimental data, and both indicated small increments due to the wall interference effects. However, the wall interference effects were found to be more pronounced in the total measured and computed lift, drag, and pitching moment due to standoff-induced up-flow effects. Although the magnitudes of the computed forces and moment were slightly off compared to the measured data, the increments due to the wall interference effects were predicted well. Numerical predictions are also presented on the combined effects of the tunnel sidewall boundary layer and the standoff geometry on the fuselage fore-body pressure distributions and the resulting impact on the overall longitudinal aerodynamic characteristics of the configuration.

  16. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.
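
    The contrast with divided differences can be illustrated with a minimal forward-mode AD sketch: a dual number carries a value and a derivative through arithmetic, so the sensitivity is exact to machine precision and needs no step-size tuning. The toy function below is illustrative and stands in for the flow solver's dependence on a damping coefficient.

        class Dual:
            """Forward-mode AD: propagate (value, derivative) pairs."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val,
                            self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def toy_output(c2, u):
            # Hypothetical stand-in for a quantity depending on a damping
            # coefficient c2: f = c2*u + c2^2*u.
            return c2 * u + c2 * c2 * u

        c2 = Dual(0.25, 1.0)  # seed the derivative of the input of interest
        print(toy_output(c2, 2.0).der)  # exact d f/d c2 = u + 2*c2*u = 3.0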

  17. Dynamics of Numerics and CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    This lecture attempts to illustrate the basic ideas of how recent advances in nonlinear dynamical systems theory (dynamics) can provide new insights into the understanding of numerical algorithms used in solving nonlinear differential equations (DEs). Examples will be given of the use of dynamics to explain unusual phenomena that occur in numerics. The inadequacy of linearized analysis for the understanding of the long-time behavior of nonlinear problems will be illustrated, and the role of dynamics in studying the nonlinear stability, accuracy, convergence properties, and efficiency of using time-dependent approaches to obtain steady-state numerical solutions in computational fluid dynamics (CFD) will briefly be explained.
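
    A classic illustration of this interplay, given here as a minimal sketch rather than an example from the lecture itself: explicit Euler applied to the logistic equation u' = u(1-u). The continuous problem relaxes to u = 1 from any positive start, but the discrete map has its own dynamics and, for large enough steps, settles on a spurious periodic orbit that is not a solution of the ODE.

        def euler_logistic(u0, h, nsteps=2000, keep=8):
            """Explicit Euler for u' = u(1-u); return the last few iterates."""
            u = u0
            for _ in range(nsteps):
                u = u + h * u * (1.0 - u)
            tail = []
            for _ in range(keep):
                u = u + h * u * (1.0 - u)
                tail.append(round(u, 4))
            return tail

        print(euler_logistic(0.3, h=0.5))  # converges to the true steady state 1
        print(euler_logistic(0.3, h=2.2))  # spurious period-2 orbit of the map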

  18. Facilitation of Third-party Development of Advanced Algorithms for Explosive Detection Using Workshops and Grand Challenges

    SciTech Connect

    Martz, H E; Crawford, C R; Beaty, J S; Castanon, D

    2011-02-15

    The US Department of Homeland Security (DHS) has requirements for future explosive detection scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates, and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. The purposes of this presentation are to review DHS's objectives for involving third parties in the development of advanced algorithms and then to discuss how these objectives are achieved using workshops and grand challenges. Terrorists are still trying, and they are becoming more sophisticated. There is a need to increase the number of smart people working on homeland security. Augmenting the capabilities and capacities of system vendors with third parties is one tactic. Third parties can be accessed via workshops and grand challenges. Successes have been achieved to date. There are issues that need to be resolved to further increase third-party involvement.

  19. A spectral tau algorithm based on Jacobi operational matrix for numerical solution of time fractional diffusion-wave equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Doha, E. H.; Baleanu, D.; Ezz-Eldien, S. S.

    2015-07-01

    In this paper, an efficient and accurate spectral numerical method is presented for solving second- and fourth-order fractional diffusion-wave equations and fractional wave equations with damping. The proposed method is based on the Jacobi tau spectral procedure together with the Jacobi operational matrix for fractional integrals, described in the Riemann-Liouville sense. The main characteristic of this approach is to reduce such problems to systems of algebraic equations in the unknown expansion coefficients of the sought-for spectral approximations. The validity and effectiveness of the method are demonstrated by solving five numerical examples. Numerical examples are presented in the form of tables and graphs to make comparisons with the results obtained by other methods, and with the exact solutions, easier.
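
    The Riemann-Liouville building block that such an operational matrix encodes can be checked numerically. A minimal sketch, comparing the closed form I^a t^k = Gamma(k+1)/Gamma(k+a+1) t^(k+a) against direct quadrature of the definition (this verifies the ingredient, not the full tau scheme):

        import numpy as np
        from scipy.special import gamma
        from scipy.integrate import quad

        def rl_integral_monomial(alpha, k, t):
            """Riemann-Liouville fractional integral of t^k, closed form."""
            return gamma(k + 1) / gamma(k + alpha + 1) * t ** (k + alpha)

        def rl_integral_quadrature(alpha, f, t):
            """Direct quadrature of
            (1/Gamma(alpha)) * integral_0^t (t-s)^(alpha-1) f(s) ds."""
            val, _ = quad(lambda s: (t - s) ** (alpha - 1) * f(s), 0.0, t)
            return val / gamma(alpha)

        alpha, k, t = 0.5, 2, 1.5
        print(rl_integral_monomial(alpha, k, t))
        print(rl_integral_quadrature(alpha, lambda s: s ** k, t))  # agrees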

  20. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean squared error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
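
    A minimal sketch of an α-shape volume computation, assuming the common construction that keeps Delaunay tetrahedra whose circumradius is below 1/α (larger α gives a finer fit); the published method's exact conventions and regression pipeline are not reproduced here:

        import numpy as np
        from scipy.spatial import Delaunay

        def alpha_shape_volume(points, alpha):
            """Volume of the union of Delaunay tetrahedra with
            circumradius < 1/alpha, for an (n, 3) point cloud."""
            tri = Delaunay(points)
            total = 0.0
            for simplex in tri.simplices:
                a, b, c, d = points[simplex]
                m = np.array([b - a, c - a, d - a])
                vol = abs(np.linalg.det(m)) / 6.0
                # Circumcenter (relative to a) from equidistance conditions.
                rhs = 0.5 * np.sum(m * m, axis=1)
                center = np.linalg.solve(m, rhs)
                if np.linalg.norm(center) < 1.0 / alpha:  # circumradius test
                    total += vol
            return total

        pts = np.random.default_rng(2).uniform(0.0, 1.0, (500, 3))
        print(alpha_shape_volume(pts, alpha=2.0))  # close to 1.0 (unit cube)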

  1. Advanced order management in ERM systems: the tic-tac-toe algorithm

    NASA Astrophysics Data System (ADS)

    Badell, Mariana; Fernandez, Elena; Puigjaner, Luis

    2000-10-01

    The concept behind improved enterprise resource planning (ERP) systems is the overall integration of the whole enterprise functionality into the management system through financial links. Converting current software into real management decision tools requires crucial changes in the current approach to ERP systems. This evolution must be able to incorporate technological achievements both properly and in time. The exploitation phase of plants needs an open web-based environment for collaborative business-engineering with on-line schedulers. Today's short lifecycles of products and processes require sharp and finely tuned management actions that must be guided by scheduling tools. Additionally, such actions must be able to keep track of money movements related to supply chain events. Thus, the necessary outputs require financial-production integration at the scheduling level, as proposed in the new approach of enterprise management systems (ERM). Within this framework, the economic analysis of the due-date policy and its optimization become essential to dynamically manage realistic and optimal delivery dates with a price-time trade-off during marketing activities. In this work we propose a scheduling tool with a web-based interface conducted by autonomous agents, for use when precise economic information on plant and business actions and their effects is provided. It aims to attain a better arrangement of marketing and production events in order to face the bid/bargain process during e-commerce. Additionally, management systems require real-time execution and an efficient transaction-oriented approach capable of dynamically adopting realistic and optimal actions to support marketing management. To this end the TicTacToe algorithm provides sequence optimization with acceptable tolerances in realistic time.

  2. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1981-01-01

    A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
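
    The way a precomputed Curtis matrix enters the cooling-rate calculation can be sketched in one line, h = C B(T): the heating rate at each level is a weighted sum of the Planck source function at all levels. The matrix below is a random placeholder standing in for the precomputed CO2 band matrix, so only the structure of the computation is meaningful:

        import numpy as np

        h, c, kb = 6.626e-34, 2.998e8, 1.381e-23

        def planck_15um(T, lam=15e-6):
            """Planck radiance at 15 microns for temperature T (K)."""
            return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kb * T))

        nlev = 40
        T = np.linspace(270.0, 200.0, nlev)      # temperature profile (K)
        # Placeholder Curtis matrix; the real one encodes band exchange
        # between every pair of levels and is precomputed as described above.
        C = np.random.default_rng(3).normal(0.0, 1e-3, (nlev, nlev))
        heating = C @ planck_15um(T)             # heating/cooling profile
        print(heating[:4])                       # units depend on C's scaling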

  3. A Numerical Algorithm to Calculate the Pressure Distribution of the TPS Front End Due to Desorption Induced by Synchrotron Radiation

    SciTech Connect

    Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.

    2010-06-23

    The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) foresees photon-induced outgassing from both bending-magnet and insertion-device radiation. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.
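
    A minimal sketch of the kind of one-dimensional pressure balance such a calculation solves, the standard conductance-pumping-outgassing equation w P'' = s P - q with pumps at the ends; all parameter values below are placeholders, not TPS numbers:

        import numpy as np

        def pressure_profile(L=10.0, n=200, w=100.0, s=0.5, q=1e-9):
            """Steady 1-D balance w*P'' = s*P - q along a beam pipe.
            w: axial specific conductance (m^4/s, placeholder value)
            s: distributed pumping speed per unit length (m^3/s/m)
            q: photon-stimulated outgassing per unit length (Pa m^3/s/m)
            Ends held at a low pump pressure (Dirichlet)."""
            dx = L / (n - 1)
            A = np.zeros((n, n))
            b = np.full(n, -q)
            for i in range(1, n - 1):
                A[i, i - 1] = A[i, i + 1] = w / dx**2
                A[i, i] = -2 * w / dx**2 - s
            A[0, 0] = A[-1, -1] = 1.0
            b[0] = b[-1] = 1e-10      # end pressures at the pumps (Pa)
            return np.linalg.solve(A, b)

        P = pressure_profile()
        print(P.max())  # peak pressure between the pumps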

  4. Numerical analysis of second harmonic generation for THz-wave in a photonic crystal waveguide using a nonlinear FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Saito, Kyosuke; Tanabe, Tadao; Oyama, Yutaka

    2016-04-01

    We present a numerical analysis describing the behavior of second harmonic generation (SHG) in the THz regime, taking into account both the linear and the nonlinear optical susceptibility. We employ a nonlinear finite-difference time-domain (nonlinear FDTD) method to simulate the SHG output characteristics of a THz photonic crystal waveguide based on a semi-insulating gallium phosphide crystal. Unique phase-matching conditions, originating from photonic band dispersion with low group velocity, appear and shape the SHG output characteristics. This numerical study provides spectral information on the SHG output in the THz PC waveguide. THz PC waveguides are among the active nonlinear optical devices in the THz regime, and the nonlinear FDTD method is a powerful tool for designing photonic nonlinear THz devices.
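
    A minimal one-dimensional nonlinear FDTD sketch of the mechanism: a chi(2) term in the constitutive relation D = eps0*(eps_r*E + chi2*E^2) transfers energy from a fundamental into its second harmonic. Material values, grid, and source are illustrative placeholders, and no photonic crystal structure is included:

        import numpy as np

        eps0, mu0 = 8.854e-12, 4e-7 * np.pi
        nz, nt = 2000, 4000
        dz = 1e-5                        # 10-micron cells
        dt = 0.5 * dz / 3e8              # Courant-stable time step
        eps_r, chi2 = 9.0, 1e-9          # placeholder material values
        f0 = 1e12                        # 1-THz fundamental
        E, H, D = np.zeros(nz), np.zeros(nz), np.zeros(nz)
        for n in range(nt):
            H[:-1] += dt / (mu0 * dz) * (E[1:] - E[:-1])
            D[1:] += dt / dz * (H[1:] - H[:-1])
            D[100] += 1e-3 * np.sin(2 * np.pi * f0 * n * dt)  # soft source
            # Recover E from D = eps0*(eps_r*E + chi2*E^2) by fixed point.
            for _ in range(3):
                E = (D / eps0 - chi2 * E**2) / eps_r
        spec = np.abs(np.fft.rfft(E))    # spatial spectrum at the final step
        # Fundamental wavelength ~10 cells (index ~200); SH ~5 cells (~400).
        print(spec[180:220].max(), spec[380:420].max())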

  5. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media

    NASA Technical Reports Server (NTRS)

    Stamnes, Knut; Tsay, S.-CHEE; Jayaweera, Kolf; Wiscombe, Warren

    1988-01-01

    The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.

  6. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media

    NASA Astrophysics Data System (ADS)

    Stamnes, Knut; Tsay, S.-Chee; Jayaweera, Kolf; Wiscombe, Warren

    1988-06-01

    The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.

  7. 3D-radiation hydro simulations of disk-planet interactions. I. Numerical algorithm and test cases

    NASA Astrophysics Data System (ADS)

    Klahr, H.; Kley, W.

    2006-01-01

    We study the evolution of an embedded protoplanet in a circumstellar disk using the 3D radiation hydro code TRAMP, treating the thermodynamics of the gas properly in three dimensions. The primary interest of this work lies in the demonstration and testing of the numerical method. We show how far numerical parameters can influence simulations of gap opening. We study a standard reference model under various numerical approximations. We then compare the commonly used locally isothermal approximation to the radiation hydro simulation using an equation for the internal energy. Models with different treatments of the mass accretion process are compared. Often mass accumulates in the Roche lobe of the planet, creating a hydrostatic atmosphere around the planet. The gravitational torques induced by the spiral pattern of the disk onto the planet are not strongly affected in average magnitude, but the short-time-scale fluctuations are stronger in the radiation hydro models. An interesting result of this work lies in the analysis of the temperature structure around the planet. The most striking effect of treating the thermodynamics properly is the formation of a hot pressure-supported bubble around the planet, with a pressure scale height of H/R ≈ 0.5, rather than a thin Keplerian circumplanetary accretion disk.

  8. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

    NASA Astrophysics Data System (ADS)

    Motheau, E.; Abraham, J.

    2016-05-01

    A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
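
    The constant-coefficient spectral building block can be sketched as follows: a periodic 2-D Poisson solve with a manufactured solution, illustrating the quasi-spectral accuracy the method relies on. This is a minimal sketch of the ingredient only; the paper's solver additionally handles the variable-coefficient case.

        import numpy as np

        def poisson_fft_2d(f, L=2 * np.pi):
            """Spectral solve of laplacian(p) = f on a periodic square."""
            n = f.shape[0]
            k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx**2 + ky**2
            fh = np.fft.fft2(f)
            fh[0, 0] = 0.0        # fix the undetermined mean of p
            k2[0, 0] = 1.0
            return np.real(np.fft.ifft2(-fh / k2))

        # Manufactured solution p = sin(x)cos(2y), so f = -(1 + 4) p.
        n = 64
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        p_exact = np.sin(X) * np.cos(2 * Y)
        p = poisson_fft_2d(-5.0 * p_exact)
        print(np.max(np.abs(p - p_exact)))  # ~1e-15: spectral accuracy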

  9. Advancements in robust algorithm formulation for speaker identification of whispered speech

    NASA Astrophysics Data System (ADS)

    Fan, Xing

    for the transformation estimation in order to generate the pseudo whispered features. Both of the above two systems demonstrate a significant improvement over the baseline system on the evaluation data. This dissertation has therefore contributed to providing a scientific understanding of the differences between whispered and neutral speech as well as improved front-end processing and modeling method for speaker identification of whispered speech. Such advancements will ultimately contribute to improve the robustness of speech processing systems.

  10. Numerical Investigation of Cross Flow Phenomena in a Tight-Lattice Rod Bundle Using Advanced Interface Tracking Method

    NASA Astrophysics Data System (ADS)

    Zhang, Weizhong; Yoshida, Hiroyuki; Ose, Yasuo; Ohnuki, Akira; Akimoto, Hajime; Hotta, Akitoshi; Fujimura, Ken

    In relation to the design of an innovative FLexible-fuel-cycle Water Reactor (FLWR), an investigation of the thermal-hydraulic performance of the tight-lattice rod bundles of the FLWR is being carried out at the Japan Atomic Energy Agency (JAEA). The FLWR core adopts a tight triangular lattice arrangement with about 1 mm gap clearance between adjacent fuel rods. In view of the importance of accurate prediction of cross flow between subchannels in the evaluation of the boiling transition (BT) in the FLWR core, this study presents a statistical evaluation of numerical simulation results obtained with a detailed two-phase flow simulation code, TPFIT, which employs an advanced interface tracking method. In order to clarify the mechanisms of cross flow in such tight-lattice rod bundles, TPFIT is applied to simulate water-steam two-phase flow in two modeled subchannels. Attention is focused on the instantaneous fluctuation characteristics of the cross flow. By calculating correlation coefficients between the differential pressure and the gas/liquid mixing coefficients, the time scales of cross flow are evaluated, and the effects of mixing section length, flow pattern, and gap spacing on the correlation coefficients are investigated. Differences in mechanism between gas and liquid cross flows are pointed out.

  11. A pseudo-spectral algorithm and test cases for the numerical solution of the two-dimensional rotating Green-Naghdi shallow water equations

    NASA Astrophysics Data System (ADS)

    Pearce, J. D.; Esler, J. G.

    2010-10-01

    A pseudo-spectral algorithm is presented for the solution of the rotating Green-Naghdi shallow water equations in two spatial dimensions. The equations are first written in vorticity-divergence form, in order to exploit the fact that time-derivatives then appear implicitly in the divergence equation only. A nonlinear equation must then be solved at each time-step in order to determine the divergence tendency. The nonlinear equation is solved by means of a simultaneous iteration in spectral space to determine each Fourier component. The key to the rapid convergence of the iteration is the use of a good initial guess for the divergence tendency, which is obtained from polynomial extrapolation of the solution obtained at previous time-levels. The algorithm is therefore best suited to be used with a standard multi-step time-stepping scheme (e.g. leap-frog). Two test cases are presented to validate the algorithm for initial value problems on a square periodic domain. The first test is to verify cnoidal wave speeds in one-dimension against analytical results. The second test is to ensure that the Miles-Salmon potential vorticity is advected as a parcel-wise conserved tracer throughout the nonlinear evolution of a perturbed jet subject to shear instability. The algorithm is demonstrated to perform well in each test. The resulting numerical model is expected to be of use in identifying paradigmatic behavior in mesoscale flows in the atmosphere and ocean in which both vortical, nonlinear and dispersive effects are important.

  12. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
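
    The spirit of such benchmarking can be illustrated on the smallest nontrivial case, the half-filled two-site Hubbard model, where exact diagonalization reproduces a closed-form ground-state energy. A minimal sketch (a textbook check, not one of the paper's methods):

        import numpy as np

        def hubbard_dimer_ground_state(t, U):
            """Exact ground state of the half-filled two-site Hubbard model
            in the singlet sector, basis {|ud,0>, |0,ud>, singlet}."""
            H = np.array([[U, 0.0, -np.sqrt(2) * t],
                          [0.0, U, -np.sqrt(2) * t],
                          [-np.sqrt(2) * t, -np.sqrt(2) * t, 0.0]])
            return np.linalg.eigvalsh(H)[0]

        t, U = 1.0, 4.0
        print(hubbard_dimer_ground_state(t, U))
        print((U - np.sqrt(U**2 + 16 * t**2)) / 2)  # same closed-form value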

  13. Direct Numerical Simulation of Boiling Multiphase Flows: State-of-the-Art, Modeling, Algorithmic and Computer Needs

    SciTech Connect

    Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.

    2007-04-01

    The state of the art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and the level of current success in their application to several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWR reactors, requiring high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and the non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.

  14. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of the subject DOE-sponsored research concerns parallel methods, algorithms, and software for complex applications such as those in coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration, and related issues. In addition to the fundamental questions related to developing new methods and mapping these to parallel computers, there are important software issues. The group has played a significant role in the development of software both for iterative solvers and for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large-scale applications (not computational kernels alone). The main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE-sponsored research, significant progress has been made. A report of the progression of the research is given, and at the end of the report is a list of related publications and presentations over the entire grant period.

  15. Fast algorithms for transport models. Technical progress report

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles, and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).

  16. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGESBeta

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  17. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  18. Genetic algorithms and genetic programming for multiscale modeling: Applications in materials science and chemistry and advances in scalability

    NASA Astrophysics Data System (ADS)

    Sastry, Kumara Narasimha

    2007-03-01

    Effective and efficient multiscale modeling is essential to advance both the science and synthesis in a wide array of fields such as physics, chemistry, materials science, biology, biotechnology and pharmacology. This study investigates the efficacy and potential of using genetic algorithms for multiscale materials modeling and addresses some of the challenges involved in designing competent algorithms that solve hard problems quickly, reliably and accurately. In particular, this thesis demonstrates the use of genetic algorithms (GAs) and genetic programming (GP) in multiscale modeling with the help of two non-trivial case studies in materials science and chemistry. The first case study explores the utility of genetic programming (GP) in multi-timescale alloy kinetics simulations. In essence, GP is used to bridge molecular dynamics and kinetic Monte Carlo methods to span orders of magnitude in simulation time. Specifically, GP is used to regress symbolically an inline barrier function from a limited set of molecular dynamics simulations to enable kinetic Monte Carlo simulations that cover seconds of real time. Results on a non-trivial example of vacancy-assisted migration on a surface of a face-centered cubic (fcc) Copper-Cobalt (CuxCo1-x) alloy show that GP predicts all barriers with 0.1% error from calculations for less than 3% of active configurations, independent of the type of potentials used to obtain the learning set of barriers via molecular dynamics. The resulting method enables a 2--9 orders-of-magnitude increase in real-time dynamics simulations, taking 4--7 orders-of-magnitude less CPU time. The second case study presents the application of multiobjective genetic algorithms (MOGAs) in multiscale quantum chemistry simulations. Specifically, MOGAs are used to bridge high-level quantum chemistry and semiempirical methods to provide an accurate representation of complex molecular excited-state and ground-state behavior. Results on ethylene and benzene---two common

  19. Keeping it Together: Advanced algorithms and software for magma dynamics (and other coupled multi-physics problems)

    NASA Astrophysics Data System (ADS)

    Spiegelman, M.; Wilson, C. R.

    2011-12-01

    A quantitative theory of magma production and transport is essential for understanding the dynamics of magmatic plate boundaries, intra-plate volcanism and the geochemical evolution of the planet. It also provides one of the most challenging computational problems in solid Earth science, as it requires consistent coupling of fluid and solid mechanics together with the thermodynamics of melting and reactive flows. Considerable work on these problems over the past two decades shows that small changes in assumptions of coupling (e.g. the relationship between melt fraction and solid rheology) can have profound effects on the behavior of these systems, which in turn affects critical computational choices such as discretizations, solvers and preconditioners. To make progress in exploring and understanding this physically rich system requires a computational framework that allows a more flexible, high-level description of multi-physics problems as well as increased flexibility in composing efficient algorithms for solution of the full non-linear coupled system. Fortunately, recent advances in available computational libraries and algorithms provide a platform for implementing such a framework. We present results from a new model building system that leverages functionality from both the FEniCS project (www.fenicsproject.org) and PETSc libraries (www.mcs.anl.gov/petsc) along with a model-independent options system and GUI, Spud (amcg.ese.ic.ac.uk/Spud). Key features from FEniCS include fully unstructured FEM with a wide range of elements; a high-level language (UFL) and code generation compiler (FFC) for describing the weak forms of residuals; and automatic differentiation for calculation of exact and approximate Jacobians. The overall strategy is to monitor/calculate residuals and Jacobians for the entire non-linear system of equations within a global non-linear solve based on PETSc's SNES routines. PETSc already provides a wide range of solvers and preconditioners, from

  20. An advanced algorithm for construction of Integral Transport Matrix Method operators using accumulation of single cell coupling factors

    SciTech Connect

    Powell, B. P.; Azmy, Y. Y.

    2013-07-01

    The Integral Transport Matrix Method (ITMM) has been shown to be an effective method for solving the neutron transport equation in large domains on massively parallel architectures. In the limit of very large number of processors, the speed of the algorithm, and its suitability for unstructured meshes, i.e. other than an ordered Cartesian grid, is limited by the construction of four matrix operators required for obtaining the solution in each sub-domain. The existing algorithm used for construction of these matrix operators, termed the differential mesh sweep, is computationally expensive and was developed for a structured grid. This work proposes the use of a new algorithm for construction of these operators based on the construction of a single, fundamental matrix representing the transport of a particle along every possible path throughout the sub-domain mesh. Each of the operators is constructed by multiplying an element of this fundamental matrix by two factors dependent only upon the operator being constructed and on properties of the emitting and incident cells. The ITMM matrix operator construction time for the new algorithm is demonstrated to be shorter than the existing algorithm in all tested cases with both isotropic and anisotropic scattering considered. While also being a more efficient algorithm on a structured Cartesian grid, the new algorithm is promising in its geometric robustness and potential for being applied to an unstructured mesh, with the ultimate goal of application to an unstructured tetrahedral mesh on a massively parallel architecture. (authors)

  1. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.

  2. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparison with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and for intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the linear Boltzmann transport equation, has been installed in the Eclipse treatment planning system (Varian Medical Systems, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction of the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck were compared among the AXB and AAA algorithms and MC by means of dose-volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions with the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region. The dose in the air region differed in the order MC < AXB11 < AXB10, i.e., 0.700 MeV for MC, 0.711 MeV for AXB11, and 1.011 MeV for AXB10. Since the AAA algorithm is based on a water dose kernel, the doses in the air, bone, and aluminum regions became considerably higher than those of AXB and MC. The pass rates of the gamma analysis for IMRT dose distributions in head and neck were similar to those of MC in order of AXB11

  3. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.
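
    A minimal sketch of the parameter-continuation idea LOCA implements, written here as a generic scalar pseudo-arclength predictor-corrector rather than LOCA's C++ interface: the bordered system lets the branch be followed around a fold, where naive continuation in the parameter alone would fail.

        import numpy as np

        def pseudo_arclength(f, dfx, dfl, x, lam, ds=0.1, nsteps=60):
            """Follow the branch of f(x, lam) = 0 with a tangent predictor
            and a Newton corrector on the bordered (arclength) system."""
            tx, tl = 0.0, -1.0                  # initial direction: toward the fold
            branch = [(lam, x)]
            for _ in range(nsteps):
                A = np.array([[dfx(x, lam), dfl(x, lam)], [tx, tl]])
                tx, tl = np.linalg.solve(A, [0.0, 1.0])   # new tangent
                tx, tl = np.array([tx, tl]) / np.hypot(tx, tl)
                xp, lp = x + ds * tx, lam + ds * tl       # predictor
                for _ in range(20):                       # Newton corrector
                    r = np.array([f(xp, lp),
                                  tx * (xp - x) + tl * (lp - lam) - ds])
                    J = np.array([[dfx(xp, lp), dfl(xp, lp)], [tx, tl]])
                    dx, dl = np.linalg.solve(J, -r)
                    xp, lp = xp + dx, lp + dl
                x, lam = xp, lp
                branch.append((lam, x))
            return branch

        # f = lam - x^2 has a fold at lam = 0; start on the lower branch.
        b = pseudo_arclength(lambda x, l: l - x**2,
                             lambda x, l: -2.0 * x,
                             lambda x, l: 1.0, x=-2.0, lam=4.0)
        print(b[0], b[-1])   # x changes sign: the fold was traversed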

  4. Evaluation of the Advanced-Canopy-Atmosphere-Surface Algorithm (ACASA Model) Using Eddy Covariance Technique Over Sparse Canopy

    NASA Astrophysics Data System (ADS)

    Marras, S.; Spano, D.; Sirca, C.; Duce, P.; Snyder, R.; Pyles, R. D.; Paw U, K. T.

    2008-12-01

    Land surface models are usually used to quantify energy and mass fluxes between terrestrial ecosystems and the atmosphere on micro- and regional scales. One of the most elaborate land surface models for flux modelling is the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) model, which provides micro-scale as well as regional-scale fluxes when embedded in a meso-scale meteorological model (e.g., MM5 or WRF). The model predicts vegetation conditions and their changes with time due to plant responses to environmental variables. In particular, fluxes and profiles of heat, water vapor, carbon and momentum within and above the canopy are estimated using third-order equations. It also estimates turbulent profiles of velocity, temperature and humidity within and above the canopy, and CO2 fluxes are estimated using a combination of the Ball-Berry and Farquhar equations. The ACASA model is also able to include the effects of water stress on stomata, transpiration and CO2 assimilation. The ACASA model is unique because it separates the canopy domain into twenty atmospheric layers (ten layers within the canopy and ten layers above the canopy), and the soil is partitioned into fifteen layers of variable thickness. The model was mainly used over dense canopies in the past, so the aim of this work was to test the ACASA model over a sparse canopy such as Mediterranean maquis. The vegetation is composed of sclerophyllous shrub species that are evergreen, with leathery leaves, small height and a moderately sparse canopy, and that are tolerant of water stress conditions. The Eddy Covariance (EC) technique was used to collect continuous data over a period of more than 3 years. Field measurements were taken in a natural maquis site located near Alghero, Sardinia, Italy, and were used to parameterize and validate the model. The input values were selected by running the model several times, varying one parameter at a time. A second step in the parameterization process was the simultaneous variation of some parameters

  5. Overview of the NOAA/NASA advanced very high resolution radiometer Pathfinder algorithm for sea surface temperature and associated matchup database

    NASA Astrophysics Data System (ADS)

    Kilpatrick, K. A.; Podestá, G. P.; Evans, R.

    2001-05-01

    The National Oceanic and Atmospheric Administration (NOAA)/NASA Oceans Pathfinder sea surface temperature (SST) data are derived from measurements made by the advanced very high resolution radiometers (AVHRRs) on board the NOAA 7, 9, 11, and 14 polar orbiting satellites. All versions of the Pathfinder SST algorithm are based on the NOAA/National Environmental Satellite Data and Information Service nonlinear SST operational algorithm (NLSST). Improvements to the NLSST operational algorithm developed by the Pathfinder program include the use of monthly calibration coefficients selected on the basis of channel brightness temperature difference (T4-T5). This channel difference is used as a proxy for water vapor regime. The latest version (version 4.2) of the Pathfinder processing includes the use of decision trees to determine objectively pixel cloud contamination and quality level (0-7) of the SST retrieval. The 1985-1998 series of AVHRR global measurements has been reprocessed using the Pathfinder version 4.2 processing protocol and is available at various temporal and spatial resolutions from NASA's Jet Propulsion Laboratory Distributed Active Archive Center. One of the highlights of the Pathfinder program is that in addition to the daily global area coverage fields, a matchup database of coincident in situ buoy and satellite SST observations also is made available for independent algorithm development and validation.
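
    The NLSST functional form at the core of the Pathfinder algorithm can be sketched as below; the coefficients shown are illustrative placeholders, not the monthly Pathfinder values, which are selected per water-vapor regime as described above:

        import numpy as np

        def nlsst(t4, t5, sst_guess, satzen_deg,
                  coeffs=(0.9, 0.08, 0.7, -250.0)):
            """Nonlinear SST (NLSST) functional form:
            SST = a1*T4 + a2*(T4-T5)*SSTguess
                  + a3*(T4-T5)*(sec(theta)-1) + a4,
            with brightness temperatures T4, T5 in kelvin and the
            first-guess SST in Celsius; a1..a4 are placeholders."""
            a1, a2, a3, a4 = coeffs
            sec_term = 1.0 / np.cos(np.radians(satzen_deg)) - 1.0
            return (a1 * t4 + a2 * (t4 - t5) * sst_guess
                    + a3 * (t4 - t5) * sec_term + a4)

        print(nlsst(t4=290.0, t5=288.5, sst_guess=18.0, satzen_deg=30.0))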

  6. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  7. Numerical Modeling for Hole-Edge Cracking of Advanced High-Strength Steels (AHSS) Components in the Static Bend Test

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Mohr, William; Yang, Yu-Ping; Zelenak, Paul; Kimchi, Menachem

    2011-08-01

    Numerical modeling of local formability, such as hole-edge cracking and shear fracture in bending of AHSS, is one of the challenging issues for simulation engineers in predicting and evaluating the stamping and crash performance of materials. This is because continuum-mechanics-based finite element method (FEM) modeling requires additional input data, "failure criteria", to predict the local formability limit of materials, in addition to the material flow stress data input for simulation. This paper presents a numerical modeling approach for predicting hole-edge failures during static bend tests of AHSS structures. A local-strain-based failure criterion and a stress-triaxiality-based failure criterion were developed and implemented in the LS-DYNA simulation code to predict hole-edge failures in component bend tests. The holes were prepared using two different methods: mechanical punching and water-jet cutting. In the component bend tests, the water-jet-trimmed hole showed delayed fracture at the hole edge, while the mechanically punched hole showed early fracture as the bending angle increased. In comparing the numerical modeling and test results, the load-displacement curve, the displacement at the onset of cracking, and the final crack shape/length were used. Both failure criteria also enable the numerical model to differentiate between the local formability limits of mechanically punched and water-jet-trimmed holes. The failure criteria and static bend test developed here are useful for evaluating the local formability limit at the structural component level for automotive crash tests.

  8. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  9. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updates. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC and proposed methods under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental results are provided in several respects. A comparative study between the proposed variable step size and fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. PMID:26337741
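
    A minimal sketch of one variable-step IC iteration, with the step scaled by |dP/dV|; the scaling gain n and the step cap are illustrative assumptions, and the translation of the voltage reference into a converter duty cycle depends on the topology.

        def inc_mppt_step(v, i, v_prev, i_prev, v_ref, n=0.05, step_max=0.5):
            """One iteration of a variable-step Incremental Conductance (IC)
            MPPT loop; returns the updated PV voltage reference. The step
            scales with |dP/dV|: large far from the MPP, small near it."""
            dv, di = v - v_prev, i - i_prev
            dp = v * i - v_prev * i_prev
            step = step_max if dv == 0 else min(n * abs(dp / dv), step_max)
            if dv == 0:
                if di > 0:
                    v_ref += step        # same V, more current: irradiance rose
                elif di < 0:
                    v_ref -= step
            elif di / dv > -i / v:       # dP/dV > 0: operating left of the MPP
                v_ref += step
            elif di / dv < -i / v:       # dP/dV < 0: operating right of the MPP
                v_ref -= step
            return v_ref                 # unchanged exactly at the MPP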

  10. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  11. Extending the dynamic range of phase contrast magnetic resonance velocity imaging using advanced higher-dimensional phase unwrapping algorithms

    PubMed Central

    Salfity, M.F.; Huntley, J.M.; Graves, M.J.; Marklund, O.; Cusack, R.; Beauregard, D.A.

    2005-01-01

    Phase contrast magnetic resonance velocity imaging is a powerful technique for quantitative in vivo blood flow measurement. Current practice normally involves restricting the sensitivity of the technique so as to avoid the problem of the measured phase being ‘wrapped’ onto the range −π to +π. However, as a result, dynamic range and signal-to-noise ratio are sacrificed. Alternatively, the true phase values can be estimated by a phase unwrapping process which consists of adding integral multiples of 2π to the measured wrapped phase values. In the presence of noise and data undersampling, the phase unwrapping problem becomes non-trivial. In this paper, we investigate the performance of three different phase unwrapping algorithms when applied to three-dimensional (two spatial axes and one time axis) phase contrast datasets. A simple one-dimensional temporal unwrapping algorithm, a more complex and robust three-dimensional unwrapping algorithm and a novel velocity encoding unwrapping algorithm which involves unwrapping along a fourth dimension (the ‘velocity encoding’ direction) are discussed, and results from the three are presented and compared. It is shown that compared to the traditional approach, both dynamic range and signal-to-noise ratio can be increased by a factor of up to five times, which demonstrates considerable promise for a possible eventual clinical implementation. The results are also of direct relevance to users of any other technique delivering time-varying two-dimensional phase images, such as dynamic speckle interferometry and synthetic aperture radar. PMID:16849270
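
    A minimal sketch of the simple one-dimensional temporal unwrapping algorithm mentioned above, applied along the time axis of a (ny, nx, nt) phase series; it is equivalent to numpy.unwrap, while the robust 3D and velocity-encoding algorithms of the paper are substantially more involved.

        import numpy as np

        def unwrap_temporal(phase):
            """Along each pixel's time axis, add the integer multiple of
            2*pi that keeps successive samples within pi of each other."""
            dp = np.diff(phase, axis=-1)
            jumps = np.round(dp / (2 * np.pi))           # integer wrap counts
            corr = np.concatenate(
                [np.zeros(phase.shape[:-1] + (1,)), np.cumsum(jumps, axis=-1)],
                axis=-1)
            return phase - 2 * np.pi * corr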

  12. GOAL: an inverse toxicity-related algorithm for daily clinical practice decision making in advanced kidney cancer.

    PubMed

    Bracarda, Sergio; Sisani, Michele; Marrocolo, Francesca; Hamzaj, Alketa; del Buono, Sabrina; De Simone, Valeria

    2014-03-01

    Metastatic renal cell carcinoma (mRCC), considered almost an orphan disease only six years ago, is today a very dynamic field. The recent shift to the current crowded scenario of seven active drugs has left physicians uncertain about how to define the best possible treatment strategy. This situation is mainly related to the absence of predictive biomarkers for any available or new therapy. This issue, combined with the near absence of published head-to-head studies, paints a complex picture. To resolve this dilemma, decisional algorithms tailored to drug efficacy data and patient profiles are recognized as very useful tools; such approaches try to select the best therapy for every patient profile. By contrast, the present review has the "goal" of suggesting a reverse approach: based on the pivotal studies, post-marketing surveillance reports, and our own experience, we defined the polarizing toxicity (the most frequent toxicity in light of clinical experience) for each therapy, creating a new algorithm that identifies the patient profiles, mainly defined by comorbidities, that are unquestionably unsuitable for each agent presently available for either first- or second-line therapy. The GOAL inverse decision-making algorithm, proposed at the end of this review, allows the best therapy for mRCC to be selected while reducing the risk of limiting toxicities. PMID:24309065

  13. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first; then the areas of alarm are reduced by MSc, at the cost of missing some earthquakes in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 within 40%, and five were predicted by M8-MSc within 13%, of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  14. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  15. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team of the US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, using SRI's airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization to identify contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amounts of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.
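
    A minimal sketch of the HH/VV correlation cue these detectors build on, computed as a local normalized correlation between co-registered magnitude images; this is illustrative only, as the JPL algorithms involve a much larger parameter set.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def hh_vv_correlation(hh, vv, win=9):
            """Local normalized correlation of co-registered HH and VV SAR
            magnitude images (float arrays); high values over a window are
            one cue for compact polarization-consistent scatterers."""
            m_hh, m_vv = uniform_filter(hh, win), uniform_filter(vv, win)
            cov = uniform_filter(hh * vv, win) - m_hh * m_vv
            s_hh = uniform_filter(hh**2, win) - m_hh**2
            s_vv = uniform_filter(vv**2, win) - m_vv**2
            return cov / np.sqrt(np.maximum(s_hh * s_vv, 1e-12))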

  16. Advanced algorithms and high-performance testbed for large-scale site characterization and subsurface target detection using airborne ground-penetrating SAR

    NASA Astrophysics Data System (ADS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1999-08-01

    A team of the US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, JPL, Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field, in Colorado, using SRI's airborne, ground penetrating SAR. The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization to identify contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amounts of SAR data expected from this site. Two key requirements of this project are the accuracy and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data. In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  17. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. Design of an advanced positron emission tomography detector system and algorithms for imaging small animal models of human disease

    NASA Astrophysics Data System (ADS)

    Foudray, Angela Marie Klohs

    Detecting, quantifying and visualizing biochemical mechanisms in a living system without perturbing function is the goal of the instrument and algorithms designed in this thesis. Biochemical mechanisms of cells have long been known to depend on the signals cells receive from their environment. Studying biological processes of cells in vitro can vastly distort their function, since the cells are removed from their natural chemical signaling environment. Mice have become the biological system of choice for various areas of biomedical research due to their genetic and physiological similarities with humans, the relatively low cost of their care, and their quick breeding cycle. Drug development and efficacy assessment, along with disease detection, management, and mechanism research, have all benefited from the use of small animal models of human disease. A high-resolution, high-sensitivity, three-dimensional (3D) positioning positron emission tomography (PET) detector system was designed through device characterization and Monte Carlo simulation. Position-sensitive avalanche photodiodes (PSAPDs) were characterized in various packaging configurations, coupled to various configurations of lutetium oxyorthosilicate (LSO) scintillation crystals. Forty final-design devices with novel packaging were constructed and characterized, each providing performance superior to commercially available scintillation detectors used in small animal imaging systems: ~1 mm crystal identification, 14-15% energy resolution at 511 keV, and 1.9 to 5.6 ns coincidence time resolution on average. A closed-cornered, box-shaped detector configuration was found to provide optimal photon sensitivity (~10.5% in the central plane) using dual LSO-PSAPD scintillation detector modules and Monte Carlo simulation. Standard figures of merit were used to determine optimal system acquisition parameters. A realistic model for constituent devices was developed for understanding the signals reported by the

  19. A Review and Advance Technology in Multi-Area Automatic Generation Control by Using Minority Charge Carrier Inspired Algorithm

    NASA Astrophysics Data System (ADS)

    Madichetty, Sreedhar; Panda, Susmita; Mishra, Sambeet; Dasgupta, Abhijit

    2013-11-01

    This article deals with automatic generation control of a multi-area interconnected thermal system in different modes using intelligent integral and proportional-integral controllers. An appropriate generation rate constraint has been considered for the thermal generation plants. The two interconnected thermal areas are considered with reheat turbines, and the effect of the reheat turbines on the dynamic responses has been investigated. Further, the selection of suitable integral and proportional-integral controllers has been investigated with a minority-charge-carrier-inspired algorithm. Overall system performance is examined under different load perturbations in both thermal areas. The system is also investigated with different area control errors, and the results are explored.

  20. Advanced numerical technique for analysis of surface and bulk acoustic waves in resonators using periodic metal gratings

    NASA Astrophysics Data System (ADS)

    Naumenko, Natalya F.

    2014-09-01

    A numerical technique characterized by a unified approach for the analysis of different types of acoustic waves utilized in resonators in which a periodic metal grating is used for excitation and reflection of such waves is described. The combination of the Finite Element Method analysis of the electrode domain with the Spectral Domain Analysis (SDA) applied to the adjacent upper and lower semi-infinite regions, which may be multilayered and include air as a special case of a dielectric material, enables rigorous simulation of the admittance in resonators using surface acoustic waves, Love waves, plate modes including Lamb waves, Stonely waves, and other waves propagating along the interface between two media, and waves with transient structure between the mentioned types. The matrix formalism with improved convergence incorporated into SDA provides fast and robust simulation for multilayered structures with arbitrary thickness of each layer. The described technique is illustrated by a few examples of its application to various combinations of LiNbO3, isotropic silicon dioxide and silicon with a periodic array of Cu electrodes. The wave characteristics extracted from the admittance functions change continuously with the variation of the film and plate thicknesses over wide ranges, even when the wave nature changes. The transformation of the wave nature with the variation of the layer thicknesses is illustrated by diagrams and contour plots of the displacements calculated at resonant frequencies.

  1. Advanced Tsunami Numerical Simulations and Energy Considerations by use of 3D-2D Coupled Models: The October 11, 1918, Mona Passage Tsunami

    NASA Astrophysics Data System (ADS)

    López-Venegas, Alberto M.; Horrillo, Juan; Pampell-Manis, Alyssa; Huérfano, Victor; Mercado, Aurelio

    2015-06-01

    The most recent tsunami observed along the coast of the island of Puerto Rico occurred on October 11, 1918, after a magnitude 7.2 earthquake in the Mona Passage. The earthquake was responsible for initiating a tsunami that mostly affected the northwestern coast of the island. Runup values from a post-tsunami survey indicated the waves reached up to 6 m. A controversy regarding the source of the tsunami has resulted in several numerical simulations involving either fault rupture or a submarine landslide as the most probable cause of the tsunami. Here we follow up on previous simulations of the tsunami from a submarine landslide source off the western coast of Puerto Rico as initiated by the earthquake. Improvements on our previous study include: (1) higher-resolution bathymetry; (2) a 3D-2D coupled numerical model specifically developed for the tsunami; (3) use of the non-hydrostatic numerical model NEOWAVE (non-hydrostatic evolution of ocean WAVE) featuring two-way nesting capabilities; and (4) a comprehensive energy analysis to determine the time of full tsunami wave development. A three-dimensional Navier-Stokes model with multiple interfaces for the two fluids (water and landslide) was used to determine the initial wave characteristics generated by the submarine landslide. Use of NEOWAVE enabled us to solve for coastal inundation, wave propagation, and detailed runup. Our results agree with previous work in which a submarine landslide is favored as the most probable source of the tsunami, and the improved bathymetric resolution yielded inundation of the coastal areas that compares well with values from a post-tsunami survey. Our energy analysis indicates that most of the wave energy is isolated in the wave generation region, particularly at depths near the landslide, and that once the initial wave propagates out of the generation region its energy begins to stabilize.

  2. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.

  3. Advancements of in-flight mass moment of inertia and structural deflection algorithms for satellite attitude simulators

    NASA Astrophysics Data System (ADS)

    Wright, Jonathan W.

    Experimental satellite attitude simulators have long been used to test and analyze control algorithms in order to drive down risk before implementation on an operational satellite. Ideally, the dynamic response of a terrestrial-based experimental satellite attitude simulator would be similar to that of an on-orbit satellite. Unfortunately, gravitational disturbance torques and poorly characterized moments of inertia introduce uncertainty into the system dynamics, leading to questionable attitude control algorithm experimental results. This research consists of three distinct but related contributions to the field of developing robust satellite attitude simulators. In the first part of this research, existing approaches to estimating mass moments and products of inertia are evaluated, followed by the proposal and evaluation of a new approach that increases both the accuracy and precision of these estimates using typical on-board satellite sensors. Next, in order to better simulate the micro-torque environment of space, a new approach to mass balancing a satellite attitude simulator is presented, experimentally evaluated, and verified. Finally, in the third area of research, we capitalize on the platform improvements to analyze a control moment gyroscope (CMG) singularity avoidance steering law. Several successful experiments were conducted with the CMG array at near-singular configurations. An evaluation process was implemented to verify that the platform remained near the desired test momentum, showing that the first two components of this research were effective in allowing us to conduct singularity avoidance experiments in a representative space-like test environment.
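
    Because Euler's equation tau = I*omega_dot + omega x (I*omega) is linear in the six independent entries of the inertia tensor, the moments and products of inertia can be estimated by least squares from rate and torque telemetry. A minimal sketch of that general idea, not the specific estimator developed in the thesis:

        import numpy as np

        def inertia_from(theta):
            ixx, iyy, izz, ixy, ixz, iyz = theta
            return np.array([[ixx, ixy, ixz],
                             [ixy, iyy, iyz],
                             [ixz, iyz, izz]])

        def estimate_inertia(omegas, omega_dots, torques):
            """Least-squares fit of the six inertia parameters from Euler's
            equation; inputs are (N, 3) arrays of body rates, their time
            derivatives, and applied torques."""
            rows, rhs = [], []
            for w, wd, tau in zip(omegas, omega_dots, torques):
                # 3x6 regressor: the map theta -> I(theta)wd + w x (I(theta)w)
                # is linear, so evaluate it on each basis parameter vector.
                A = np.column_stack([
                    inertia_from(e) @ wd + np.cross(w, inertia_from(e) @ w)
                    for e in np.eye(6)])
                rows.append(A); rhs.append(tau)
            theta, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs),
                                        rcond=None)
            return inertia_from(theta)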

  4. Numerical Modeling for Springback Predictions by Considering the Variations of Elastic Modulus in Stamping Advanced High-Strength Steels (AHSS)

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Kimchi, Menachem

    2011-08-01

    This paper presents a numerical modeling approach for predicting springback by considering the variation of elastic modulus in stamping AHSS. Various stamping tests and finite-element method (FEM) simulation codes were used in this study. Cyclic loading-unloading tensile tests were conducted to determine the variation of elastic modulus for dual-phase (DP) 780 sheet steel. The biaxial bulge test was used to obtain plastic flow stress data. The nonlinear reduction of elastic modulus with increasing plastic strain was formulated using the Yoshida model, which was implemented in FEM simulations for springback. To understand the effects of material properties on springback, experiments were conducted with a simple geometry, U-shape bending, and with more complex geometries, curved flanging and S-rail stamping. Different measurement methods were used to confirm the final part geometry. Two different commercial FEM codes, LS-DYNA and DEFORM, were used for comparison with the experiments. The variable elastic modulus improved springback predictions in U-shape bending and curved flanging tests compared to FEM with a constant elastic modulus. However, in S-rail stamping tests, both FEM models with the isotropic hardening model showed limitations in predicting the sidewall curl of the S-rail part after springback. To consider the kinematic hardening and Bauschinger effects that result from material bending-unbending in S-rail stamping, the Yoshida model was used for FEM simulation of S-rail stamping and springback. The FEM predictions showed good improvement in correlation with the experiments.
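
    A minimal sketch of the chord-modulus decay in the form proposed by Yoshida and co-workers, which is what "variable elastic modulus" amounts to in such simulations; the parameter values below are illustrative DP-steel-like numbers, not the paper's calibration.

        import numpy as np

        def chord_modulus(eps_p, e0=207e3, ea=165e3, xi=55.0):
            """Unloading (chord) modulus decay with equivalent plastic strain,
            E(eps_p) = E0 - (E0 - Ea) * (1 - exp(-xi * eps_p)), where E0 is
            the virgin modulus and Ea its saturated value (values in MPa)."""
            return e0 - (e0 - ea) * (1.0 - np.exp(-xi * eps_p))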

  5. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
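
    For concreteness, the second-order one-dimensional instance of this construction is the classical Lax-Wendroff scheme: a Taylor expansion in time with u_tt replaced by c^2 u_xx via the PDE. A minimal sketch for u_t + c u_x = 0 with periodic boundaries:

        import numpy as np

        def lax_wendroff(u, c, dx, dt, steps):
            """Single-step explicit update for u_t + c u_x = 0, the
            second-order member of the Cauchy-Kowalevskaya family the
            abstract generalizes to higher order. Stable for |nu| <= 1."""
            nu = c * dt / dx                      # CFL number
            for _ in range(steps):
                up, um = np.roll(u, -1), np.roll(u, 1)
                u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)
            return u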

  6. Numerical evaluation of longitudinal motions of Wigley hulls advancing in waves by using Bessho form translating-pulsating source Green's function

    NASA Astrophysics Data System (ADS)

    Xiao, Wenbin; Dong, Wencai

    2016-06-01

    In the framework of 3D potential flow theory, the Bessho form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for a ship advancing in regular waves. Numerical characteristics of the Green's function show that the contribution of local-flow components to the velocity potential is concentrated near the source point, while the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method for various ship forms. In addition, a mesh analysis of the discretized surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and that the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is appropriate only for qualitative analysis of the motion response in waves if the ship's geometry fails to satisfy the slender-body assumption.

  7. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role in optimizing the considered system, and quite often also in its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
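
    A minimal sketch of forward-mode AD using dual numbers, the simplest realization of the technique; this is illustrative and not the tool discussed in the talk.

        import math

        class Dual:
            """Carry (value, derivative) pairs and propagate them through
            arithmetic via the chain rule, exact to working precision."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val,
                            self.dot * o.val + self.val * o.dot)
            __rmul__ = __mul__

        def sin(x):
            return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

        # d/dx [x*sin(x) + 3x] at x = 2: equals sin(2) + 2*cos(2) + 3
        x = Dual(2.0, 1.0)
        y = x * sin(x) + 3 * x
        print(y.val, y.dot)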

  8. Towards Direct Numerical Simulation of mass and energy fluxes at the soil-atmospheric interface with advanced Lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Krafczyk, Manfred; Geier, Martin; Schönherr, Martin

    2014-05-01

    The quantification of soil evaporation and of soil water content dynamics near the soil surface are critical in the physics of land-surface processes on many scales and are dominated by multi-component and multi-phase mass and energy fluxes between the ground and the atmosphere. Although it is widely recognized that both liquid and gaseous water movement are fundamental factors in the quantification of soil heat flux and surface evaporation, their computation has only started to be taken into account using simplified macroscopic models. As the flow field over the soil can be safely considered as turbulent, it would be natural to study the detailed transient flow dynamics by means of Large Eddy Simulation (LES [1]), where the three-dimensional flow field is resolved down to the laminar sub-layer. Yet this requires very finely resolved meshes, with a grid resolution at least one order of magnitude below the typical grain diameter of the soil under consideration. In order to gain reliable turbulence statistics, up to several hundred eddy turnover times have to be simulated, which adds up to several seconds of real time. Yet the time scale of the receding saturated water front dynamics in the soil is on the order of hours. Thus we are faced with the task of solving a transient turbulent flow problem including the advection-diffusion of water vapour over the soil-atmospheric interface, represented by a realistic tomographic reconstruction of a real porous medium taken from laboratory probes. Our flow solver is based on the Lattice Boltzmann method (LBM) [2], which has been extended by a Cumulant approach similar to the one described in [3,4] to minimize the spurious coupling between the degrees of freedom in previous LBM approaches, and which can be used as an implicit LES turbulence model due to its low numerical dissipation and increased stability at high Reynolds numbers. The kernel has been integrated into the research code Virtualfluids [5] and delivers up to 30% of the
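
    A minimal sketch of the collision-streaming structure of an LBM update on the D2Q9 lattice, using the basic BGK relaxation operator; the cumulant kernel referenced above replaces this collision step, and boundaries here are simply periodic.

        import numpy as np

        W = np.array([4/9] + [1/9]*4 + [1/36]*4)            # D2Q9 weights
        C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                      [1,1],[-1,1],[-1,-1],[1,-1]])         # lattice velocities

        def equilibrium(rho, ux, uy):
            cu = 3.0 * (C[:, 0, None, None] * ux + C[:, 1, None, None] * uy)
            usq = 1.5 * (ux**2 + uy**2)
            return rho * W[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

        def lbm_step(f, tau):
            """One BGK collision + periodic streaming step; f has shape
            (9, nx, ny) and tau is the relaxation time."""
            rho = f.sum(axis=0)
            ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
            f = f + (equilibrium(rho, ux, uy) - f) / tau    # relax toward f_eq
            for k in range(9):                              # stream along c_k
                f[k] = np.roll(f[k], tuple(C[k]), axis=(0, 1))
            return f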

  9. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses and hence high table power consumption. To reduce the memory accesses of current methods, and with them the power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce memory accesses for table look-up and thereby reduce table power consumption. Specifically, our scheme uses index search to reduce memory accesses by reducing the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving table look-up power. Experimental results show that the proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with a sequential-search table look-up scheme, saving substantial power for CAVLD in H.264/AVC.
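
    A minimal sketch of the index-search idea: the leading-zero count of code_prefix indexes directly into a sub-table, so one array lookup replaces a sequential search over candidate code words. The names table and suffix_len are hypothetical, and the paper's actual H.264 CAVLD scheme is more elaborate.

        def leading_zeros(bits):
            """Length of the zero run forming code_prefix."""
            n = 0
            while n < len(bits) and bits[n] == 0:
                n += 1
            return n

        def decode_symbol(bits, table, suffix_len):
            """Decode one VLC symbol by indexing instead of searching:
            table[lz][suffix] maps (prefix length, suffix value) straight
            to a symbol. Returns (symbol, bits consumed)."""
            lz = leading_zeros(bits)
            pos = lz + 1                       # skip the terminating '1' bit
            suffix = 0
            for b in bits[pos:pos + suffix_len[lz]]:
                suffix = (suffix << 1) | b
            return table[lz][suffix], pos + suffix_len[lz]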

  10. Advanced Control Algorithms for Compensating the Phase Distortion Due to Transport Delay in Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator, and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for a much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in the prediction. Although, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate for the longest delay with the least gain distortion of the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. In this manner the predictor can accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline

  11. A preliminary numerical evaluation of a parallel algorithm for approximating the values and subgradients of the recourse function in a stochastic program with complete recourse

    SciTech Connect

    Lessor, K.S.

    1988-08-26

    The parallel algorithm of Ariyawansa, Sorensen, and Wets for approximating the values and subgradients of the recourse function in a stochastic program with complete recourse is implemented and timing results are reported for limited experimental trials. 14 refs., 6 figs., 8 tabs.

  12. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built in Geographic Information Systems (GIS), provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically-based model neglects the need for synthetic design hyetograph and hydrograph estimation that constitute the main source of subjective analysis and uncertainty of standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure as respect to standard event-based approaches.

  13. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  14. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
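
    A minimal sketch of the Expectation Maximization (MLEM) update used in emission tomography, lambda <- lambda / (A^T 1) * A^T (y / (A lambda)), assuming a precomputed system matrix A; illustrative only, since the paper's implementation details are not given here.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """MLEM image reconstruction. A: (n_detectors, n_pixels) system
            matrix; y: measured counts. The multiplicative update keeps the
            estimate nonnegative by construction."""
            lam = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ lam, 1e-12)
                lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return lam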

  15. Final Progress Report: Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

    SciTech Connect

    Fox-Rabinovitz, M; Cote, J

    2009-06-05

    The joint U.S.-Canadian project has been devoted to: (a) decadal climate studies using developed state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched-grid approach is that it allows us to preserve the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment of the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phase-1 and phase-2) based on these research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for the regional climate modeling and broader modeling communities. The U.S. SGMIP simulations have been produced using SciDAC ORNL supercomputers. Collaborations with the other international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. This project reflects a trend in the modeling and broader communities to move towards regional and sub-regional assessments and applications important for the U.S. and Canadian public, business and policy decision makers, as well as for international collaborations on regional, and especially climate-related, issues.

  16. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. State-of-the-art algorithms are not suited to the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multidimensional nature of the data but also model both spatial and temporal dependencies, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code provides new online anomaly detection techniques that take into account the spatial, temporal, and multidimensional aspects of the data set. The basic idea behind the approach is (a) to convert a multidimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence, using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for this approach is making the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. Recent advances in tensor decomposition techniques, which reduce computational complexity, are used to monitor the change between successive windows and detect anomalies in the same manner as described above. The parallel solutions are therefore developed on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
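
    A minimal sketch of step (a), reducing a multichannel stream to a univariate change series by comparing the leading singular vectors of successive windows; this illustrates the idea, not the GAEDA implementation.

        import numpy as np

        def svd_change_series(X, win=64):
            """Collapse a (time x channels) stream into a univariate series:
            compare the dominant right singular vector of each window with
            that of the previous window; values near 1 flag a change in the
            dominant spatio-temporal pattern."""
            scores, prev = [], None
            for t in range(0, X.shape[0] - win + 1, win):
                _, _, vt = np.linalg.svd(X[t:t + win], full_matrices=False)
                v = vt[0]                      # dominant channel pattern
                if prev is not None:
                    scores.append(1.0 - abs(prev @ v))
                prev = v
            return np.array(scores)            # feed to a 1-D anomaly detector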

  17. IMPROVED GROUND TRUTH IN SOUTHERN ASIA USING IN-COUNTRY DATA, ANALYST WAVEFORM REVIEW, AND ADVANCED ALGORITHMS

    SciTech Connect

    Engdahl, Eric, R.; Bergman, Eric, A.; Myers, Stephen, C.; Ryall, Floriana

    2009-06-19

    respective errors. This is a significant advance, as outliers and future events with apparently anomalous depths can be readily identified and, if necessary, further investigated. The patterns of reliable focal depth distributions have been interpreted in the context of Middle Eastern active tectonics. Most earthquakes in the Iranian continental lithosphere occur in the upper crust, less than about 25-30 km in depth, with the crustal shortening produced by continental collision apparently accommodated entirely by thickening and distributed deformation rather than by subduction of crust into the mantle. However, intermediate-depth earthquakes associated with subducted slab do occur across the central Caspian Sea and beneath the Makran coast. A multiple-event relocation technique, specialized to use different kinds of near-source data, is used to calibrate the locations of 24 clusters containing 901 events drawn from the seismicity catalog. The absolute locations of these clusters are fixed either by comparing the pattern of relocated earthquakes with mapped fault geometry, by using one or more cluster events that have been accurately located independently by a local seismic network or aftershock deployment, by using InSAR data to determine the rupture zone of shallow earthquakes, or by some combination of these near-source data. This technique removes most of the systematic bias in single-event locations done with regional and teleseismic data, resulting in 624 calibrated events with location uncertainties of 5 km or better at the 90% confidence level (GT590). For 21 clusters (847 events) that are calibrated in both location and origin time we calculate empirical travel times, relative to a standard 1-D travel time model (ak135), and investigate event to station travel-time anomalies as functions of epicentral distance and azimuth. Substantial travel-time anomalies are seen in the Iran region which make accurate locations impossible unless observing stations are at very short

  18. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  19. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  20. Numerical analysis of bifurcations

    SciTech Connect

    Guckenheimer, J.

    1996-06-01

    This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from greater reliance on geometric insight coming from dynamical systems theory. © 1996 American Institute of Physics.

  1. Development of a numerical scheme to predict geomagnetic storms after intense solar events and geomagnetic activity 27 days in advance. Final report, 6 Aug 86-16 Nov 90

    SciTech Connect

    Akasofu, S.I.; Lee, L.H.

    1991-02-01

    A modern geomagnetic storm prediction scheme should be based on a numerical simulation method rather than on statistical results. Furthermore, the scheme should be able to predict geomagnetic storm indices, such as the Dst and AE indices, as functions of time. By recognizing that geomagnetic storms are powered by the solar wind-magnetosphere generator and that its power is given in terms of the solar wind speed and the interplanetary magnetic field (IMF) magnitude and polar angle, the authors have made a major advance in predicting both flare-induced storms and recurrent storms. Furthermore, it is demonstrated that the prediction scheme can be calibrated using interplanetary scintillation (IPS) observations once the solar disturbance has advanced about halfway to the earth. It is shown, however, that we are still far from a reliable prediction scheme. The prediction of the IMF polar angle requires future advances in understanding the characteristics of magnetic clouds.
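
    The solar wind-magnetosphere generator power referred to above is commonly quantified by the Akasofu epsilon parameter; a sketch of its SI form follows, where the effective length scale l0 of about seven earth radii is the customary choice and is assumed here.

        import numpy as np

        MU0 = 4e-7 * np.pi          # vacuum permeability, H/m
        R_E = 6.371e6               # earth radius, m

        def epsilon_coupling(v, b, theta, l0=7 * R_E):
            """Akasofu epsilon parameter (watts): the generator power scales
            as v * B^2 * sin^4(theta/2) * l0^2, with v the solar wind speed
            (m/s), b the IMF magnitude (T), theta the IMF polar angle (rad)."""
            return (4 * np.pi / MU0) * v * b**2 * np.sin(theta / 2.0)**4 * l0**2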

  2. Comment on “Symplectic integration of magnetic systems”: A proof that the Boris algorithm is not variational

    SciTech Connect

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].
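
    For reference, a minimal sketch of the Boris update itself: a half electric kick, a rotation about the magnetic field, and a second half kick.

        import numpy as np

        def boris_push(x, v, E, B, q, m, dt):
            """One Boris step for a particle with position x and velocity v
            (3-vectors) in fields E, B; returns the advanced (x, v)."""
            h = 0.5 * q * dt / m
            v_minus = v + h * E                       # first half kick
            t = h * B                                 # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)   # rotation about B
            v_new = v_plus + h * E                    # second half kick
            return x + dt * v_new, v_new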

  3. Comment on “Symplectic integration of magnetic systems”: A proof that the Boris algorithm is not variational

    DOE PAGESBeta

    Ellison, C. L.; Burby, J. W.; Qin, H.

    2015-11-01

    One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].

  4. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  5. Numerical quadrature for slab geometry transport algorithms

    SciTech Connect

    Hennart, J.P.; Valle, E. del

    1995-12-31

    In recent papers, a generalized nodal finite element formalism has been presented for virtually all known linear finite difference approximations to the discrete ordinates equations in slab geometry. For a particular angular direction μ, the neutron flux Φ is approximated by a piecewise function Φh, which over each space interval can be polynomial or quasipolynomial. Here we restrict ourselves to the polynomial case. Over each space interval, Φ is approximated by a polynomial of degree k, with interpolating parameters defined for the continuous and discontinuous cases, respectively; the angular fluxes at the left and right ends of the cell and the Legendre moments of Φ up to order k over the cell considered serve as the interpolating parameters.

  6. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
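
    A minimal sketch of the kind of transformation described: a scalar if/elif chain rewritten as a disjunctive set of independent Boolean relations that are all evaluated in parallel. This is illustrative only, not the NASA tool.

        import numpy as np

        # scalar chain:  if a>0 and b>0: r=f1  elif a>0: r=f2  else: r=f3
        def vectorized_decision(a, b, f1, f2, f3):
            """The same decision expressed as disjoint Boolean relations;
            all branch values are computed element-wise in parallel and
            combined with masks rather than control flow."""
            c1 = (a > 0) & (b > 0)
            c2 = (a > 0) & ~(b > 0)
            c3 = ~(a > 0)
            return np.select([c1, c2, c3], [f1, f2, f3])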

  7. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
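
    A bare-bones genetic algorithm showing the standard selection-crossover-mutation loop; it is illustrative only, and the population size must be even in this sketch.

        import numpy as np

        rng = np.random.default_rng(0)

        def ga_maximize(fitness, n_bits=16, pop_size=40, gens=100, p_mut=0.01):
            """Tournament selection, one-point crossover, bit-flip mutation;
            returns the best individual of the final population."""
            pop = rng.integers(0, 2, (pop_size, n_bits))
            for _ in range(gens):
                f = np.array([fitness(ind) for ind in pop])
                i, j = rng.integers(0, pop_size, (2, pop_size))
                parents = pop[np.where(f[i] > f[j], i, j)]      # tournaments
                kids = parents.copy()
                cuts = rng.integers(1, n_bits, pop_size // 2)   # crossover points
                for k, c in enumerate(cuts):
                    kids[2*k, c:] = parents[2*k + 1, c:]
                    kids[2*k + 1, c:] = parents[2*k, c:]
                kids ^= (rng.random(kids.shape) < p_mut).astype(kids.dtype)
                pop = kids
            return max(pop, key=fitness)

        # example: evolve bitstrings that maximize the number of ones
        best = ga_maximize(lambda bits: int(bits.sum()))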

  8. Numerical modeling of late Glacial Laurentide advance of ice across Hudson Strait: Insights into terrestrial and marine geology, mass balance, and calving flux

    USGS Publications Warehouse

    Pfeffer, W.T.; Dyurgerov, M.; Kaplan, M.; Dwyer, J.; Sassolas, C.; Jennings, A.; Raup, B.; Manley, W.

    1997-01-01

    A time-dependent finite element model was used to reconstruct the advance of ice from a late Glacial dome on northern Quebec/Labrador across Hudson Strait to Meta Incognita Peninsula (Baffin Island) and subsequently to the 9.9-9.6 ka 14C Gold Cove position on Hall Peninsula. Terrestrial geological and geophysical information from Quebec and Labrador was used to constrain initial and boundary conditions, and the model results are compared with terrestrial geological information from Baffin Island and considered in the context of the marine event DC-0 and the Younger Dryas cooling. We conclude that advance across Hudson Strait from Ungava Bay to Baffin Island is possible using realistic glacier physics under a variety of reasonable boundary conditions. Production of ice flux from a dome centered on northeastern Quebec and Labrador sufficient to deliver geologically inferred ice thickness at Gold Cove (Hall Peninsula) appears to require extensive penetration of sliding south from Ungava Bay. The discharge of ice into the ocean associated with advance and retreat across Hudson Strait does not peak at a time coincident with the start of the Younger Dryas and is less than minimum values proposed to influence North Atlantic thermohaline circulation; nevertheless, a significant fraction of freshwater input to the North Atlantic may have been provided abruptly and at a critical time by this event.

  9. Addressing Global Questions With in situ Measurements: Defining Accuracy Using Both Advances in Data Reduction Algorithms and Developments in Laser Systems

    NASA Astrophysics Data System (ADS)

    Engel, G. S.; Anderson, J. G.

    2003-12-01

    Data reduction and data analysis algorithms can introduce statistically significant systematic bias and loss of precision in both satellite and airborne in situ measurement results. Because data from many instruments must be used to create a global mapping, reducing these hidden systematic errors in in situ instrumentation is crucial to validating satellite data and to integrating in situ results into global climate models. Biases in the in situ measurements must be eliminated before the result can be considered accurate. Additionally, inter-comparison among in situ instrumentation requires careful review of all collection, reduction, and analysis algorithms to eliminate differences in temporal and spatial offsets, as well as extrapolation to the appropriate timescales on which to compare instruments. Typically, the in situ community neither archives raw data nor publishes retrieval and reduction algorithms in such a way that they can be verified and reviewed; however, the global nature of current atmospheric questions requires that this change. In-flight inter-comparisons between results obtained from related instruments are necessary but not sufficient to resolve differences in measurements and in uncertainties; details of analysis techniques must also be compared to ensure that the agreement or disagreement between instruments is well understood. Simply observing agreement or disagreement is not sufficient. Having documented, traceable paths to compare laboratory calibrations and analysis to flight data will lead to improvements in instrumentation and retrieval algorithms, thereby improving the credibility of atmospheric data. We will show raw data from Cavity-Enhanced Absorption Spectrometers using Integrated Cavity Output Spectroscopy (ICOS) and Cavity Ringdown Spectroscopy (CRDS) and demonstrate statistically significant improvement in second-generation fitting and retrieval algorithms. Improved lineshape models and singular value decomposition of the baseline have

  10. Note on symmetric BCJ numerator

    NASA Astrophysics Data System (ADS)

    Fu, Chih-Hao; Du, Yi-Jian; Feng, Bo

    2014-08-01

    We present an algorithm that leads to BCJ numerators manifestly satisfying the three properties proposed by Broedel and Carrasco in [42]. We explicitly calculate the numerators at 4, 5, and 6 points and show that the relabeling property is generically satisfied.

  11. UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Stoitsov, Mario; Nam, Hai Ah; Nazarewicz, Witold; Bulgac, Aurel; Hagen, Gaute; Kortelainen, E. M.; Pei, Junchen; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S.

    2011-01-01

    The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership class computational resources.

  12. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields, in general, multistep variational algorithms. Obtaining the desired long-term numerical fidelity requires either mitigating the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy, which yields a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
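
    The abstract's gauge-invariant and guiding-center integrators are beyond a short sketch, but the core claim, that a structure-preserving time advance keeps long-time energy error bounded where a naive scheme drifts, can be illustrated on a harmonic oscillator (our toy example, not from the paper):

        # Energy behavior of explicit Euler vs. symplectic Euler on the
        # harmonic oscillator H = (p^2 + q^2)/2; the symplectic update
        # keeps the energy error bounded over long times.
        def explicit_euler(q, p, dt):
            return q + dt * p, p - dt * q

        def symplectic_euler(q, p, dt):
            p_new = p - dt * q              # "kick" with the old position
            return q + dt * p_new, p_new    # "drift" with the new momentum

        for step in (explicit_euler, symplectic_euler):
            q, p = 1.0, 0.0
            for _ in range(20000):
                q, p = step(q, p, 0.05)
            print(step.__name__, 0.5 * (p * p + q * q))   # exact energy: 0.5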

  13. Development of Design Technology on Thermal-Hydraulic Performance in Tight-Lattice Rod Bundles: III - Numerical Evaluation of Fluid Mixing Phenomena using Advanced Interface-Tracking Method -

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Nagayoshi, Takuji; Takase, Kazuyuki; Akimoto, Hajime

    Thermal-hydraulic design of the current boiling water reactor (BWR) is performed with correlations based on empirical results of actual-size tests. For the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) core, however, an actual-size test of an embodiment of its design would be required to confirm or modify such correlations. A method that enables the thermal-hydraulic design of nuclear reactors without these actual-size tests is desired, because such tests take a long time and entail great cost. For this reason, we developed an advanced thermal-hydraulic design method for FLWRs using innovative two-phase flow simulation technology. In this study, a detailed two-phase flow simulation code using an advanced interface-tracking method, TPFIT, was developed to calculate detailed information on the two-phase flow. We verified the TPFIT code by comparing its predictions with 2-channel air-water and steam-water mixing experimental results. The predicted results agree well with the observed ones; bubble dynamics through the gap and cross-flow behavior are effectively predicted by the TPFIT code, and the pressure difference between fluid channels is responsible for the fluid mixing.

  14. Efficient procedures for the numerical simulation of mid-size RNA kinetics

    PubMed Central

    2012-01-01

    Motivation: Methods for simulating the kinetic folding of RNAs by numerically solving the chemical master equation have been developed since the late 1990s, notably the programs Kinfold and Treekin with Barriers that are available in the Vienna RNA package. Our goal is to formulate extensions to the algorithms used, starting from the Gillespie algorithm, that will allow numerical simulations of mid-size (~60–150 nt) RNA kinetics in practical cases where numerous distributions of folding times are desired. These extensions can contribute to analyses and predictions of RNA folding in biologically significant problems. Results: By describing in a particular way the reduction of numerical simulations of RNA folding kinetics to the Gillespie stochastic simulation algorithm for chemical reactions, it is possible to formulate extensions to the basic algorithm that exploit memoization and parallelism for efficient computations. These can be used to advance from the small examples demonstrated to larger examples of biological interest. Software: The implementation described and used for the Gillespie algorithm is freely available by contacting the authors; the efficient procedures suggested may also be applicable along with Vienna's Kinfold. PMID:22958879
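
    For context, the basic Gillespie loop the paper builds on draws an exponential waiting time from the current total rate and then selects a transition; a minimal sketch for a toy two-state folding model (our example; the paper's contribution is the memoization and parallelism layered on top of this loop) is:

        # Minimal Gillespie sketch: first-passage folding time for a
        # two-state model U <-> F with rates k_fold and k_unfold.
        # (With only one transition per state, the selection step is trivial.)
        import random

        def first_passage_time(k_fold=2.0, k_unfold=0.5, t_max=100.0):
            t, state = 0.0, "U"
            while t < t_max:
                rate = k_fold if state == "U" else k_unfold
                t += random.expovariate(rate)     # exponential waiting time
                state = "F" if state == "U" else "U"
                if state == "F":
                    return t
            return t_max

        times = [first_passage_time() for _ in range(10000)]
        print(sum(times) / len(times))            # ~1/k_fold = 0.5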

  15. Advanced computing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Advanced concepts in hardware, software and algorithms are being pursued for application in next generation space computers and for ground based analysis of space data. The research program focuses on massively parallel computation and neural networks, as well as optical processing and optical networking which are discussed under photonics. Also included are theoretical programs in neural and nonlinear science, and device development for magnetic and ferroelectric memories.

  16. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization, based on the idea of multisplitting, is proposed. The algorithm uses existing sequential algorithms as building blocks, without parallelizing them internally. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.

  17. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  18. 2D SEDFLUX 1.0C:. an advanced process-response numerical model for the fill of marine sedimentary basins

    NASA Astrophysics Data System (ADS)

    Syvitski, James P. M.; Hutton, Eric W. H.

    2001-07-01

    Numerical simulators of the dynamics of strata formation of continental margins fuse information from the atmosphere, ocean and regional geology. Such models can provide information for areas and times for which actual measurements are not available, or for when purely statistical estimates are not adequate by themselves. SEDFLUX is such a basin-fill model, written in ANSI-standard C, able to simulate the delivery of sediment and its accumulation over time scales of tens of thousands of years. SEDFLUX includes the effects of sea-level fluctuations, river floods, ocean storms, and other relevant environmental factors (climate trends, random catastrophic events), at a time step (daily to yearly) that is sensitive to short-term variations of the seafloor. SEDFLUX combines individual process-response models into one fully interactive model, delivering a multi-sized sediment load onto and across a continental margin, including sediment redistribution by (1) river mouth dynamics, (2) buoyant surface plumes, (3) hyperpycnal flows, (4) ocean storms, (5) slope instabilities, (6) turbidity currents, and (7) debris flows. The model allows for the deposit to compact, to undergo tectonic processes (faults, uplift) and isostatic subsidence from the sediment load. The modeled architecture has a typical vertical resolution of 1-25 cm, and a typical horizontal resolution of between 1 and 100 m.

  19. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  20. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  1. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  2. A Numerical Study of Boson Star Binaries

    NASA Astrophysics Data System (ADS)

    Mundim, Bruno C.

    2010-02-01

    This thesis describes a numerical study of binary boson stars within the context of an approximation to general relativity. The approximation we adopt places certain restrictions on the dynamical variables of general relativity (conformal flatness of the 3-metric), and on the time-slicing of the spacetime (maximal slicing). The resulting modeling problem requires the solution of a coupled nonlinear system of 4 hyperbolic, and 5 elliptic partial differential equations (PDEs) in three space dimensions and time. We approximately solve this system as an initial-boundary value problem, using finite difference techniques and well known, computationally efficient numerical algorithms such as the multigrid method in the case of the elliptic equations. Careful attention is paid to the issue of code validation, and a key part of the thesis is the demonstration that, as the basic scale of finite difference discretization is reduced, our numerical code generates results that converge to a solution of the continuum system of PDEs as desired. The thesis concludes with a discussion of results from some initial explorations of the orbital dynamics of boson star binaries. In particular, we describe calculations in which motion of such a binary is followed for more than two orbital periods, which is a significant advance over previous studies. We also present results from computations in which the boson stars merge, and where there is evidence for black hole formation.

  3. Theory and algorithms of an efficient fringe analysis technology for automatic measurement applications.

    PubMed

    Juarez-Salazar, Rigoberto; Guerrero-Sanchez, Fermin; Robledo-Sanchez, Carlos

    2015-06-10

    Some advances in fringe analysis technology for phase computing are presented. A full scheme for phase evaluation, applicable to automatic applications, is proposed. The proposal consists of: a fringe-pattern normalization method, Fourier fringe-normalized analysis, generalized phase-shifting processing for inhomogeneous nonlinear phase shifts and spatiotemporal visibility, and a phase-unwrapping method by a rounding-least-squares approach. The theoretical principles of each algorithm are given. Numerical examples and an experimental evaluation are presented. PMID:26192836
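
    The paper's generalized phase-shifting processing handles inhomogeneous nonlinear shifts; as a hedged baseline for readers, the classical special case with four uniform quarter-period shifts recovers the wrapped phase directly (standard textbook algorithm, not the paper's method):

        # Classical four-step phase-shifting baseline: frames shifted by
        # 0, pi/2, pi, 3pi/2 give the wrapped phase via an arctangent.
        import numpy as np

        def four_step_phase(i1, i2, i3, i4):
            return np.arctan2(i4 - i2, i1 - i3)

        # Synthetic check against a known phase map
        phase = np.linspace(-3.0, 3.0, 256)          # stays inside (-pi, pi)
        frames = [1.0 + 0.5 * np.cos(phase + k * np.pi / 2) for k in range(4)]
        print(np.allclose(four_step_phase(*frames), phase))   # True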

  4. Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software

    SciTech Connect

    Jeff Linderoth

    2008-10-10

    The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.

  5. Investigation of an Immunoassay with Broad Specificity to Quinolone Drugs by Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Data Sets and Advanced Quantitative Structure-Activity Relationship Analysis.

    PubMed

    Chen, Jiahong; Lu, Ning; Shen, Xing; Tang, Qiushi; Zhang, Chijian; Xu, Jun; Sun, Yuanming; Huang, Xin-An; Xu, Zhenlin; Lei, Hongtao

    2016-04-01

    A polyclonal antibody against the quinolone drug pazufloxacin (PAZ) but with surprisingly broad specificity was raised to simultaneously detect 24 quinolones (QNs). The developed competitive indirect enzyme-linked immunosorbent assay (ciELISA) exhibited limits of detection (LODs) for the 24 QNs ranging from 0.45 to 15.16 ng/mL, below the maximum residue levels (MRLs). To better understand the obtained broad specificity, a genetic algorithm with linear assignment of hypermolecular alignment of data sets (GALAHAD) was used to generate the desired pharmacophore model and superimpose the QNs, and then advanced comparative molecular field analysis (CoMFA) and advanced comparative molecular similarity indices analysis (CoMSIA) models were employed to study the three-dimensional quantitative structure-activity relationship (3D QSAR) between QNs and the antibody. It was found that the QNs could interact with the antibody in different binding poses, and cross-reactivity was mainly positively correlated with the bulky substructure containing an electronegative atom at the 7-position, while it was negatively associated with the large bulky substructure at the 1-position of QNs. PMID:26982746

  6. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model in KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  7. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  8. Numerical anomalies mimicking physical effects

    NASA Astrophysics Data System (ADS)

    Menikoff, R.

    Numerical simulations of flows with shock waves typically use finite-difference shock-capturing algorithms. These algorithms give a shock a numerical width in order to generate the entropy increase that must occur across a shock wave. For algorithms in conservation form, steady-state shock waves are insensitive to the numerical dissipation because of the Hugoniot jump conditions. However, localized numerical errors occur when shock waves interact. Examples are the 'excess wall heating' in the Noh problem (shock reflected from rigid wall), errors when a shock impacts a material interface or an abrupt change in mesh spacing, and the start-up error from initializing a shock as a discontinuity. This class of anomalies can be explained by the entropy generation that occurs in the transient flow when a shock profile is formed or changed. The entropy error is localized spatially but under mesh refinement does not decrease in magnitude. Similar effects have been observed in shock tube experiments with partly dispersed shock waves. In this case, the shock has a physical width due to a relaxation process. An entropy anomaly from a transient shock interaction is inherent in the structure of the conservation equations for fluid flow. The anomaly can be expected to occur whenever heat conduction can be neglected and a shock wave has a non-zero width, whether the width is physical or numerical. Thus, the numerical anomaly from an artificial shock width mimics a real physical effect.

  9. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
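
    For reference, the three rules the article names take only a few lines each (a generic sketch, not the article's derivations):

        # Composite midpoint, trapezoidal, and Simpson's rules.
        import math

        def midpoint(f, a, b, n):
            h = (b - a) / n
            return h * sum(f(a + (i + 0.5) * h) for i in range(n))

        def trapezoid(f, a, b, n):
            h = (b - a) / n
            return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n))
                        + 0.5 * f(b))

        def simpson(f, a, b, n):                   # n must be even
            h = (b - a) / n
            s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                                  for i in range(1, n))
            return s * h / 3

        for rule in (midpoint, trapezoid, simpson):
            print(rule.__name__, rule(math.sin, 0.0, math.pi, 100))  # exact: 2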

  10. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  11. Automatic numerical integration methods for Feynman integrals through 3-loop

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.

  12. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  13. Advanced Numerical Modeling of Turbulent Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Dörnbrack, Andreas; Gerz, Thomas

    The present chapter introduces the method of computational simulation to predict and study turbulent atmospheric flows. This includes a description of the fundamental approach to computational simulation and the practical implementation using the technique of large-eddy simulation. In addition, selected contributions from IPA scientists to computational model development and various examples of applications are given. These examples include homogeneous turbulence, convective boundary layers, heated forest canopy, buoyant thermals, and large-scale flows with baroclinic wave instability.

  14. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong, et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
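
    For readers unfamiliar with the update being analyzed, the standard Boris push splits the electric impulse around an exact-magnitude magnetic rotation (textbook form, sketched here in normalized units with charge and mass absorbed into E and B):

        # Standard Boris push: half electric kick, magnetic rotation,
        # half electric kick.
        import numpy as np

        def boris_push(v, E, B, dt):
            v_minus = v + 0.5 * dt * E                 # first half kick
            t = 0.5 * dt * B
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)    # magnitude-preserving rotation
            return v_plus + 0.5 * dt * E               # second half kick

        # In a pure magnetic field the speed is preserved to round-off.
        v, B = np.array([1.0, 0.0, 0.2]), np.array([0.0, 0.0, 1.0])
        for _ in range(1000):
            v = boris_push(v, np.zeros(3), B, 0.1)
        print(np.linalg.norm(v))                       # ~sqrt(1.04), unchanged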

  15. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  16. Fully implicit particle-in-cell algorithm.

    NASA Astrophysics Data System (ADS)

    Kim, Hyung; Chacon, Luis

    2005-10-01

    Most current particle-in-cell (PIC) algorithms employ an explicit approach. Explicit PIC approaches are not only time-step limited for numerical stability, but also grid-intensive due to the so-called finite-grid instability [C. Birdsall and A. Langdon, Plasma Physics via Computer Simulation, McGraw-Hill, New York, 1985]. As a result, explicit PIC methods are very hardware-intensive, and become prohibitive for system-scale simulations even with modern supercomputers. To avoid such stringent time-step and grid-size requirements, the implicit moment method PIC approach (IM-PIC) was developed [J. Brackbill and D. Forslund, J. Comput. Phys. 46, 271 (1982)]. IM-PIC advances the required moments (density, current) using Chapman-Enskog-based fluid equations, and then advances the particles with such moments. While able to employ much larger time steps and grid spacings than explicit PIC methods, IM-PIC is limited in that the time-advanced moments and the particle moments are inconsistent, resulting in a lack of energy conservation. To remedy this, we propose here a fully implicit, fully nonlinear PIC approach (FI-PIC) in which the particles and the moments are converged simultaneously using Newton-Krylov techniques. This guarantees the consistency of moments and particles upon convergence. We will demonstrate the feasibility of the concept using a purely electrostatic Vlasov-Poisson model, and will show its effectiveness with several fully kinetic examples.

  17. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. I - The dynamics of time discretization and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1991-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  18. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. Part 1: The ODE connection and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1990-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  19. Numerical methods for engine-airframe integration

    SciTech Connect

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  20. Extending HPF for advanced data parallel applications

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1994-01-01

    The stated goal of High Performance Fortran (HPF) was to 'address the problems of writing data parallel programs where the distribution of data affects performance'. After examining the current version of the language we are led to the conclusion that HPF has not fully achieved this goal. While the basic distribution functions offered by the language - regular block, cyclic, and block cyclic distributions - can support regular numerical algorithms, advanced applications such as particle-in-cell codes or unstructured mesh solvers cannot be expressed adequately. We believe that this is a major weakness of HPF, significantly reducing its chances of becoming accepted in the numeric community. The paper discusses the data distribution and alignment issues in detail, points out some flaws in the basic language, and outlines possible future paths of development. Furthermore, we briefly deal with the issue of task parallelism and its integration with the data parallel paradigm of HPF.

  1. Numerical inversion of finite Toeplitz matrices and vector Toeplitz matrices

    NASA Technical Reports Server (NTRS)

    Bareiss, E. H.

    1969-01-01

    Numerical technique increases the efficiency of numerical methods involving Toeplitz matrices by reducing the number of multiplications required for an N-order Toeplitz matrix from N-cubed to N-squared. Some efficient algorithms are given.

  2. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
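
    A common concrete instance of this recipe (our hedged sketch, not the article's exact construction) majorizes an epsilon-perturbed L1 penalty by a quadratic, so that each MM step reduces to a ridge-like linear solve:

        # MM sketch for penalized least squares: majorize lam*|b| at the
        # current iterate b0 by lam*b^2/(2(|b0|+eps)) + const, then solve.
        import numpy as np

        def mm_penalized_ls(X, y, lam, eps=1e-6, iters=200):
            XtX, Xty = X.T @ X, X.T @ y
            beta = np.zeros(X.shape[1])
            for _ in range(iters):
                w = lam / (np.abs(beta) + eps)        # majorizer weights
                beta = np.linalg.solve(XtX + np.diag(w), Xty)
            return beta

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 5))
        y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=100)
        print(np.round(mm_penalized_ls(X, y, lam=5.0), 2))   # near-sparse fit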

  3. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit Adaptive Meshing Algorithm Library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition, and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing-front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  4. Exploring the use of numerical relativity waveforms in burst analysis of precessing black hole mergers

    SciTech Connect

    Fischetti, Sebastian; Cadonati, Laura; Mohapatra, Satyanarayan R. P.; Healy, James; London, Lionel; Shoemaker, Deirdre

    2011-02-15

    Recent years have witnessed tremendous progress in numerical relativity and an ever improving performance of ground-based interferometric gravitational wave detectors. In preparation for the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO) and a new era in gravitational wave astronomy, the numerical relativity and gravitational wave data analysis communities are collaborating to ascertain the most useful role for numerical relativity waveforms in the detection and characterization of binary black hole coalescences. In this paper, we explore the detectability of equal mass, merging black hole binaries with precessing spins and total mass M{sub T} ∈ [80,350] M{sub {center_dot}}, using numerical relativity waveforms and templateless search algorithms designed for gravitational wave bursts. In particular, we present a systematic study using waveforms produced by the MayaKranc code that are added to colored, Gaussian noise and analyzed with the Omega burst search algorithm. Detection efficiency is weighed against the orientation of one of the black holes' spin axes. We find a strong correlation between the detection efficiency and the radiated energy and angular momentum, and that the inclusion of the l=2, m={+-}1, 0 modes, at a minimum, is necessary to account for the full dynamics of precessing systems.

  5. Efficient Nonlinear Programming Algorithms for Chemical Process Control and Operations

    NASA Astrophysics Data System (ADS)

    Biegler, Lorenz T.

    Optimization is applied in numerous areas of chemical engineering including the development of process models from experimental data, design of process flowsheets and equipment, planning and scheduling of chemical process operations, and the analysis of chemical processes under uncertainty and adverse conditions. These off-line tasks require the solution of nonlinear programs (NLPs) with detailed, large-scale process models. Recently, these tasks have been complemented by time-critical, on-line optimization problems with differential-algebraic equation (DAE) process models that describe process behavior over a wide range of operating conditions, and must be solved sufficiently quickly. This paper describes recent advances in this area especially with dynamic models. We outline large-scale NLP formulations and algorithms as well as NLP sensitivity for on-line applications, and illustrate these advances on a commercial-scale low density polyethylene (LDPE) process.

  6. Final Progress Report submitted via the DOE Energy Link (E-Link) in June 2009 [Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

    SciTech Connect

    Fox-Rabinovitz, M; Cote, J

    2009-10-09

    The joint U.S.-Canadian project has been devoted to: (a) decadal climate studies using developed state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched-grid approach is that it allows us to preserve the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment of the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phases 1 and 2) based on these research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for the regional climate modeling and broader modeling communities. The U.S. SGMIP simulations have been produced using SciDAC ORNL supercomputers. The results of the successful SGMIP multi-model ensemble simulations of the U.S. climate are available at the SGMIP web site (http://essic.umd.edu/~foxrab/sgmip.html) and through the link to the WMO/WCRP/WGNE web site: http://collaboration.cmc.ec.gc.ca/science/wgne. Collaborations with other international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. This project reflects a trend in the modeling and broader communities to move towards regional and sub-regional assessments and

  7. Probability tree algorithm for general diffusion processes

    NASA Astrophysics Data System (ADS)

    Ingber, Lester; Chen, Colleen; Mondescu, Radu Paul; Muzzall, David; Renedo, Marco

    2001-11-01

    Motivated by path-integral numerical solutions of diffusion processes, PATHINT, we present a tree algorithm, PATHTREE, which permits extremely fast accurate computation of probability distributions of a large class of general nonlinear diffusion processes.
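
    PATHTREE's internals are not given in the record; the generic idea of propagating a probability distribution on a recombining tree can be sketched as follows (hypothetical illustration; the drift enters through the up-probability):

        # Generic probability-tree sketch for dx = f(x) dt + g(x) dW:
        # steps of size g*sqrt(dt), with the drift biasing the
        # up-probability p_up = (1 + f*sqrt(dt)/g)/2.
        import math

        def tree_distribution(x0, f, g, dt, n_steps):
            dist = {x0: 1.0}
            for _ in range(n_steps):
                new = {}
                for x, prob in dist.items():
                    dx = g(x) * math.sqrt(dt)
                    p_up = min(max(0.5 * (1 + f(x) * math.sqrt(dt) / g(x)), 0.0), 1.0)
                    for x2, p in ((x + dx, p_up), (x - dx, 1.0 - p_up)):
                        key = round(x2, 12)           # merge recombining nodes
                        new[key] = new.get(key, 0.0) + prob * p
                dist = new
            return dist

        # Mean-reverting drift toward zero, unit noise
        dist = tree_distribution(0.0, lambda x: -x, lambda x: 1.0, 0.01, 50)
        print(len(dist), sum(x * p for x, p in dist.items()))  # 51 nodes, mean ~0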

  8. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f{sub i}(x) − t ≤ 0 for all i is examined. An active set strategy is designed with three classes of functions: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.

  9. Spurious Numerical Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1995-01-01

    Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.

  10. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  11. Accelerating Numerical Calculation on the Cray XMT

    SciTech Connect

    Scherrer, Chad; Shippert, Timothy R.; Marquez, Andres

    2009-05-25

    The Cray XMT provides hardware support for parallel algorithms that would be communication- or memory-bound on other machines. Unfortunately, even if an algorithm meets these criteria, performance suffers if the algorithm is too numerically intensive. We present a lookup-based approach that achieves a significant performance advantage over explicit calculation. We describe an approach to balancing memory bandwidth against on-chip floating point capabilities, leading to further speedup. Finally, we provide table lookup algorithms for a number of common functions.
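
    The XMT-specific bandwidth balancing is beyond a short example, but the basic lookup-plus-interpolation idea can be sketched generically (our sketch, not the paper's implementation):

        # Replace an expensive function with a precomputed table and
        # linear interpolation, trading floating-point work for memory traffic.
        import math

        class TableLookup:
            def __init__(self, fn, lo, hi, n=4096):
                self.lo = lo
                self.step = (hi - lo) / (n - 1)
                self.table = [fn(lo + i * self.step) for i in range(n)]

            def __call__(self, x):
                t = (x - self.lo) / self.step
                i = min(int(t), len(self.table) - 2)
                frac = t - i
                return self.table[i] + frac * (self.table[i + 1] - self.table[i])

        fast_exp = TableLookup(math.exp, 0.0, 1.0)
        print(fast_exp(0.5), math.exp(0.5))   # agree to roughly 1e-8 at n=4096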

  12. Numerical computation of transonic flow governed by the full-potential equation

    NASA Technical Reports Server (NTRS)

    Holst, T. L.

    1983-01-01

    Numerical solution techniques for solving transonic flow fields governed by the full potential equation are discussed. In a general sense relaxation schemes suitable for the numerical solution of elliptic partial differential equations are presented and discussed with emphasis on transonic flow applications. The presentation can be divided into two general categories: An introductory treatment of the basic concepts associated with the numerical solution of elliptic partial differential equations and a more advanced treatment of current procedures used to solve the full potential equation for transonic flow fields. The introductory material is presented for completeness and includes a brief introduction (Chapter 1), governing equations (Chapter 2), classical relaxation schemes (Chapter 3), and early concepts regarding transonic full potential equation algorithms (Chapter 4).

  13. Sequential and Parallel Algorithms for Spherical Interpolation

    NASA Astrophysics Data System (ADS)

    De Rossi, Alessandra

    2007-09-01

    Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
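
    Of the two methods named, the Inverse Distance Weighted part is simple enough to sketch (a minimal version using great-circle distance; the spherical radial basis function interpolant is not shown):

        # Inverse-distance-weighted interpolation on the sphere using
        # great-circle (central-angle) distances; angles in radians.
        import numpy as np

        def great_circle(lat1, lon1, lat2, lon2):
            c = (np.sin(lat1) * np.sin(lat2)
                 + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
            return np.arccos(np.clip(c, -1.0, 1.0))

        def idw_sphere(lats, lons, vals, lat_q, lon_q, power=2.0):
            d = great_circle(lats, lons, lat_q, lon_q)
            if np.any(d < 1e-12):              # query coincides with a data point
                return vals[np.argmin(d)]
            w = d ** -power
            return np.sum(w * vals) / np.sum(w)

        lats = np.radians([0.0, 45.0, -30.0, 10.0])
        lons = np.radians([0.0, 90.0, 180.0, -60.0])
        vals = np.array([1.0, 2.0, 3.0, 4.0])
        print(idw_sphere(lats, lons, vals, np.radians(20.0), np.radians(30.0)))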

  14. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands, and they have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

  15. MFIX documentation numerical technique

    SciTech Connect

    Syamlal, M.

    1998-01-01

    MFIX (Multiphase Flow with Interphase eXchanges) is a general-purpose hydrodynamic model for describing chemical reactions and heat transfer in dense or dilute fluid-solids flows, which typically occur in energy conversion and chemical processing reactors. The calculations give time-dependent information on pressure, temperature, composition, and velocity distributions in the reactors. The theoretical basis of the calculations is described in the MFIX Theory Guide. Installation of the code, setting up of a run, and post-processing of results are described in the MFIX User's Manual. Work was started in April 1996 to increase the execution speed and accuracy of the code, which has resulted in MFIX 2.0. To improve the speed of the code, the old algorithm was replaced by a more implicit algorithm. In the different test cases conducted, the new version runs 3 to 30 times faster than the old version. To increase the accuracy of the computations, second-order accurate discretization schemes were included in MFIX 2.0. Bubbling fluidized bed simulations conducted with a second-order scheme show that the predicted bubble shape is rounded, unlike the (unphysical) pointed shape predicted by the first-order upwind scheme. This report describes the numerical technique used in MFIX 2.0.

  16. Numerical and measured data from the 3D salt canopy physical modeling project

    SciTech Connect

    Bradley, C.; House, L.; Fehler, M.; Pearson, J.; TenCate, J.; Wiley, R.

    1997-11-01

    The evolution of salt structures in the Gulf of Mexico has been shown to provide a mechanism for the trapping of significant hydrocarbon reserves. Most of these structures have complex geometries relative to the surrounding sedimentary layers. This aspect, in addition to the high velocities within the salt, tends to scatter and defocus seismic energy and makes imaging of subsalt lithology extremely difficult. An ongoing program, the SEG/EAEG modeling project (Aminzadeh et al. 1994a; Aminzadeh et al. 1994b; Aminzadeh et al. 1995), and a follow-up project funded as part of the Advanced Computational Technology Initiative (ACTI) (House et al. 1996) have sought to investigate problems with imaging beneath complex salt structures using numerical modeling and, more recently, construction of a physical model patterned after the numerical subsalt model (Wiley and McKnight, 1996). To date, no direct comparison of the numerical and physical aspects of these models has been attempted. We present the results of forward modeling a numerical realization of the 3D salt canopy physical model with the French Petroleum Institute (IFP) acoustic finite difference algorithm used in the numerical subsalt tests. We compare the results from the physical salt canopy model, the acoustic modeling of the physical/numerical model, and the original numerical SEG/EAEG Salt Model. We will be testing the sensitivity of migration to the presence of converted shear waves and acquisition geometry.

  17. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  18. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W.

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms, written in Id, that is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature, as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine with distinctive computational characteristics: the Fast Fourier Transform, which combines computational parallelism with data dependences between the butterfly shuffles.
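
    For reference, the butterfly structure in question can be written out in a few lines; this is a plain Python analogue of the radix-2 transform, not the Id implementation studied in the paper:

        import numpy as np

        def fft(x):
            # Radix-2 Cooley-Tukey: the final combine step is the "butterfly",
            # whose shuffles create the data dependences discussed above.
            n = len(x)
            if n == 1:
                return x
            even, odd = fft(x[0::2]), fft(x[1::2])
            tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
            return np.concatenate([even + tw, even - tw])

        x = np.random.default_rng(1).standard_normal(64).astype(complex)
        assert np.allclose(fft(x), np.fft.fft(x))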

  19. A Comparison of Two Skip Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Putnam, Zachary R.

    2007-01-01

    The Orion capsule vehicle will have a Lift-to-Drag ratio (L/D) of 0.3-0.35. For an Apollo-like direct entry into the Earth's atmosphere from a lunar return trajectory, this L/D will give the vehicle a maximum range of about 2500 nm and a maximum crossrange of 216 nm. In order to fly longer ranges, the vehicle lift must be used to loft the trajectory such that the aerodynamic forces are decreased. A Skip-Trajectory results if the vehicle leaves the sensible atmosphere and a second entry occurs downrange of the atmospheric exit point. The Orion capsule is required to have landing site access (either on land or in water) inside the Continental United States (CONUS) for lunar returns anytime during the lunar month. This requirement means the vehicle must be capable of flying ranges of at least 5500 nm. For the L/D of the vehicle, this is only possible with the use of a guided Skip-Trajectory. A skip entry guidance algorithm is necessary to achieve this requirement. Two skip entry guidance algorithms have been developed: the Numerical Skip Entry Guidance (NSEG) algorithm was developed at NASA/JSC, and PredGuid was developed at Draper Laboratory. A comparison of these two algorithms is presented in this paper. Each algorithm has been implemented in a high-fidelity, 6-degree-of-freedom simulation called the Advanced NASA Technology Architecture for Exploration Studies (ANTARES). NASA and Draper engineers have completed several Monte Carlo analyses in order to compare the performance of each algorithm in various stress states. Each algorithm has been tested for entry-to-target ranges to include direct entries and skip entries of varying length. Dispersions have been included on the initial entry interface state, vehicle mass properties, vehicle aerodynamics, atmosphere, and the Reaction Control System (RCS). Performance criteria include miss distance to the target, RCS fuel usage, maximum g-loads and heat rates for the first and second entry, total heat load, and control

  20. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm and as easy to use as the BP algorithm. It can therefore be applied, with better performance, in any situation where the standard online BP algorithm is applicable. PMID:17131658
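
    The preservation/adaptation trade-off can be sketched abstractly as follows; this is a schematic reading of the idea, not the PIL update actually derived in the paper, and the per-parameter preservation weights d_k are an assumption of the sketch:

        import numpy as np

        # Schematic: minimize 0.5 * sum_k d_k * (w_k - w_prev_k)**2 + eta * E(w),
        # a weighted preservation measure plus an adaptation measure. Linearizing
        # E about w_prev and setting the gradient to zero gives a gradient step
        # with parameter-specific step sizes eta / d_k.
        def pil_like_step(w_prev, grad_E, d, eta):
            return w_prev - (eta / d) * grad_E(w_prev)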

  1. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in geophysics. Numerical examples for a single equation, covering symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]. For these problems, the DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the advances of this method as applied to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References: [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina, "Non-overlapping discretization methods for partial differential equations", Numer Meth Part D E (2013). [2] I. Herrera and I. Contreras, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).

  2. An accurate product SVD (singular value decomposition) algorithm

    SciTech Connect

    Bojanczyk, A.W.; Luk, F.T.; Ewerbring, M.; Van Dooren, P.

    1990-01-01

    In this paper, we propose a new algorithm for computing a singular value decomposition of a product of three matrices. We show that our algorithm is numerically desirable in that all relevant residual elements will be numerically small. 12 refs., 1 tab.

  3. The representation of numerical magnitude

    PubMed Central

    Brannon, Elizabeth M

    2006-01-01

    The combined efforts of many fields are advancing our understanding of how number is represented. Researchers studying numerical reasoning in adult humans, developing humans and non-human animals are using a suite of behavioral and neurobiological methods to uncover similarities and differences in how each population enumerates and compares quantities to identify the neural substrates of numerical cognition. An important picture emerging from this research is that adult humans share with non-human animals a system for representing number as language-independent mental magnitudes and that this system emerges early in development. PMID:16546373

  4. Stabilizing the Richardson eigenvector algorithm by controlling chaos

    SciTech Connect

    He, S.

    1997-03-01

    By viewing the operations of the Richardson purification algorithm as a discrete-time dynamical process, we propose a method to overcome the instability of this eigenvector algorithm by controlling chaos. We present theoretical analysis and numerical results on the behavior and performance of the stabilized algorithm. © 1997 American Institute of Physics.
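
    The underlying iteration, viewed as a discrete-time dynamical process, can be sketched as follows (a generic normalized eigenvector iteration in Python; the chaos-control stabilization itself is not reproduced here):

        import numpy as np

        def eigenvector_iteration(A, n_iter=1000):
            rng = np.random.default_rng(0)
            v = rng.standard_normal(A.shape[0])
            v /= np.linalg.norm(v)
            for _ in range(n_iter):              # the map v -> A v / ||A v||
                v = A @ v
                v /= np.linalg.norm(v)
            return v

        A = np.diag([3.0, 2.0, 1.0])
        print(eigenvector_iteration(A))          # ~ [1, 0, 0], the dominant eigenvector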

  5. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.

  6. A quadratic weight selection algorithm. [for optimal flight control

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.

    1981-01-01

    A new numerical algorithm is presented which determines a positive semi-definite state weighting matrix in the linear-quadratic optimal control design problem. The algorithm chooses the weighting matrix by placing closed-loop eigenvalues and eigenvectors near desired locations using optimal feedback gains. A simplified flight control design example is used to illustrate the algorithm's capabilities.

  7. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    This paper presents a reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. The algorithm has been included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  8. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach, through the use of an accurate Pade series approximation, does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  9. Numerical taxonomy on data: Experimental results

    SciTech Connect

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. The first positive result for numerical taxonomy showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T L∞(T - D), then it is possible to construct a tree T such that L∞(T - D) ≤ 3e; that is, it gave a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  10. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method. PMID:19840985

  11. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  12. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    SciTech Connect

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  13. Propagation of numerical noise in particle-in-cell tracking

    NASA Astrophysics Data System (ADS)

    Kesting, Frederik; Franchetti, Giuliano

    2015-11-01

    Particle-in-cell (PIC) is the most widely used algorithm for performing self-consistent tracking of intense charged-particle beams. It is based on depositing macroparticles on a grid and subsequently solving the Poisson equation on that grid. It is well known that PIC algorithms have intrinsic limitations, as they introduce numerical noise. Although not significant for short-term tracking, this becomes important in simulations for circular machines over millions of turns, as it may induce artificial diffusion of the beam. In this work, we present a model of the numerical noise induced by PIC algorithms, and discuss its influence on particle dynamics. The combined effect of particle tracking and of the noise created by PIC algorithms leads to correlated or decorrelated numerical noise. For decorrelated numerical noise we derive a scaling law for the simulation parameters, allowing an estimate of the artificial emittance growth. Lastly, the effect of correlated numerical noise is discussed, and a mitigation strategy is proposed.
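
    The grid noise at the root of this effect is easy to reproduce (a one-dimensional NumPy sketch with cloud-in-cell deposition; the beam and grid parameters are arbitrary):

        import numpy as np

        def cic_density(x, n_cells):
            # Deposit unit-charge macroparticles with cloud-in-cell weighting.
            dx = 1.0 / n_cells
            i = np.floor(x / dx).astype(int) % n_cells
            w = x / dx - np.floor(x / dx)           # fractional position in cell
            rho = np.zeros(n_cells)
            np.add.at(rho, i, 1.0 - w)              # share charge between the
            np.add.at(rho, (i + 1) % n_cells, w)    # two nearest grid points
            return rho / (len(x) * dx)              # exact uniform density is 1

        rng = np.random.default_rng(0)
        for n_p in (10**3, 10**5):
            print(n_p, cic_density(rng.random(n_p), 64).std())  # noise ~ n_p**-0.5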

  14. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Conners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  15. Numerical Methods for Forward and Inverse Problems in Discontinuous Media

    SciTech Connect

    Chartier, Timothy P.

    2011-03-08

    The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact, and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are grounded in strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. These results also point toward the medical advances that may become possible with the software developed. Research to extend the scope of algebraic multigrid has focused on several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has a very low computational cost per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of these efforts could ultimately have direct use and impact for researchers in a wide variety of applications, including astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so.

  16. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  17. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
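
    A toy version of the additive variant for a linear model problem (a NumPy sketch with two overlapping subdomains; the damping factor and overlap are arbitrary choices):

        import numpy as np

        # Damped additive Schwarz for -u'' = 1 on (0,1) with u(0) = u(1) = 0.
        n = 99
        h = 1.0 / (n + 1)
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        f = np.ones(n)
        u = np.zeros(n)
        subdomains = [slice(0, 60), slice(40, n)]        # overlap of 20 points
        for _ in range(50):
            r = f - A @ u
            du = np.zeros(n)
            for s in subdomains:
                du[s] += np.linalg.solve(A[s, s], r[s])  # independent local solves
            u += 0.5 * du                                # damped additive update
        print(abs(u[n // 2] - 0.125))   # exact midpoint value is 1/8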

  18. Neural network temperature and moisture retrieval algorithm validation for AIRS/AMSU and CrIS/ATMS

    NASA Astrophysics Data System (ADS)

    Milstein, Adam B.; Blackwell, William J.

    2016-02-01

    We present comprehensive validation results for the recently introduced neural network technique for retrieving vertical profiles of atmospheric temperature and water vapor from spaceborne microwave and hyperspectral infrared sounding instruments. This technique is currently in operational use as the first guess for the NASA Atmospheric Infrared Sounder (AIRS) Science Team Version 6 retrieval algorithm. The validation incorporates a variety of data sources, independent of the algorithm training set, as ground truth, including global numerical weather analyses generated by the European Centre for Medium-Range Weather Forecasts, synoptic radiosonde measurements, and radiosondes dedicated to validation. The results demonstrate significant improvements in retrieval error over the previous AIRS/Advanced Microwave Sounding Unit (AMSU) operational sounding retrievals, with comparable vertical resolution. We also present initial neural network retrieval results using measurements from the Cross-Track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) currently flying on the Suomi National Polar-orbiting Partnership satellite.

  19. Spatial search algorithms on Hanoi networks

    NASA Astrophysics Data System (ADS)

    Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan

    2013-01-01

    We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role in speeding up the search algorithm. If we do not implement Tulsi's method, we can still improve the algorithm's efficiency by choosing a non-Groverian coin. Our conclusions are based on numerical implementations.

  20. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of the signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that an NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
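
    The greedy matching step can be sketched as follows (a plain matching-pursuit loop over a Lorentzian dictionary in NumPy; the grid of centers and widths is illustrative, and the orthogonalization step of full OMP is omitted for brevity):

        import numpy as np

        def lorentzian(freqs, f0, gamma):              # normalized Lorentzian atom
            atom = gamma / ((freqs - f0) ** 2 + gamma ** 2)
            return atom / np.linalg.norm(atom)

        freqs = np.linspace(0.0, 1.0, 512)
        signal = 3.0 * lorentzian(freqs, 0.3, 0.01) + 1.5 * lorentzian(freqs, 0.7, 0.02)

        centers, widths = np.linspace(0.0, 1.0, 256), (0.005, 0.01, 0.02, 0.04)
        D = np.array([lorentzian(freqs, c, g) for c in centers for g in widths])

        residual = signal.copy()
        for _ in range(2):                 # match one Lorentzian peak per iteration
            corr = D @ residual
            k = np.argmax(np.abs(corr))
            residual = residual - corr[k] * D[k]
        print(np.linalg.norm(residual))    # small residual after two matched peaks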

  1. Numerical vorticity creation based on impulse conservation.

    PubMed Central

    Summers, D M; Chorin, A J

    1996-01-01

    The problem of creating solenoidal vortex elements to satisfy no-slip boundary conditions in Lagrangian numerical vortex methods is solved through the use of impulse elements at walls and their subsequent conversion to vortex loops. The algorithm is not uniquely defined, due to the gauge freedom in the definition of impulse; the numerically optimal choice of gauge remains to be determined. Two different choices are discussed, and an application to flow past a sphere is sketched. PMID:11607636

  2. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for the parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object-based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
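
    The full replication technique, for instance, amounts to giving each thread a private copy of the reduction object and merging the copies at the end (a Python sketch; the counting task stands in for a real data mining update):

        from collections import Counter
        from concurrent.futures import ThreadPoolExecutor

        def count_chunk(chunk):
            local = Counter()               # private copy of the reduction object
            for record in chunk:
                local[record % 10] += 1     # stand-in for a data mining update
            return local

        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            partials = list(pool.map(count_chunk, chunks))
        total = Counter()
        for p in partials:                  # merge phase: no locks were needed
            total.update(p)                 # during the parallel scan
        print(total[0])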

  3. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  4. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been used intensively for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied by using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, fcm, som networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
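
    As a minimal example of the evolutionary approach (a SciPy sketch using differential evolution, one of the algorithm families compared above), clustering can be cast as continuous optimization over the cluster centers with the within-cluster sum of squared errors as the objective:

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
        k = 2

        def sse(flat_centers):                 # within-cluster sum of squared errors
            centers = flat_centers.reshape(k, 2)
            d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return d2.min(axis=1).sum()        # each point joins its nearest center

        result = differential_evolution(sse, [(-1.0, 3.0)] * (k * 2), seed=0)
        print(result.x.reshape(k, 2))          # ~ the two cluster means (0,0), (2,2)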

  5. Numerical propagator through PIAA optics

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent; Shaklan, Stuart; Give'On, Amir; Krist, John

    2009-08-01

    In this communication we address two outstanding issues pertaining to the modeling of PIAA coronagraphs: accurate numerical propagation of edge effects, and fast propagation of mid-spatial frequencies for wavefront control. In order to solve them, we first derive a quadratic approximation of the Huygens wavelets that allows us to develop an angular spectrum propagator for pupil remapping. Using this result we introduce an independent method to verify the ultimate contrast floor, due to edge propagation effects, of the PIAA units currently being tested in various testbeds. We then delve into the details of a novel fast algorithm, based on the recognition that angular spectrum computations with a pre-apodised system are computationally light. When used for the propagation of mid-spatial frequencies, such a fast propagator will ultimately allow us to develop robust wavefront control algorithms with DMs located before the pupil remapping mirrors.
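
    The angular spectrum building block itself is standard and compact (a NumPy sketch of scalar propagation over a distance z; the aperture, wavelength, and grid are arbitrary choices, not the paper's PIAA geometry):

        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            # Propagate a sampled scalar field by multiplying its plane-wave
            # spectrum by the exact propagation phase exp(i kz z).
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            f2 = fx[:, None] ** 2 + fx[None, :] ** 2
            kz = 2j * np.pi * np.sqrt((1.0 / wavelength**2 - f2).astype(complex))
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz * z))

        n, dx, lam = 256, 4e-6, 633e-9           # ~1 mm window, 633 nm light
        x = (np.arange(n) - n / 2) * dx
        aperture = ((x[:, None] ** 2 + x[None, :] ** 2) < (3e-4) ** 2).astype(complex)
        out = angular_spectrum(aperture, lam, dx, z=0.01)
        print(np.abs(out).max())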

  6. Numerical study of a quasi-hydrodynamic system of equations for flow computation at small mach numbers

    NASA Astrophysics Data System (ADS)

    Balashov, V. A.; Savenkov, E. B.

    2015-10-01

    The applicability of numerical algorithms based on a quasi-hydrodynamic system of equations for computing viscous heat-conducting compressible gas flows at Mach numbers M = 10^-2 to 10^-1 is studied numerically. The numerical algorithm is briefly described, and the results obtained for a number of two- and three-dimensional test problems are presented and compared with earlier numerical data.

  7. Advanced rotorcraft control using parameter optimization

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output feedback controller that minimizes a finite time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed loop eigensystem. This approach, through the use of an accurate Pade series approximation, does not require the closed loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low order controllers. Usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed loop system under an arbitrary controller design initialization and during the numerical search.

  8. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error, and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  9. Implementation and evaluation of two helical CT reconstruction algorithms in CIVA

    NASA Astrophysics Data System (ADS)

    Banjak, H.; Costin, M.; Vienne, C.; Kaftandjian, V.

    2016-02-01

    The large majority of industrial CT systems reconstruct the 3D volume by using an acquisition on a circular trajectory. However, when inspecting long objects which are highly anisotropic, this scanning geometry creates severe artifacts in the reconstruction. For this reason, the use of an advanced CT scanning method like helical data acquisition is an efficient way to address this aspect, known as the long-object problem. Recently, several analytically exact and quasi-exact inversion formulas for helical cone-beam reconstruction have been proposed. Among them, we identified two algorithms of interest for our case. These algorithms are exact and of filtered back-projection structure. In this work we implemented the filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms of Zou and Pan (2004). For performance evaluation, we present a numerical comparison of the two selected algorithms with the helical FDK algorithm using both complete (noiseless and noisy) and truncated data generated by CIVA (the simulation platform for non-destructive testing techniques developed at CEA).

  10. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable-density aquifer models are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method offers important advances compared to previous methods, such as precise evaluation of the Pareto set and alleviation of the propagation of errors due to surrogate model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solutions obtained by alternative algorithms.
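
    The pre-screening step described above reduces, in outline, to ranking a pool of offspring with the cheap surrogate and passing only the best few to the exact model (a generic Python sketch; surrogate and exact_model are placeholders for the trained network and the variable-density simulation):

        import numpy as np

        def screen_offspring(pool, surrogate, exact_model, keep=2):
            scores = np.array([surrogate(o) for o in pool])   # cheap ranking
            best = np.argsort(scores)[:keep]                  # lower cost is better
            return [(pool[i], exact_model(pool[i])) for i in best]

        rng = np.random.default_rng(0)
        pool = [rng.random(3) for _ in range(20)]
        exact = lambda x: float(np.sum((x - 0.5) ** 2))       # expensive in reality
        cheap = lambda x: exact(x) + rng.normal(0.0, 0.01)    # noisy approximation
        print(screen_offspring(pool, cheap, exact))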

  11. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    NASA Astrophysics Data System (ADS)

    Lehe, Rémi; Kirchen, Manuel; Andriyash, Igor A.; Godfrey, Brendan B.; Vay, Jean-Luc

    2016-06-01

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion, in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts, that would otherwise affect the physics in a standard PIC algorithm - including the zero-order numerical Cherenkov effect.

  12. A Hybrid Parallel Preconditioning Algorithm For CFD

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Tang, Wei-Pai; Kwak, Dochan (Technical Monitor)

    1995-01-01

    A new hybrid preconditioning algorithm will be presented which combines the favorable attributes of incomplete lower-upper (ILU) factorization with the favorable attributes of the approximate inverse method recently advocated by numerous researchers. The quality of the preconditioner is adjustable and can be increased at the cost of additional computation while at the same time the storage required is roughly constant and approximately equal to the storage required for the original matrix. In addition, the preconditioning algorithm suggests an efficient and natural parallel implementation with reduced communication. Sample calculations will be presented for the numerical solution of multi-dimensional advection-diffusion equations. The matrix solver has also been embedded into a Newton algorithm for solving the nonlinear Euler and Navier-Stokes equations governing compressible flow. The full paper will show numerous examples in CFD to demonstrate the efficiency and robustness of the method.
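
    The role of such a preconditioner is easy to demonstrate with standard tools (a SciPy sketch using plain incomplete LU on a 1D advection-diffusion matrix; the paper's hybrid ILU/approximate-inverse scheme is not publicly packaged, so plain ILU stands in for it):

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, gmres, spilu

        # Central discretization of -u'' + 10 u' on (0,1), zero Dirichlet BCs.
        n = 200
        h = 1.0 / (n + 1)
        A = diags([(-1 / h**2 - 5 / h) * np.ones(n - 1),
                   (2 / h**2) * np.ones(n),
                   (-1 / h**2 + 5 / h) * np.ones(n - 1)], (-1, 0, 1)).tocsc()
        b = np.ones(n)

        ilu = spilu(A, drop_tol=1e-4)                   # incomplete factorization
        M = LinearOperator((n, n), matvec=ilu.solve)    # preconditioner action
        x, info = gmres(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))          # info == 0 on convergence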

  13. Advances in Defocused-Beam Electron-Probe Microanalysis

    NASA Astrophysics Data System (ADS)

    Carpenter, P. K.; Zeigler, R. A.; Jolliff, B. L.

    2010-03-01

    Advances in defocused-beam electron-probe microanalysis (DBA) are presented, including an Excel VBA algorithm that uses a polynomial alpha-factor correction coupled with a catanorm procedure to correct DBA data.

  14. PyTrilinos: Recent Advances in the Python Interface to Trilinos

    SciTech Connect

    Spotz, William F.

    2012-01-01

    PyTrilinos is a set of Python interfaces to compiled Trilinos packages. This collection supports serial and parallel dense linear algebra, serial and parallel sparse linear algebra, direct and iterative linear solution techniques, algebraic and multilevel preconditioners, nonlinear solvers and continuation algorithms, eigensolvers and partitioning algorithms. Also included are a variety of related utility functions and classes, including distributed I/O, coloring algorithms and matrix generation. PyTrilinos vector objects are compatible with the popular NumPy Python package. As a Python front end to compiled libraries, PyTrilinos takes advantage of the flexibility and ease of use of Python, and the efficiency of the underlying C++, C and Fortran numerical kernels. This paper covers recent, previously unpublished advances in the PyTrilinos package.

  15. Introduction to systolic algorithms and architectures

    SciTech Connect

    Bentley, J.L.; Kung, H.T.

    1983-01-01

    The authors survey the class of systolic special-purpose computer architectures and algorithms, which are particularly well-suited for implementation in very large scale integrated circuitry (VLSI). They give a brief introduction to systolic arrays for a reader with a broad technical background and some experience in using a computer, but who is not necessarily a computer scientist. In addition they briefly survey the technological advances in VLSI that led to the development of systolic algorithms and architectures. 38 references.

  16. Intelligent perturbation algorithms to space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.

  17. On Numerical Methods For Hypersonic Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Yee, H. C.; Sjogreen, B.; Shu, C. W.; Wang, W.; Magin, T.; Hadjadj, A.

    2011-05-01

    Proper control of numerical dissipation in numerical methods beyond the standard shock-capturing dissipation at discontinuities is an essential element for accurate and stable simulation of hypersonic turbulent flows, including combustion, and thermal and chemical nonequilibrium flows. Unlike rapidly developing shock interaction flows, turbulence computations involve long time integrations. Improper control of numerical dissipation from one time step to another would be compounded over time, resulting in the smearing of turbulent fluctuations to an unrecognizable form. Hypersonic turbulent flows around re-entry space vehicles involve mixed steady strong shocks and turbulence with unsteady shocklets that pose added computational challenges. Stiffness of the source terms and material mixing in combustion pose yet other types of numerical challenges. A low-dissipative, high-order, well-balanced scheme, which can preserve certain non-trivial steady solutions of the governing equations exactly, may help minimize some of these difficulties. For stiff reactions it is well known that the wrong propagation speed of discontinuities occurs due to the under-resolved numerical solutions in both space and time. Schemes to improve the wrong propagation speed of discontinuities for systems of stiff reacting flows remain a challenge for algorithm development. Some of the recent algorithm developments for direct numerical simulations (DNS) and large eddy simulations (LES) for the subject physics, including the aforementioned numerical challenges, will be discussed.

  18. Ten years of Nature Physics: Numerical models come of age

    NASA Astrophysics Data System (ADS)

    Gull, E.; Millis, A. J.

    2015-10-01

    When Nature Physics celebrated 20 years of high-temperature superconductors, numerical approaches were on the periphery. Since then, new ideas implemented in new algorithms are leading to new insights.

  19. Extremal polynomials and methods of optimization of numerical algorithms

    SciTech Connect

    Lebedev, V. I.

    2004-10-31

    Chebyshëv-Markov-Bernstein-Szegö polynomials C_n(x) extremal on [-1,1] with weight functions w(x) = (1+x)^α (1-x)^β / √(S_l(x)), where α, β = 0, 1/2 and S_l(x) = ∏_{k=1}^m (1 - c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x) = w^2(x) (1-x^2)^{-1/2}. The parameters of optimal Chebyshëv iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.

  20. Extremal polynomials and methods of optimization of numerical algorithms

    NASA Astrophysics Data System (ADS)

    Lebedev, V. I.

    2004-10-01

    Chebyshëv-Markov-Bernstein-Szegö polynomials C_n(x) extremal on \\lbrack -1,1 \\rbrack with weight functions w(x)=(1+x)^\\alpha(1- x)^\\beta/\\sqrt{S_l(x)} where \\alpha,\\beta=0,\\frac12 and S_l(x)=\\prod_{k=1}^m(1-c_kT_{l_k}(x))>0 are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x)=w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshëv iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.
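
    The optimal Chebyshëv iterative methods mentioned here have a compact textbook form when the spectrum bounds of a symmetric positive definite matrix are known (a NumPy sketch of the standard three-term recurrence, not the weighted variants of the paper):

        import numpy as np

        # Chebyshev semi-iteration for A x = b when the spectrum of SPD A lies
        # in [lmin, lmax]; the recurrence realizes the optimal error-reduction
        # polynomial on that interval.
        def chebyshev(A, b, lmin, lmax, n_iter=60):
            theta, delta = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
            sigma1 = theta / delta
            rho = 1.0 / sigma1
            x = np.zeros_like(b)
            r = b - A @ x
            d = r / theta
            for _ in range(n_iter):
                x = x + d
                r = r - A @ d
                rho_new = 1.0 / (2.0 * sigma1 - rho)
                d = rho_new * rho * d + (2.0 * rho_new / delta) * r
                rho = rho_new
            return x

        A = np.diag(np.linspace(1.0, 10.0, 50))          # spectrum in [1, 10]
        b = np.ones(50)
        print(np.linalg.norm(A @ chebyshev(A, b, 1.0, 10.0) - b))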

  1. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive over relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, but still enjoy the greater rate of convergence of the SOR method. The program solves a general second order self adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
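
    A two-color (Red/Black) instance of the idea for the 5-point Poisson stencil looks as follows (a NumPy sketch; each half-sweep updates one color with a Jacobi-style formula, which is what makes the method suitable for processor arrays):

        import numpy as np

        # Red-Black SOR for -Laplace(u) = f with zero Dirichlet boundaries.
        def redblack_sor(f, h, omega=1.8, n_sweeps=400):
            u = np.zeros_like(f)
            colors = np.add.outer(np.arange(f.shape[0]), np.arange(f.shape[1])) % 2
            interior = np.zeros(f.shape, dtype=bool)
            interior[1:-1, 1:-1] = True
            for _ in range(n_sweeps):
                for color in (0, 1):
                    nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1))
                    gs = 0.25 * (nb + h * h * f)           # Gauss-Seidel target
                    sel = interior & (colors == color)
                    u[sel] += omega * (gs[sel] - u[sel])   # over-relaxed update
            return u

        n = 33
        u = redblack_sor(np.ones((n, n)), h=1.0 / (n - 1))
        print(u[n // 2, n // 2])   # ~0.074, the peak value for f = 1 on the unit square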

  2. NASARC - NUMERICAL ARC SEGMENTATION ALGORITHM FOR A RADIO CONFERENCE

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.

    1994-01-01

    NASARC was developed from the general planning principles and decisions of both sessions of the World Administrative Radio Conference on the Use of the Geostationary Satellite Orbit and on the Planning of Space Services Utilizing It (WARC-85, WARC-88). NASARC was written to help countries satisfy requirements for nation-wide Fixed Satellite services from at least one orbital position within a predetermined arc. The NASARC-generated predetermined arcs are each based on a common arc segment visible to a group of compatible service areas, and provide a means of generating a highly flexible allotment plan with a reduced need for coordination among administrations. The selection of particular groupings of service areas and their associated predetermined arcs is made according to a heuristic approach using several figures of merit designed to confront the most difficult allotment problems. NASARC attempts to select groupings and predetermined arc sizes so that the requirements of all administrations are met before the available orbital arc is exhausted. The predetermined arcs allow considerable freedom of choice in the positioning of space stations for all members of any grouping. The approach to allotment planning for which NASARC was designed consists of two phases. The first is the use of NASARC to identify predetermined arc segments common to groups of administrations. Those administrations within a group and sharing a common predetermined arc segment would be able to position their individual space stations at any one of a number of orbital positions within the predetermined arc. The second phase involves the use of a plan synthesis program (such as the ORBIT program resident at the International Frequency Registration Board in Geneva, Switzerland) to identify example scenarios of specific space station placements. NASARC software is modular, and consists of several programs to be run in sequence. The grouping module, NASARC1, identifies compatible groups of several service areas that are sufficiently separated geographically so that co-location or near co-location of their space stations will permit a user-specified downlink performance criterion to be satisfied. Pairwise compatibility between systems is assessed on the basis of the satellite separation required to meet this criterion. NASARC2 examines all groups of compatible administrations with their corresponding arc segments and computes a common predetermined arc. After an orbital slot of sufficient size has been found, NASARC2 calculates the required orbital separation between the critical group and its potential east and west neighbors and determines predetermined arc placement accordingly. NASARC3 updates and extends the feasible orbital locations for predetermined arcs associated with compatible groups of service areas to provide flexibility for rearrangement if necessary. NASARC4 performs rearrangement of predetermined arc segments where rearrangement will provide increased total arc available for subsequent placement of additional predetermined arcs and produces the final output report of the NASARC package. In addition to planning assumed homogeneous systems, NASARC can take into account such factors as rain attenuation, individual antenna parameters, power calculation options, minimum power values, different required carrier-to-interface ratios, variable grouping criteria, and affiliated sets of service areas. The modules allow the baseline assumptions to be modified, some on an individual service area basis. 
    NASARC array dimensions have been structured to fit within the currently available 12MB memory capacity of the International Frequency Registration Board computer facility. NASARC was written in ANSI standard FORTRAN 77 and developed on an AMDAHL 5860 running under the IBM VM operating system. The package requires 8.1MB of central memory. NASARC (version 4.0) was written in 1988. IBM and VM are registered trademarks of International Business Machines. AMDAHL 5860 is a trademark of Amdahl Corporation.

  3. Numerical Arc-Segmentation Algorithm For A Radio Conference

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Ponchak, Denise S.; Heyward, A. O.; Zuzek, John E.; Spence, R. L.

    1992-01-01

    NASARC computer program developed from general planning principles and decisions of both sessions of World Administrative Radio Conference on Use of Geostationary Satellite Orbit and on Planning of Space Services Utilizing It (WARC-85 and WARC-88). Written to help countries satisfy requirements for nationwide fixed-satellite services from at least one orbital position within predetermined arc. Written in ANSI standard FORTRAN 77.

  4. Evaluating numerical ODE/DAE methods, algorithms and software

    NASA Astrophysics Data System (ADS)

    Soderlind, Gustaf; Wang, Lina

    2006-01-01

    Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without it conclusions are questionable, invalid or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.
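
    One concrete ceteris paribus experiment is to hold the implementation and problem fixed and sweep only the tolerance, recording work against achieved accuracy (a SciPy sketch on a pendulum test problem; the problem and tolerance grid are arbitrary choices):

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y):                       # simple nonlinear test problem
            return [y[1], -np.sin(y[0])]

        ref = solve_ivp(rhs, (0, 20), [1.0, 0.0], rtol=1e-12, atol=1e-12).y[:, -1]
        for tol in (1e-3, 1e-5, 1e-7, 1e-9):
            sol = solve_ivp(rhs, (0, 20), [1.0, 0.0], rtol=tol, atol=tol)
            err = np.linalg.norm(sol.y[:, -1] - ref)
            print(f"tol={tol:.0e}  nfev={sol.nfev:5d}  error={err:.2e}")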

  5. A numerical method to study the dynamics of capillary fluid systems

    NASA Astrophysics Data System (ADS)

    Herrada, M. A.; Montanero, J. M.

    2016-02-01

    We propose a numerical approach to study both the nonlinear dynamics and the linear stability of capillary fluid systems. In the nonlinear analysis, the time-dependent fluid region is mapped onto a fixed numerical domain through a coordinate transformation. The hydrodynamic equations are spatially discretized with the Chebyshev spectral collocation technique, while an implicit time advancement is performed using second-order backward finite differences. The resulting algebraic equations are solved with the iterative Newton-Raphson technique. The most novel aspect of the method is that the elements of the Jacobian of the discretized system of equations are symbolic functions calculated before running the simulation. These functions are evaluated numerically in the Newton-Raphson iterations to find the solution at each time step, which considerably reduces the computing time. Moreover, this numerical procedure can easily be adapted to solve the eigenvalue problem that determines the linear global modes of the capillary system. Therefore, both the nonlinear dynamics and the linear stability analysis can be conducted with essentially the same algorithm. We validate this numerical approach by studying the dynamics of a liquid bridge close to its minimum volume stability limit. The results are virtually the same as those obtained with other methods, and the proposed approach proves to be much more computationally efficient. Finally, we show the versatility of the method by calculating the linear global modes of a gravitational jet.
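
    The symbolic-Jacobian idea can be sketched with standard tools (SymPy and NumPy on a toy 2x2 system; the paper's Chebyshev collocation and time discretization are not reproduced here):

        import numpy as np
        import sympy as sp

        # Precompute a symbolic Jacobian once, then evaluate it numerically
        # inside the Newton-Raphson iterations.
        x, y = sp.symbols('x y')
        F = sp.Matrix([x**2 + y**2 - 4, sp.exp(x) + y - 1])
        J = F.jacobian([x, y])
        f_num = sp.lambdify((x, y), F, 'numpy')   # compiled residual
        j_num = sp.lambdify((x, y), J, 'numpy')   # compiled Jacobian

        v = np.array([1.0, -1.0])
        for _ in range(20):
            r = np.asarray(f_num(*v), dtype=float).ravel()
            if np.linalg.norm(r) < 1e-12:
                break
            v = v - np.linalg.solve(np.asarray(j_num(*v), dtype=float), r)
        print(v)   # a root of the 2x2 nonlinear system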

  6. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1995-04-01

    Advanced mathematical techniques and computer simulation play a major role in providing enhanced understanding of conventional and advanced materials processing operations. Development and application of mathematical models and computer simulation techniques can provide a quantitative understanding of materials processes and will minimize the need for expensive and time-consuming trial-and-error-based product development. As computer simulations and materials databases grow in complexity, high-performance computing and simulation are expected to play a key role in supporting the improvements required in advanced material syntheses and processing by lessening the dependence on expensive prototyping and re-tooling. Many of these numerical models are highly compute-intensive. It is not unusual for an analysis to require several hours of computational time on current supercomputers despite the simplicity of the models being studied. For example, to accurately simulate the heat transfer in a 1 m³ block using a simple computational method requires about 10¹² arithmetic operations per second of simulated time. For a computer to do the simulation in real time would require a sustained computation rate 1000 times faster than that achievable by current supercomputers. Massively parallel computer systems, which combine several thousand processors able to operate concurrently on a problem, are expected to provide orders-of-magnitude increases in performance. This paper briefly describes advanced computational research in materials processing at ORNL. Continued development of computational techniques and algorithms utilizing the massively parallel computers will allow the simulation of conventional and advanced materials processes in sufficient generality.

  7. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solving inverse problems is an important task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are making great technical progress as well. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains topical. ANNIT contributes to this stream: it is a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and the response of the system by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex; the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M, respectively, are sought within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is constructed; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are found by generating suitable populations of individuals (models) covering the data and model spaces. The inverse mapping is approximated by three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also maintains an archive of already evaluated models; archived models are re-used where suitable, so the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good
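
    A minimal sketch of steps (ii)-(iii), assuming SciPy's RBFInterpolator as the radial-basis fit and a toy forward mapping of our own invention (ANNIT's population logic and archive are not reproduced):

    ```python
    # Fit a radial-basis surrogate of the inverse mapping G over a population
    # of forward-evaluated models, then predict a candidate solution. The toy
    # forward map is restricted to a subspace where it is smooth and
    # invertible, in the spirit of step (i) above.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def forward(p):                                    # toy forward mapping F(p) = d
        return np.column_stack([p[:, 0] ** 2 + p[:, 1], np.sin(p[:, 0]) * p[:, 1]])

    rng = np.random.default_rng(0)
    population = rng.uniform(0.0, 1.0, size=(200, 2))  # models covering a model subspace
    data = forward(population)                         # forward evaluations (archivable)

    G_approx = RBFInterpolator(data, population)       # numerical approximation of G
    d_observed = forward(np.array([[0.3, 0.4]]))
    print(G_approx(d_observed))                        # predicted candidate ~ [0.3, 0.4]
    ```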

  8. Advances in Doppler OCT

    PubMed Central

    Liu, Gangjun; Chen, Zhongping

    2014-01-01

    We review the principle and some recent applications of Doppler optical coherence tomography (OCT). The advances of the phase-resolved Doppler OCT method are described. Functional OCT algorithms which are based on an extension of the phase-resolved scheme are also introduced. Recent applications of Doppler OCT for quantification of flow, imaging of microvasculature and vocal fold vibration, and optical coherence elastography are briefly discussed. PMID:24443649

  9. Numerical Aspects of Solving Differential Equations: Laboratory Approach for Students.

    ERIC Educational Resources Information Center

    Witt, Ana

    1997-01-01

    Describes three labs designed to help students in a first course on ordinary differential equations with three of the most common numerical difficulties they might encounter when solving initial value problems with a numerical software package. The goal of these labs is to help students advance to independent work on common numerical anomalies.…

  10. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
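
    For orientation, a bare-bones PRP+ iteration with the β_k ≥ 0 clipping of property (1) might look like the following sketch of ours; an Armijo backtracking line search is assumed here, unlike the line-search-free analysis of the paper, and the paper's own modified directions are not reproduced:

    ```python
    # Bare-bones PRP+ conjugate gradient: beta_k is clipped at zero, with a
    # steepest-descent restart as safeguard and Armijo backtracking.
    import numpy as np

    def prp_plus(f, grad, x, iters=500, tol=1e-8):
        g = grad(x)
        d = -g
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            if g @ d >= 0:                     # safeguard: restart downhill
                d = -g
            t = 1.0                            # Armijo backtracking
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+: beta_k >= 0
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2  # Rosenbrock
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                               200 * (x[1] - x[0] ** 2)])
    print(prp_plus(f, grad, np.array([-1.2, 1.0])))                # -> near [1, 1]
    ```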

  11. Alternating-direction implicit numerical solution of the time-dependent, three-dimensional, single fluid, resistive magnetohydrodynamic equations

    SciTech Connect

    Finan, C.H. III

    1980-12-01

    Resistive magnetohydrodynamics (MHD) is described by a set of eight coupled, nonlinear, three-dimensional, time-dependent, partial differential equations. A computer code, IMP (Implicit MHD Program), has been developed to solve these equations numerically by the method of finite differences on an Eulerian mesh. In this model, the equations are expressed in orthogonal curvilinear coordinates, making the code applicable to a variety of coordinate systems. The Douglas-Gunn algorithm for Alternating-Direction Implicit (ADI) temporal advancement is used to avoid the limitations in timestep size imposed by explicit methods. The equations are solved simultaneously to avoid synchronization errors.

  12. Numerical recipes for mold filling simulation

    SciTech Connect

    Kothe, D.; Juric, D.; Lam, K.; Lally, B.

    1998-07-01

    Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high-resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.
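
    To make the fixed-mesh idea concrete, here is a deliberately crude one-dimensional volume-fraction advection of our own (not a production scheme): the interface smears over time, which is precisely why production volume-of-fluid codes add geometric interface reconstruction such as PLIC.

    ```python
    # Crude fixed-mesh volume tracking in 1-D: a material volume fraction f
    # is advected with a first-order upwind update on a fixed grid.
    import numpy as np

    n, u, cfl = 200, 1.0, 0.5
    dx = 1.0 / n
    dt = cfl * dx / u
    x = np.arange(n) * dx
    f = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # slug of material

    for _ in range(100):
        flux = u * f                                # upwind flux for u > 0
        f[1:] -= dt / dx * (flux[1:] - flux[:-1])
        f[0] = 0.0                                  # clean inflow

    print(f.sum() * dx)                             # total volume stays ~0.2
    ```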

  13. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    SciTech Connect

    LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for PIC simulation of beam dynamics in a high-intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space-charge forces. The working code for the simulations presented here is SIMBAD, which can be run standalone or as part of the UAL (Unified Accelerator Libraries) package.
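
    As a flavor of the field solve inside such a code, here is a periodic spectral Poisson solver in one dimension (our sketch; SIMBAD's own solver handles conducting walls and is not reproduced):

    ```python
    # Periodic spectral Poisson solve in 1-D: the kind of field solve a
    # space-charge PIC code performs at each step.
    import numpy as np

    n, L = 64, 1.0
    x = np.linspace(0, L, n, endpoint=False)
    rho = np.cos(2 * np.pi * x)                     # toy zero-mean charge density

    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers
    rho_hat = np.fft.fft(rho)
    phi_hat = np.zeros_like(rho_hat)
    phi_hat[1:] = rho_hat[1:] / k[1:] ** 2          # solve -phi'' = rho, drop k = 0
    phi = np.real(np.fft.ifft(phi_hat))

    print(np.max(np.abs(phi - rho / (2 * np.pi) ** 2)))  # matches analytic answer
    ```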

  14. Comparison of two numerical techniques for aerodynamic model identification

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    An algorithm, called the Minimal Residual QR algorithm, is presented to solve subset regression problems. It is shown that this scheme can be used as a numerically reliable implementation of the stepwise regression technique, which is widely used to identify an aerodynamic model from flight test data. This capability as well as the numerical superiority of this scheme over the stepwise regression technique is demonstrated in an experimental simulation study.
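
    The flavor of QR-based subset regression can be conveyed with a greedy forward-selection toy of ours; the Minimal Residual QR algorithm follows a different, numerically more careful update strategy described in the paper:

    ```python
    # Greedy forward selection, refit by QR-based least squares at each step.
    import numpy as np

    def forward_select(A, b, k):
        chosen, residual, coef = [], b.copy(), None
        for _ in range(k):
            scores = np.abs(A.T @ residual) / np.linalg.norm(A, axis=0)
            scores[chosen] = -np.inf                # don't pick a column twice
            chosen.append(int(np.argmax(scores)))
            Q, R = np.linalg.qr(A[:, chosen])       # least squares via QR
            coef = np.linalg.solve(R, Q.T @ b)
            residual = b - A[:, chosen] @ coef
        return chosen, coef

    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 10))
    b = A[:, [2, 5]] @ np.array([1.5, -2.0]) + 0.01 * rng.normal(size=100)
    print(forward_select(A, b, 2))                  # recovers columns {2, 5}
    ```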

  15. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms developed for use in numerical integration of systems of nonhomogenous, nonlinear, first-order, ordinary differential equations. In comparison with previous integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order, differential equations that arise in study of viscoplastic behavior, spread of acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  16. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics, and designs a food chain algorithm for it. Some illustrative examples are selected to conduct simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  17. Plasmon spectroscopy: Theoretical and numerical calculations, and optimization techniques

    NASA Astrophysics Data System (ADS)

    Rodríguez-Oliveros, Rogelio; Paniagua-Domínguez, Ramón; Sánchez-Gil, José A.; Macías, Demetrio

    2016-02-01

    We present an overview of recent advances in plasmonics, mainly concerning theoretical and numerical tools required for the rigorous determination of the spectral properties of complex-shape nanoparticles exhibiting strong localized surface plasmon resonances (LSPRs). Both quasistatic approaches and full electrodynamic methods are described, providing a thorough comparison of their numerical implementations. Special attention is paid to surface integral equation formulations, giving examples of their performance in complicated nanoparticle shapes of interest for their LSPR spectra. In this regard, complex (single) nanoparticle configurations (nanocrosses and nanorods) yield a hierarchy of multiple-order LSPRs with evidence of rich symmetric or asymmetric (Fano-like) LSPR line shapes. In addition, means to address the design of complex geometries to retrieve LSPR spectra are commented on, with special interest in biologically inspired algorithms. The wealth of LSPR-based applications is discussed in two choice examples, single-nanoparticle surface-enhanced Raman scattering (SERS) and optical heating, and multifrequency nanoantennas for fluorescence and nonlinear optics.

  18. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.

  19. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionospheric measurements, such as estimates of virtual height versus scanned frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several kinds of targets and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for an ionospheric sounder still has to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder; the specific characteristics of the targets were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study were selected according to typical ionospheric and detection conditions.
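
    As one concrete member of the adaptive-threshold family (our illustration; the specific detectors evaluated in the paper are not reproduced), a cell-averaging CFAR detector thresholds each range cell against the mean power of its training neighborhood:

    ```python
    # Cell-averaging CFAR: each cell is compared against a scaled mean of the
    # surrounding training cells, skipping a few guard cells on either side.
    import numpy as np

    def ca_cfar(power, guard=2, train=8, scale=8.0):
        n = len(power)
        hits = np.zeros(n, dtype=bool)
        for i in range(n):
            lo = max(0, i - guard - train)
            hi = min(n, i + guard + train + 1)
            window = np.r_[power[lo:max(0, i - guard)], power[i + guard + 1:hi]]
            if window.size and power[i] > scale * window.mean():
                hits[i] = True
        return hits

    rng = np.random.default_rng(0)
    profile = rng.exponential(1.0, 500)             # noise-like power profile
    profile[[120, 300]] += 40.0                     # two synthetic echoes
    print(np.flatnonzero(ca_cfar(profile)))         # ~ [120 300]
    ```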

  20. An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Moncet, J. L.

    2015-12-01

    We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with the capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on a sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from

  1. An analysis of a new stable partitioned algorithm for FSI problems. Part I: Incompressible flow and elastic solids

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2014-07-01

    Stable partitioned algorithms for fluid-structure interaction (FSI) problems are developed and analyzed in this two-part paper. Part I describes an algorithm for incompressible flow coupled with compressible elastic solids, while Part II discusses an algorithm for incompressible flow coupled with structural shells. Importantly, these new added-mass partitioned (AMP) schemes are stable and retain full accuracy with no sub-iterations per time step, even in the presence of strong added-mass effects (e.g. for light solids). The numerical approach described here for bulk compressible solids extends the scheme of Banks et al. [1,2] for inviscid compressible flow, and uses Robin (mixed) boundary conditions with the fluid and solid solvers at the interface. The basic AMP Robin conditions, involving a linear combination of velocity and stress, are determined from the outgoing solid characteristic relation normal to the fluid-solid interface combined with the matching conditions on the velocity and traction. Two alternative forms of the AMP conditions are then derived depending on whether the fluid equations are advanced with a fractional-step method or not. The stability and accuracy of the AMP algorithm is evaluated for linearized FSI model problems; the full nonlinear case being left for future consideration. A normal mode analysis is performed to show that the new AMP algorithm is stable for any ratio of the solid and fluid densities, including the case of very light solids when added-mass effects are large. In contrast, it is shown that a traditional partitioned algorithm involving a Dirichlet-Neumann coupling for the same FSI problem is formally unconditionally unstable for any ratio of densities. Exact traveling wave solutions are derived for the FSI model problems, and these solutions are used to verify the stability and accuracy of the corresponding numerical results obtained from the AMP algorithm for the cases of light, medium and heavy solids.

  2. Translation and integration of numerical atomic orbitals in linear molecules.

    PubMed

    Heinäsmäki, Sami

    2014-02-14

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively. PMID:24527905

  3. Translation and integration of numerical atomic orbitals in linear molecules

    NASA Astrophysics Data System (ADS)

    Heinäsmäki, Sami

    2014-02-01

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
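
    The pseudospectral machinery both versions of this record rely on can be illustrated with the classic one-dimensional Chebyshev differentiation matrix (a Trefethen-style construction of ours; the paper's two-dimensional matrices and quadratures are not reproduced here):

    ```python
    # Chebyshev pseudospectral differentiation matrix in 1-D.
    import numpy as np

    def cheb(n):
        """Differentiation matrix D and the n+1 Chebyshev points on [-1, 1]."""
        if n == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
        D -= np.diag(D.sum(axis=1))                 # negative row sums on diagonal
        return D, x

    D, x = cheb(16)
    print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))  # spectrally small error
    ```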

  4. The quiet revolution of numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Bauer, Peter; Thorpe, Alan; Brunet, Gilbert

    2015-09-01

    Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs. Nonetheless, the impact of numerical weather prediction is among the greatest of any area of physical science. As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world.

  5. Descendants and advance directives.

    PubMed

    Buford, Christopher

    2014-01-01

    Some of the concerns that have been raised in connection with the use of advance directives are of the epistemic variety. Such concerns highlight the possibility that adhering to an advance directive may conflict with what the author of the directive actually wants (or would want) at the time of treatment. However, at least one objection to the employment of advance directives is metaphysical in nature. The objection to be discussed here, first formulated by Rebecca Dresser and labeled the slavery argument by Allen Buchanan and the someone else problem by David DeGrazia, aims to undermine the legitimacy of certain uses of advance directives by concluding that such uses rest upon an incorrect assumption about the identity over time of those ostensibly governed by the directives. There have been numerous attempts to respond to this objection. This paper aims to assess two strategies that have been pursued to cope with the problem. PMID:25743056

  6. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
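
    The Hamiltonian building block of such constructions can be seen in miniature: symplectic Euler applied to a harmonic oscillator keeps the energy bounded over arbitrarily long runs (a generic illustration of ours, not the Birkhoffian schemes of the paper):

    ```python
    # Symplectic Euler for H = (p^2 + q^2)/2: energy stays in a tight band
    # around its initial value, with no secular drift.
    import numpy as np

    def symplectic_euler(q, p, h, steps):
        energy = np.empty(steps)
        for i in range(steps):
            p -= h * q                 # p_{n+1} = p_n - h * dH/dq
            q += h * p                 # q_{n+1} = q_n + h * dH/dp at p_{n+1}
            energy[i] = 0.5 * (p * p + q * q)
        return energy

    e = symplectic_euler(1.0, 0.0, 0.05, 20000)
    print(e.min(), e.max())            # bounded oscillation around 0.5
    ```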

  7. Advanced Eddy current NDE steam generator tubing.

    SciTech Connect

    Bakhtiari, S.

    1999-03-29

    As part of a multifaceted project on steam generator integrity funded by the U.S. Nuclear Regulatory Commission, Argonne National Laboratory is carrying out research on the reliability of nondestructive evaluation (NDE). A particular area of interest is the impact of advanced eddy current (EC) NDE technology. This paper presents an overview of work that supports this effort in the areas of numerical electromagnetic (EM) modeling, data analysis, signal processing, and visualization of EC inspection results. Finite-element modeling has been utilized to study conventional and emerging EC probe designs. This research is aimed at determining probe responses to flaw morphologies of current interest. Application of signal processing and automated data analysis algorithms has also been addressed. Efforts have focused on assessment of frequency and spatial domain filters and implementation of more effective data analysis and display methods. Data analysis studies have dealt with implementation of linear and nonlinear multivariate models to relate EC inspection parameters to steam generator tubing defect size and structural integrity. Various signal enhancement and visualization schemes are also being evaluated and will serve as integral parts of computer-aided data analysis algorithms. Results from this research will ultimately be substantiated through testing on laboratory-grown and in-service-degraded tubes.

  8. The Numerical Tokamak Project (NTP) simulation of turbulent transport in the core plasma: A grand challenge in plasma physics

    SciTech Connect

    Not Available

    1993-12-01

    The long-range goal of the Numerical Tokamak Project (NTP) is the reliable prediction of tokamak performance using physics-based numerical tools describing tokamak physics. The NTP is accomplishing the development of the most advanced particle and extended fluid models on massively parallel processing (MPP) environments as part of a multi-institutional, multi-disciplinary numerical study of tokamak core fluctuations. The NTP is a continuing focus of the Office of Fusion Energy's theory and computation program. Near-term HPCC work concentrates on developing a predictive numerical description of the core plasma transport in tokamaks driven by low-frequency collective fluctuations. This work addresses one of the greatest intellectual challenges to our understanding of the physics of tokamak performance and needs the most advanced computational resources to progress. We are conducting detailed comparisons of kinetic and fluid numerical models of tokamak turbulence. These comparisons are stimulating the improvement of each and the development of hybrid models which embody aspects of both. The combination of emerging massively parallel processing hardware and algorithmic improvements will result in an estimated 10^2 to 10^6 performance increase. Development of information processing and visualization tools is accelerating our comparison of computational models to one another, to experimental data, and to analytical theory, providing a bootstrap effect in our understanding of the target physics. The measure of success is the degree to which the experimentally observed scaling of fluctuation-driven transport may be predicted numerically. The NTP is advancing the HPCC Initiative through its state-of-the-art computational work. We are pushing the capability of high performance computing through our efforts, which are strongly leveraged by OFE support.

  9. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return mission was initially based on insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA through a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, load-limiting capability, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared with the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  10. A numerical scheme for ionizing shock waves

    SciTech Connect

    Aslan, Necdet . E-mail: naslan@yeditepe.edu.tr; Mond, Michael

    2005-12-10

    A two-dimensional (2D) visual computer code to solve steady-state (SS) or transient shock problems including partially ionizing plasma is presented. Since the flows considered are hypersonic and the resulting temperatures are high, the plasma is partially ionized. Hence the plasma constituents are electrons, ions and neutral atoms. It is assumed that all the above species are in thermal equilibrium, namely, that they all have the same temperature. The ionization degree is calculated from the Saha equation as a function of electron density and pressure by means of a nonlinear Newton-type root-finding algorithm. The code utilizes a wave model and a numerical fluctuation distribution (FD) scheme that runs on structured or unstructured triangular meshes. This scheme is based on evaluating the mesh-averaged fluctuations arising from a number of waves and distributing them to the nodes of these meshes in an upwind manner. The physical properties (directions, strengths, etc.) of these wave patterns are obtained by a new wave model, ION-A, developed from the eigen-system of the flux Jacobian matrices. Since the equation of state (EOS) which is used to close the conservation laws includes electronic effects, it is a nonlinear function and must be inverted by iterations to determine the ionization degree as a function of density and temperature. For the time advancement, the scheme utilizes a multi-stage Runge-Kutta (RK) algorithm with time steps carefully evaluated from the maximum possible propagation speed in the solution domain. The code runs interactively with the user and allows the user to create different meshes, to use different initial and boundary conditions, and to see changes of desired physical quantities in the form of color and vector graphics. The details of the visual properties of the code have been published before (see [N. Aslan, A visual fluctuation splitting scheme for magneto-hydrodynamics with a new sonic fix and Euler limit, J. Comput. Phys. 197 (2004) 1
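
    The Saha-equation root finding mentioned above can be sketched for a single ionization stage (a hedged toy of ours: hydrogen-like constants, statistical-weight ratio taken as 1, and none of the code's EOS coupling):

    ```python
    # Newton iteration on the Saha relation  alpha^2 * n / (1 - alpha) = S(T)
    # for the ionization degree alpha of a hydrogen-like gas.
    import numpy as np

    def saha_rhs(T, chi_eV=13.6):
        me, kB, h, eV = 9.109e-31, 1.381e-23, 6.626e-34, 1.602e-19
        return (2 * np.pi * me * kB * T / h**2) ** 1.5 * 2.0 * np.exp(-chi_eV * eV / (kB * T))

    def ionization_degree(n, T, tol=1e-12):
        S = saha_rhs(T)
        alpha = 0.5                                  # initial guess
        for _ in range(50):
            f = alpha**2 * n - (1.0 - alpha) * S     # root of f gives Saha balance
            df = 2.0 * alpha * n + S
            step = f / df
            alpha = min(max(alpha - step, 0.0), 1.0)
            if abs(step) < tol:
                break
        return alpha

    print(ionization_degree(n=1e20, T=10000.0))      # partially ionized regime
    ```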

  11. A spectral canonical electrostatic algorithm

    NASA Astrophysics Data System (ADS)

    Webb, Stephen D.

    2016-03-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this algorithm using a two-body problem as an example of its energy- and momentum-conserving properties.

  12. Intra-and-Inter Species Biomass Prediction in a Plantation Forest: Testing the Utility of High Spatial Resolution Spaceborne Multispectral RapidEye Sensor and Advanced Machine Learning Algorithms

    PubMed Central

    Dube, Timothy; Mutanga, Onisimo; Adam, Elhadi; Ismail, Riyad

    2014-01-01

    The quantification of aboveground biomass using remote sensing is critical for better understanding the role of forests in carbon sequestration and for informed sustainable management. Although remote sensing techniques have been proven useful in assessing forest biomass in general, more is required to investigate their capabilities in predicting intra-and-inter species biomass which are mainly characterised by non-linear relationships. In this study, we tested two machine learning algorithms, Stochastic Gradient Boosting (SGB) and Random Forest (RF) regression trees to predict intra-and-inter species biomass using high resolution RapidEye reflectance bands as well as the derived vegetation indices in a commercial plantation. The results showed that the SGB algorithm yielded the best performance for intra-and-inter species biomass prediction; using all the predictor variables as well as based on the most important selected variables. For example using the most important variables the algorithm produced an R² of 0.80 and RMSE of 16.93 t·ha⁻¹ for E. grandis; R² of 0.79, RMSE of 17.27 t·ha⁻¹ for P. taeda and R² of 0.61, RMSE of 43.39 t·ha⁻¹ for the combined species data sets. Comparatively, RF yielded plausible results only for E. dunii (R² of 0.79; RMSE of 7.18 t·ha⁻¹). We demonstrated that although the two statistical methods were able to predict biomass accurately, RF produced weaker results as compared to SGB when applied to combined species dataset. The result underscores the relevance of stochastic models in predicting biomass drawn from different species and genera using the new generation high resolution RapidEye sensor with strategically positioned bands. PMID:25140631

  13. Wavelet Algorithms for Illumination Computations

    NASA Astrophysics Data System (ADS)

    Schroder, Peter

    One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
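
    A one-level Haar transform, the basic operation behind the hierarchical/wavelet equivalence argued in the dissertation, fits in a few lines (our illustration):

    ```python
    # One level of the (orthonormal) Haar transform: smooth kernels produce
    # small detail coefficients, which is what makes the hierarchical/wavelet
    # operators sparse at finite precision.
    import numpy as np

    def haar_step(v):
        s = (v[0::2] + v[1::2]) / np.sqrt(2)  # smooth (scaling) coefficients
        d = (v[0::2] - v[1::2]) / np.sqrt(2)  # detail (wavelet) coefficients
        return s, d

    v = np.linspace(0.0, 1.0, 16) ** 2        # a smooth "kernel row"
    s, d = haar_step(v)
    print(np.abs(s).max(), np.abs(d).max())   # details much smaller than averages
    ```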

  14. Waste glass melter numerical and physical modeling

    SciTech Connect

    Eyler, L.L.; Peters, R.D.; Lessor, D.L.; Lowery, P.S.; Elliott, M.L.

    1991-10-01

    Results of physical and numerical simulation modeling of high-level liquid waste vitrification melters are presented. Physical modeling uses simulant fluids in laboratory testing. Visualization results provide insight into convective melt flow patterns from which information is derived to support performance estimation of operating melters and data to support numerical simulation. Numerical simulation results of several melter configurations are presented. These are in support of programs to evaluate melter operation characteristics and performance. Included are investigations into power skewing and alternating current electric field phase angle in a dual electrode pair reference design and bi-modal convective stability in an advanced design. 9 refs., 9 figs., 1 tab.

  15. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    Recommendation algorithms based on bipartite networks are superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topology, while local characteristics could also play an important role in collaborative recommendation. Therefore, taking into account the data characteristics and application requirements of collaborative recommendation systems, we propose a link-community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on bipartite communities. We then designed numerical experiments to verify the validity of the algorithms on benchmark and real databases. PMID:24955393

  16. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
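
    For contrast with the unified strategy, the conventional DE/rand/1/bin scheme that the paper departs from looks like this (our sketch; the single uDE mutation equation itself is in the paper and not reproduced here):

    ```python
    # Conventional differential evolution: DE/rand/1 mutation with binomial
    # crossover and greedy selection.
    import numpy as np

    def de(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        dim = len(bounds)
        pop = rng.uniform(lo, hi, (pop_size, dim))
        fit = np.array([f(p) for p in pop])
        for _ in range(gens):
            for i in range(pop_size):
                r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                v = pop[r1] + F * (pop[r2] - pop[r3])            # mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                  # guarantee one mutated gene
                u = np.clip(np.where(cross, v, pop[i]), lo, hi)  # binomial crossover
                fu = f(u)
                if fu <= fit[i]:                                 # greedy selection
                    pop[i], fit[i] = u, fu
        return pop[np.argmin(fit)], fit.min()

    sphere = lambda x: float(np.sum(x * x))
    print(de(sphere, [(-5, 5)] * 4))                             # -> near-zero vector
    ```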

  17. Numerical simulation of steady supersonic flow. [spatial marching

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Steger, J. L.

    1981-01-01

    A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

  18. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  19. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
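
    A plain random-walk Metropolis sampler makes the construction concrete; the paper's method replaces the i.i.d. uniform draws below with completely uniformly distributed (CUD) quasi-Monte Carlo inputs, which we do not construct here:

    ```python
    # Random-walk Metropolis for a standard normal target: two uniforms per
    # step (proposal and accept/reject) are the inputs QMC would replace.
    import numpy as np

    def metropolis(logpi, x0, steps=50000, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x, chain = x0, np.empty(steps)
        for i in range(steps):
            u1, u2 = rng.random(2)               # the driving uniforms
            prop = x + scale * (2 * u1 - 1)      # symmetric proposal
            if np.log(u2) < logpi(prop) - logpi(x):
                x = prop
            chain[i] = x
        return chain

    chain = metropolis(lambda t: -0.5 * t * t, 0.0)
    print(chain.mean(), chain.var())             # ~0 and ~1
    ```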

  20. A Low-Stress Algorithm for Fractions

    ERIC Educational Resources Information Center

    Ruais, Ronald W.

    1978-01-01

    An algorithm is given for the addition and subtraction of fractions based on dividing the sum of diagonal numerator and denominator products by the product of the denominators. As an explanation of the teaching method, activities used in teaching are demonstrated. (MN)
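
    In symbols (worked example ours, not from the article): a/b ± c/d = (a·d ± c·b)/(b·d). For instance, 2/3 + 1/4 = (2·4 + 1·3)/(3·4) = 11/12, and 2/3 − 1/4 = (2·4 − 1·3)/12 = 5/12.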