Science.gov

Sample records for algorithm components numerical

  1. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  2. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.
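
    To make the block-adaptive idea concrete, here is a minimal Python sketch of block-by-block refinement in one dimension. The block size, refinement criterion, and test profile are illustrative assumptions, not BATL's actual implementation, which handles 1-3 dimensions, load balancing, and message passing on parallel machines.

    ```python
    # Illustrative sketch only: a 1D block-adaptive grid in the spirit of
    # BATL, where refinement decisions are made per block, not per cell.
    # Block size, criterion, and test function are hypothetical choices.
    import numpy as np

    BLOCK_CELLS = 8      # cells per block (blocks are fixed-size)
    MAX_LEVEL = 4        # maximum refinement depth

    def needs_refinement(u, dx, threshold=0.5):
        """Toy criterion: refine a block whose maximum gradient is large."""
        return np.max(np.abs(np.diff(u))) / dx > threshold

    def refine(lo, hi, level, f):
        """Recursively build a block-adaptive covering of [lo, hi] for f."""
        dx = (hi - lo) / BLOCK_CELLS
        x = lo + dx * (np.arange(BLOCK_CELLS) + 0.5)   # cell centers
        if level < MAX_LEVEL and needs_refinement(f(x), dx):
            mid = 0.5 * (lo + hi)
            return refine(lo, mid, level + 1, f) + refine(mid, hi, level + 1, f)
        return [(lo, hi, level)]

    # Steep front at x = 0.5: blocks cluster there, coarse blocks elsewhere.
    for lo, hi, lev in refine(0.0, 1.0, 0, lambda x: np.tanh((x - 0.5) / 0.01)):
        print(f"block [{lo:.4f}, {hi:.4f}] at level {lev}")
    ```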

  3. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form ẋ(t) = F(x(t)), x(0) = p ∈ G. The algorithms depend upon constants c_i and c_{ij}, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R^N, then the algorithm reduces to the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_{ij} must satisfy in order for the algorithm to yield an r-th order numerical integrator and to analyze the resulting algorithms.
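
    The group-preservation property described here is easy to demonstrate with the simplest intrinsic integrator, the Lie-Euler method, a first-order relative of the Runge-Kutta-type schemes analyzed in the paper; the right-hand side below is an arbitrary choice for illustration.

    ```python
    # Minimal sketch (not the paper's construction): Lie-Euler integration
    # of dX/dt = A(X) X on the group SO(3). Each step multiplies by the
    # exponential of a skew-symmetric matrix, so the iterate stays exactly
    # on the group -- the property the tree-based schemes generalize to
    # higher order.
    import numpy as np
    from scipy.linalg import expm

    def hat(w):
        """Map R^3 to so(3), the skew-symmetric 3x3 matrices."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def lie_euler(A_of, X0, h, steps):
        X = X0.copy()
        for _ in range(steps):
            X = expm(h * A_of(X)) @ X   # group multiplication: stays on SO(3)
        return X

    X = lie_euler(lambda X: hat([0.3, -0.1, 0.9]), np.eye(3), h=0.01, steps=1000)
    print("||X^T X - I|| =", np.linalg.norm(X.T @ X - np.eye(3)))  # ~1e-14
    ```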

  4. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, P. J.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate approximation spaces for the solution of two-dimensional elliptic and parabolic problems. Under specific hypotheses relating the properties of the wavelets to the order of the operators involved, it is shown that an approximate solution can be built. This approximation is stable and converges towards the exact solution. It is designed so that fast algorithms involving biorthogonal multiresolution analyses can be used to solve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the two-dimensional torus.

  5. Multiresolution representation and numerical algorithms: A brief review

    NASA Technical Reports Server (NTRS)

    Harten, Amiram

    1994-01-01

    In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
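
    A minimal sketch of the compression idea, using the Haar transform (the simplest multiresolution representation) and dropping detail coefficients below a tolerance; the signal and threshold are arbitrary choices.

    ```python
    # Haar multiresolution decomposition with thresholding of small
    # scale coefficients -- a toy version of the compression described.
    import numpy as np

    def haar_forward(u):
        """Decompose u (length 2^k) into a coarse average plus details."""
        details = []
        while len(u) > 1:
            details.append((u[0::2] - u[1::2]) / 2.0)
            u = (u[0::2] + u[1::2]) / 2.0
        return u, details

    def haar_inverse(u, details):
        for det in reversed(details):
            up = np.empty(2 * len(u))
            up[0::2] = u + det
            up[1::2] = u - det
            u = up
        return u

    x = np.linspace(0.0, 1.0, 256)
    u = np.sin(2 * np.pi * x) + 0.3 * (x > 0.5)          # smooth + one jump
    coarse, details = haar_forward(u)
    kept = sum(int((np.abs(d) > 1e-3).sum()) for d in details)
    details = [np.where(np.abs(d) > 1e-3, d, 0.0) for d in details]  # compress
    v = haar_inverse(coarse, details)
    print(f"kept {kept}/255 detail coefficients, max error {np.abs(u - v).max():.1e}")
    ```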

  6. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  7. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors ever exceed a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one in a billion, where worst-case analysis considers that no significant bit remains. We use PVS because such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
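
    A toy version of this style of bound (not the report's formal PVS development): model each rounding error as independent and uniform on [-u, u]; Kolmogorov's maximal inequality, a Lévy/Markov-type bound, then controls the running accumulated error. All numbers below are illustrative.

    ```python
    # P(max_k |S_k| >= a) <= Var(S_n) / a^2 = n u^2 / (3 a^2)
    # (Kolmogorov's inequality for a sum S_n of n independent zero-mean
    # rounding errors, each uniform on [-u, u]).
    import math

    u = 2.0 ** -53       # unit roundoff, IEEE double precision
    n = 10 ** 12         # a long-running accumulation
    p_fail = 1e-9        # acceptable failure probability ("one in a billion")

    a = math.sqrt(n * u ** 2 / (3 * p_fail))   # threshold exceeded w.p. <= p_fail
    print(f"|error| < {a:.2e} except with probability {p_fail:g}")
    print(f"bits continuously significant: {-math.log2(a):.1f}")
    print(f"worst-case bound for comparison: {-math.log2(n * u):.1f} bits")
    ```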

  8. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  9. The numerical simulation of accelerator components

    SciTech Connect

    Herrmannsfeldt, W.B.; Hanerfeld, H.

    1987-05-01

    The techniques of the numerical simulation of plasmas can be readily applied to problems in accelerator physics. Because the problems usually involve a single-component "plasma," and times that are, at most, a few plasma oscillation periods, it is frequently possible to make very good simulations with relatively modest computational resources. We will discuss the methods and illustrate them with several examples. One of the more powerful techniques for understanding the motion of charged particles is to view computer-generated motion pictures. We will show several little movie strips to illustrate the discussions. The examples will be drawn from the application areas of Heavy Ion Fusion, electron-positron linear colliders, and injectors for free-electron lasers. 13 refs., 10 figs., 2 tabs.

  10. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1982-01-01

    Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second-order lambda matrices.

  11. Numerical Algorithm for Delta of Asian Option

    PubMed Central

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the geometric Asian option and use this analytical form as a control to numerically calculate the Δ of the arithmetic Asian option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with those of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271

  12. Numerical Algorithm for Delta of Asian Option.

    PubMed

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the geometric Asian option and use this analytical form as a control to numerically calculate the Δ of the arithmetic Asian option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with those of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
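
    The control-variate idea is sketched below for the option price rather than Δ, for brevity: the discrete geometric Asian call has a closed form (its log-average is normally distributed) and serves as a control for the arithmetic Asian call, which has none. All parameter values are illustrative.

    ```python
    # Monte Carlo pricing of an arithmetic Asian call, variance-reduced
    # with the exactly priced geometric Asian call as control variate.
    import numpy as np
    from scipy.stats import norm

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    n_avg, n_paths = 50, 50_000
    dt = T / n_avg
    rng = np.random.default_rng(0)

    # GBM log-prices at the averaging dates t_i = i*dt, i = 1..n_avg.
    Z = rng.standard_normal((n_paths, n_avg))
    logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * Z, axis=1)

    disc = np.exp(-r * T)
    pay_arith = disc * np.maximum(np.exp(logS).mean(axis=1) - K, 0.0)
    pay_geo = disc * np.maximum(np.exp(logS.mean(axis=1)) - K, 0.0)

    # Closed form for the geometric Asian call: log G ~ Normal(mu, v).
    mu = np.log(S0) + (r - 0.5 * sigma**2) * dt * (n_avg + 1) / 2.0
    v = sigma**2 * dt * (n_avg + 1) * (2 * n_avg + 1) / (6.0 * n_avg)
    d1 = (mu + v - np.log(K)) / np.sqrt(v)
    geo_exact = disc * (np.exp(mu + 0.5 * v) * norm.cdf(d1)
                        - K * norm.cdf(d1 - np.sqrt(v)))

    cv = pay_arith + (geo_exact - pay_geo)      # control variate, beta = 1
    for name, p in [("plain MC", pay_arith), ("with control", cv)]:
        print(f"{name:>13}: {p.mean():.4f} +/- {p.std() / np.sqrt(n_paths):.4f}")
    ```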

  13. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
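
    PSLQ implementations are now widely available; mpmath ships one, which is enough to demonstrate the problem being solved, here recovering the relation φ² − φ − 1 = 0 satisfied by the golden ratio.

    ```python
    # Integer-relation detection with mpmath's PSLQ implementation.
    from mpmath import mp, sqrt, pslq

    mp.dps = 30                        # working precision in decimal digits
    phi = (1 + sqrt(5)) / 2            # golden ratio
    print(pslq([phi**2, phi, 1]))      # [1, -1, -1]: phi^2 - phi - 1 = 0
    ```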

  14. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    SciTech Connect

    E.A. Belli; G.W. Hammett

    2004-12-17

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.

  15. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for the analysis and design of large space structures are investigated. The sign algorithm and its application to the decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed and the identification of linear systems by the quadrature method is investigated.

  16. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate samples. Sobol's sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested; the obtained results demonstrate the scalability and accuracy of the implemented algorithm. The algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
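
    The ingredients described are easy to reproduce at small scale with SciPy's Sobol generator (the report's MPI/OpenMP implementation itself is not shown): a quasi-Monte Carlo estimate of an 8-dimensional integral with known value 1, compared against plain Monte Carlo at the same sample budget.

    ```python
    # Quasi-Monte Carlo integration of f(x) = prod_i 2*x_i over [0,1]^8
    # (exact value 1) with a scrambled Sobol low-discrepancy sequence.
    import numpy as np
    from scipy.stats import qmc

    dim = 8
    x = qmc.Sobol(d=dim, scramble=True, seed=42).random_base2(m=16)  # 2^16 points
    print(f"QMC estimate: {np.prod(2.0 * x, axis=1).mean():.6f} (exact 1.0)")

    y = np.random.default_rng(42).random((2 ** 16, dim))             # same budget
    print(f"MC  estimate: {np.prod(2.0 * y, axis=1).mean():.6f}")
    ```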

  17. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    Artificial Bee Colony (ABC) is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of ABC's local search process and its bee-movement (solution-improvement) equation still has some weaknesses. ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
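
    The abstract gives the idea but not the update equations. The sketch below contrasts the standard ABC candidate move with a hypothetical, simplified PSO-flavored move of the kind HPABC describes; it is not the published HPABC equation itself.

    ```python
    # Standard ABC neighbor move versus a simplified, velocity-free
    # PSO-style move (illustrative stand-in, not the published HPABC rule).
    import numpy as np

    rng = np.random.default_rng(1)

    def abc_move(x, x_neighbor):
        """ABC: perturb one random dimension relative to a random neighbor."""
        v = x.copy()
        j = rng.integers(len(x))
        v[j] = x[j] + rng.uniform(-1, 1) * (x[j] - x_neighbor[j])
        return v

    def particle_move(x, personal_best, global_best, w=0.7, c1=1.5, c2=1.5):
        """PSO-style: every dimension pulled toward personal/global bests."""
        r1, r2 = rng.random(len(x)), rng.random(len(x))
        return x + w * (c1 * r1 * (personal_best - x)
                        + c2 * r2 * (global_best - x))

    x = rng.random(5)
    print(abc_move(x, rng.random(5)))
    print(particle_move(x, rng.random(5), rng.random(5)))
    ```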

  18. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  19. Efficient algorithm to compute mutually connected components in interdependent networks.

    PubMed

    Hwang, S; Choi, S; Lee, Deokjae; Kahng, B

    2015-02-01

    Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
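
    For contrast, the brute-force style of computation that such fast algorithms improve upon can be written directly: repeatedly split candidate node sets by the connected components of each layer's induced subgraph until the partition is stable (networkx is used for convenience; the toy two-layer network is an arbitrary example).

    ```python
    # Brute-force mutually connected components of a two-layer network.
    import networkx as nx

    def mutually_connected_components(g1, g2):
        queue, done = [set(g1.nodes())], []
        while queue:
            part = queue.pop()
            for g in (g1, g2):
                comps = list(nx.connected_components(g.subgraph(part)))
                if len(comps) > 1:
                    queue.extend(set(c) for c in comps)   # split and retry
                    break
            else:
                done.append(part)    # connected in both layers: an MCC
        return done

    g1 = nx.path_graph(6)                         # layer 1: 0-1-2-3-4-5
    g2 = nx.Graph()
    g2.add_nodes_from(range(6))
    g2.add_edges_from([(0, 1), (1, 2), (4, 5)])   # layer 2 isolates node 3
    print(sorted(sorted(c) for c in mutually_connected_components(g1, g2)))
    # [[0, 1, 2], [3], [4, 5]]
    ```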

  20. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  1. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms which are inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. The BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust: they produce excellent results and outperform the other algorithms investigated in this study.

  2. Experimentally constructing finite difference algorithms in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew; Neilsen, David; Matzner, Richard

    2002-04-01

    Computational studies of gravitational waves require numerical algorithms with long-term stability (necessary for convergence). However, constructing stable finite difference algorithms (FDAs) for the ADM formulation of the Einstein equations, especially in multiple dimensions, has proven difficult. Most FDAs are constructed using rules of thumb gained from experience with simple model equations. To search for FDAs with improved stability, we adopt a brute-force approach, where we systematically test thousands of numerical schemes. We sort the spatial derivatives of the Einstein equations into groups, and parameterize each group by finite difference type (centered or upwind) and order. Furthermore, terms proportional to the constraints are added to the evolution equations with additional parameters. A spherically symmetric, excised Schwarzschild black hole (one dimension) and linearized waves in multiple dimensions are used as model systems to evaluate the different numerical schemes.

  3. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-01

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided. PMID:26606614

  4. Two Strategies to Speed up Connected Component Labeling Algorithms

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji

    2005-11-13

    This paper presents two new strategies to speed up connected component labeling algorithms. The first strategy employs a decision tree to minimize the work performed in the scanning phase of connected component labeling algorithms. The second strategy uses a simplified union-find data structure to represent the equivalence information among the labels. For 8-connected components in a two-dimensional (2D) image, the first strategy reduces the number of neighboring pixels visited from 4 to 7/3 on average. In various tests, using a decision tree decreases the scanning time by a factor of about 2. The second strategy uses a compact representation of the union-find data structure. This strategy significantly speeds up the labeling algorithms. We prove analytically that a labeling algorithm with our simplified union-find structure has the same optimal theoretical time complexity as do the best labeling algorithms. By extensive experimental measurements, we confirm the expected performance characteristics of the new labeling algorithms and demonstrate that they are faster than other optimal labeling algorithms.
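
    A minimal two-pass labeling with union-find illustrates the data structure their second strategy simplifies; the decision-tree strategy (reading fewer neighbors) is indicated only by a comment. This is a sketch, not the paper's optimized code.

    ```python
    # Two-pass 4-connected component labeling with a union-find structure.
    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def label_4conn(img):
        labels = np.zeros(img.shape, dtype=int)
        parent, nxt = [0], 1
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                if not img[r, c]:
                    continue
                # A decision tree would skip neighbor reads whenever one
                # neighbor already decides the label; we simply read both.
                up = labels[r - 1, c] if r else 0
                left = labels[r, c - 1] if c else 0
                if up and left:
                    ru, rl = find(parent, up), find(parent, left)
                    labels[r, c] = min(ru, rl)
                    parent[max(ru, rl)] = min(ru, rl)   # record equivalence
                elif up or left:
                    labels[r, c] = up or left
                else:
                    parent.append(nxt)                  # new provisional label
                    labels[r, c] = nxt
                    nxt += 1
        for r in range(img.shape[0]):                   # second pass: resolve
            for c in range(img.shape[1]):
                if labels[r, c]:
                    labels[r, c] = find(parent, labels[r, c])
        return labels

    img = np.array([[1, 0, 1],
                    [1, 0, 1],
                    [1, 1, 1]])       # U shape: two arms merge at the bottom
    print(label_4conn(img))           # a single component labeled 1
    ```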

  5. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.

  6. A numerical algorithm for magnetohydrodynamics of ablated materials.

    PubMed

    Lu, Tianshi; Du, Jian; Samulyak, Roman

    2008-07-01

    A numerical algorithm for the simulation of magnetohydrodynamics in partially ionized ablated material is described. For the hydro part, the hyperbolic conservation laws with electromagnetic terms are solved using techniques developed for free surface flows; for the electromagnetic part, the electrostatic approximation is applied and an elliptic equation for the electric potential is solved. The algorithm has been implemented in the framework of front tracking, which explicitly tracks geometrically complex evolving interfaces. An elliptic solver based on the embedded boundary method was implemented for both two- and three-dimensional simulations. A surface model on the interface between the solid target and the ablated vapor has also been developed, as well as a numerical model for the equation of state which accounts for atomic processes in the ablated material. The code has been applied to simulations of pellet ablation in a magnetically confined plasma and of laser-ablated plasma plume expansion in magnetic fields. PMID:19051925

  7. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

    NASA Technical Reports Server (NTRS)

    Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

    2003-01-01

    Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the tools required to work with fractional models in an efficient way.
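
    One of the simplest members of this family is the Grünwald-Letnikov difference; the paper covers more sophisticated schemes (e.g., predictor-corrector methods) not shown here, but the sketch below gives the flavor and checks against a known closed form.

    ```python
    # Gruenwald-Letnikov approximation of the fractional derivative
    # D^alpha f(t), using the recurrence g_k = g_{k-1} * (1 - (alpha+1)/k)
    # for the signed binomial weights (-1)^k C(alpha, k).
    import math

    def gl_derivative(f, t, alpha, n=2000):
        h = t / n
        g, acc = 1.0, f(t)              # k = 0 term
        for k in range(1, n + 1):
            g *= 1.0 - (alpha + 1.0) / k
            acc += g * f(t - k * h)
        return acc / h ** alpha

    # Check: D^{1/2} t = 2 sqrt(t / pi).
    t = 2.0
    print(gl_derivative(lambda x: x, t, 0.5))   # ~1.5958
    print(2 * math.sqrt(t / math.pi))           #  1.5958
    ```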

  8. Numerical algorithms for the direct spectral transform with applications to nonlinear Schroedinger type systems

    SciTech Connect

    Burtsev, S.; Camassa, R.; Timofeyev, I.

    1998-11-20

    The authors implement two different algorithms for computing numerically the direct Zakharov-Shabat eigenvalue problem on the infinite line. The first algorithm replaces the potential in the eigenvalue problem by a piecewise-constant approximation, which allows one to solve analytically the corresponding ordinary differential equation. The resulting algorithm is of second order in the step size. The second algorithm uses the fourth-order Runge-Kutta method. They test and compare the performance of these two algorithms on three exactly solvable potentials. They find that even though the Runge-Kutta method is of higher order, this extra accuracy can be lost because of the additional dependence of its numerical error on the eigenvalue. This limits the usefulness of the Runge-Kutta algorithm to a region inside the unit circle around the origin in the complex plane of the eigenvalues. For the computation of the continuous spectrum density, this limitation is particularly severe, as revealed by the spectral decomposition of the L^2-norm of a solution to the nonlinear Schroedinger equation. They show that no such limitations exist for the piecewise-constant algorithm. In particular, this scheme converges uniformly for both continuous and discrete spectrum components.

  9. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development [1-3] has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo-heritage lateral-error (or azimuth-error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  10. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Turmon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
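
    The checksum idea is classical for matrix multiplication and easy to sketch; this illustration is not the JPL middleware, and the injected fault and threshold are arbitrary.

    ```python
    # Algorithm-based fault tolerance for C = A @ B: append a checksum row
    # to A and a checksum column to B, then verify the product against its
    # own row/column sums.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 64))
    B = rng.standard_normal((64, 64))

    Ac = np.vstack([A, A.sum(axis=0)])                 # checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # checksum column

    C = Ac @ Br                       # full-checksum product, shape (65, 65)
    C[10, 20] += 1e-3                 # simulate a single-event upset

    row_gap = np.abs(C[:-1, :-1].sum(axis=0) - C[-1, :-1]).max()
    col_gap = np.abs(C[:-1, :-1].sum(axis=1) - C[:-1, -1]).max()
    tol = 1e-8 * np.abs(C).max()      # normalized threshold (illustrative)
    print("fault detected:", max(row_gap, col_gap) > tol)
    ```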

  11. CxCxC: compressed connected components labeling algorithm

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Dwivedi, Shekhar

    2007-03-01

    We propose Compressed Connected Components (CxCxC), a new fast algorithm for labeling connected components in binary images making use of compression. We break the given 3D image into non-overlapping 2x2x2 cubes of voxels (2x2 squares of pixels for 2D) and encode these binary values as the bits of a single decimal integer. We perform the connected component labeling on the resulting compressed data set. A recursive labeling approach using smart masks on the encoded decimal values is performed. The output is finally decompressed back to the original size by decimal-to-binary conversion of the cubes to retrieve the connected components in a lossless fashion. We demonstrate the efficacy of such encoding and labeling for large data sets (up to 1392 x 1040 for 2D and 512 x 512 x 336 for 3D). CxCxC reports a speed gain of 4x for 2D and 12x for 3D, with memory savings of 75% for 2D and 88% for 3D, over the conventional connected components algorithm (recursive growing of component labels). We also compare our method with those of VTK and ITK and find that we outperform both, with speed gains of 3x and 6x for 3D. These features make CxCxC highly suitable for medical imaging and multi-media applications where the size of data sets and the number of connected components can be very large.

  12. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  13. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

    SciTech Connect

    Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

    2003-12-15

    We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

  14. Fast independent component analysis algorithm for quaternion valued signals.

    PubMed

    Javidi, Soroush; Took, Clive Cheong; Mandic, Danilo P

    2011-12-01

    An extension of the fast independent component analysis algorithm is proposed for the blind separation of both Q-proper and Q-improper quaternion-valued signals. This is achieved by maximizing a negentropy-based cost function, and is derived rigorously using the recently developed HR calculus in order to implement Newton optimization in the augmented quaternion statistics framework. It is shown that the use of augmented statistics and the associated widely linear modeling provides theoretical and practical advantages when dealing with general quaternion signals with noncircular (rotation-dependent) distributions. Simulations using both benchmark and real-world quaternion-valued signals support the approach. PMID:22027374

  15. Numerical algorithms for steady and unsteady incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hafez, Mohammed; Dacles, Jennifer

    1989-01-01

    The numerical analysis of the incompressible Navier-Stokes equations is becoming an important tool in the understanding of some fluid flow problems which are encountered in research as well as in industry. With the advent of supercomputers, more realistic problems can be studied with a wider choice of numerical algorithms. An alternative formulation is presented for viscous incompressible flows. The incompressible Navier-Stokes equations are cast in a velocity/vorticity formulation. This formulation consists of solving the Poisson equations for the velocity components and the vorticity transport equation. Two numerical algorithms for steady two-dimensional laminar flows are presented. The first method is based on the actual partial differential equations and uses a finite-difference approximation of the governing equations on a staggered grid. The second method uses a finite element discretization, with the vorticity transport equation approximated using a Galerkin approximation and the Poisson equations solved using a least squares method. The equations are solved efficiently using Newton's method and a banded direct matrix solver (LINPACK). The method is extended to steady three-dimensional laminar flows and applied to a cubic driven cavity using finite difference schemes and a staggered grid arrangement on a Cartesian mesh. The equations are solved iteratively using a plane zebra relaxation scheme. Currently, a two-dimensional, unsteady algorithm is being developed using a generalized coordinate system. The equations are discretized using a finite-volume approach. This work will then be extended to three-dimensional flows.

  16. Automatic algorithm to decompose discrete paths of fractional Brownian motion into self-similar intrinsic components

    NASA Astrophysics Data System (ADS)

    Vamoş, Călin; Crăciun, Maria; Suciu, Nicolae

    2015-10-01

    Fractional Brownian motion (fBm) is a nonstationary self-similar continuous stochastic process used to model many natural phenomena. A realization of the fBm can be numerically approximated by discrete paths which do not entirely preserve the self-similarity. We investigate the self-similarity at different time scales by decomposing the discrete paths of fBm into intrinsic components. The decomposition is realized by an automatic numerical algorithm based on successive smoothings stopped when the maximum monotonic variation of the averaged time series is reached. The spectral properties of the intrinsic components are analyzed through the monotony spectrum defined as the graph of the amplitudes of the monotonic segments with respect to their lengths (characteristic times). We show that, at intermediate time scales, the mean amplitude of the intrinsic components of discrete fBms scales with the mean characteristic time as a power law identical to that of the corresponding continuous fBm. As an application we consider hydrological time series of the transverse component of the transport process generated as a superposition of diffusive movements on advective transport in random velocity fields. We found that the transverse component has a rich structure of scales, which is not revealed by the analysis of the global variance, and that its intrinsic components may be self-similar only in particular cases.

  17. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association with numerical magnitude processing was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by including relevant arithmetical subskills, i.e., arithmetic fact knowledge, which is needed for complex calculation and is also known to depend on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586

  18. Numerical Analysis Of Three Component Induction Logging In Geothermal Reservoirs

    SciTech Connect

    Dr. David L. Alumbaugh

    2002-01-09

    This project supports the development of the "Geo-BILT" geothermal electromagnetic-induction logging tool being built by ElectroMagnetic Instruments, Inc. The tool consists of three mutually orthogonal magnetic field antennas and three-component magnetic field receivers located at different distances from the source. In its current configuration, the source with its moment aligned along the borehole axis consists of a 1 m long solenoid, while the two trans-axial sources consist of 1 m by 8 cm loops of wire. The receivers are located 2 m and 5 m away from the center of the sources, and five frequencies from 2 kHz to 40 kHz are employed. This study numerically investigates (1) the effect of the borehole on the measurements, and (2) the sensitivity of the tool to fracture-zone geometries that might be encountered in a geothermal field. The results will lead to a better understanding of the data that the tool produces during its testing phase and an idea of the tool's limitations.

  19. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for the computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M (not ≫ 1) firms and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.

  20. An Image Reconstruction Algorithm for Electrical Capacitance Tomography Based on Robust Principle Component Analysis

    PubMed Central

    Lei, Jing; Liu, Shi; Wang, Xueyao; Liu, Qibin

    2013-01-01

    Electrical capacitance tomography (ECT) attempts to reconstruct the permittivity distribution of the cross-section of measurement objects from capacitance measurement data, in which reconstruction algorithms play a crucial role in real applications. Based on the robust principal component analysis (RPCA) method, a dynamic reconstruction model that utilizes multiple measurement vectors is presented in this paper, in which the evolution process of a dynamic object is considered as a sequence of images with different temporal sparse deviations from a common background. An objective functional that simultaneously considers the temporal constraint and the spatial constraint is proposed, where the images are reconstructed in a batching pattern. An iteration scheme that integrates the advantages of the alternating direction iteration optimization (ADIO) method and the forward-backward splitting (FBS) technique is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm. PMID:23385418

  1. Numerical analysis of EPR spectra. 7. The simplex algorithm

    NASA Astrophysics Data System (ADS)

    Beckwith, Athelstan L. J.; Brumby, Steven

    The Simplex algorithm is well suited to the least-squares analysis of highly complex EPR spectra. The application of the algorithm to the analysis of the spectra of the benzo[a]pyrenyl-6-oxy, chloro(methoxycarbonyl)methyl, and cyano(methoxy)methyl free radicals is described.
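
    The same simplex (Nelder-Mead) search is available off the shelf today, e.g. in SciPy; the toy least-squares fit below uses a two-line Gaussian "spectrum" as a stand-in for an EPR model.

    ```python
    # Least-squares fit of a synthetic two-line spectrum by Nelder-Mead.
    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(-10.0, 10.0, 400)

    def model(p):
        a1, c1, a2, c2, w = p     # amplitudes, centers, shared width
        return a1 * np.exp(-(x - c1) ** 2 / w) + a2 * np.exp(-(x - c2) ** 2 / w)

    true = np.array([1.0, -2.0, 0.6, 3.0, 1.5])
    data = model(true) + 0.01 * np.random.default_rng(0).standard_normal(x.size)

    res = minimize(lambda p: np.sum((model(p) - data) ** 2),
                   x0=[0.8, -1.0, 0.8, 2.0, 1.0], method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
    print(res.x.round(3))         # close to [1.0, -2.0, 0.6, 3.0, 1.5]
    ```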

  2. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    NASA Astrophysics Data System (ADS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-11-01

    Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process. We also modify the algorithm to enable the labelling of gray-scale images. Due to the data dependencies in the algorithm, we used a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 for an image size of 256 × 256 pixels using 4 processing threads.

  3. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
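
    The instability referred to is easy to reproduce. The sketch below is not the U-D factorization itself, but a single-measurement update in single precision showing the conventional covariance update going indefinite while the algebraically equivalent Joseph form remains positive definite.

    ```python
    # Conventional update P = (I - KH)P versus the Joseph-form update,
    # in float32 with a badly scaled a priori covariance (illustrative).
    import numpy as np

    dt = np.float32
    P = np.array([[1e6, 0.0], [0.0, 1e-2]], dtype=dt)   # badly scaled prior
    H = np.array([[1.0, 1.0]], dtype=dt)
    R = np.array([[1e-2]], dtype=dt)
    I = np.eye(2, dtype=dt)

    K = (P @ H.T) / (H @ P @ H.T + R)       # Kalman gain, scalar measurement
    conventional = (I - K @ H) @ P
    joseph = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T

    for name, M in [("conventional", conventional), ("Joseph form", joseph)]:
        eig = np.linalg.eigvalsh((M + M.T).astype(np.float64) / 2.0)
        print(f"{name:>12}: min eigenvalue = {eig.min():+.2e}")
    # conventional: negative (indefinite); Joseph form: positive.
    ```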

  4. A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results

    NASA Astrophysics Data System (ADS)

    Carrano, Charles S.; Rino, Charles L.

    2016-06-01

    We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.

  5. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  6. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  7. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

    1988-01-01

    This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

  8. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1987-01-01

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.

  9. Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.

    ERIC Educational Resources Information Center

    Jacquot, Raymond G.; And Others

    1985-01-01

    Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
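
    The formula itself is compact: f(t) ≈ (ln 2 / t) Σ_{k=1}^{N} V_k F(k ln 2 / t), with purely combinatorial weights V_k. A direct implementation of the standard textbook form follows; as the article discusses, the available word length limits the usable N (roughly N = 12-16 in double precision).

    ```python
    # Gaver-Stehfest numerical inversion of a Laplace transform F(s).
    from math import exp, factorial, log

    def stehfest_weights(N):
        """Stehfest coefficients V_1..V_N (N must be even)."""
        V = []
        for i in range(1, N + 1):
            s = 0.0
            for k in range((i + 1) // 2, min(i, N // 2) + 1):
                s += (k ** (N // 2) * factorial(2 * k)
                      / (factorial(N // 2 - k) * factorial(k)
                         * factorial(k - 1) * factorial(i - k)
                         * factorial(2 * k - i)))
            V.append((-1) ** (N // 2 + i) * s)
        return V

    def invert(F, t, N=14):
        ln2 = log(2.0)
        V = stehfest_weights(N)
        return ln2 / t * sum(Vk * F((k + 1) * ln2 / t) for k, Vk in enumerate(V))

    # Test on F(s) = 1/(s + 1), whose inverse transform is exp(-t).
    for t in (0.5, 1.0, 2.0):
        print(t, invert(lambda s: 1.0 / (s + 1.0), t), exp(-t))
    ```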

  10. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  11. The selection of optimal ICA algorithm parameters for robust AEP component estimates using 3 popular ICA algorithms.

    PubMed

    Castañeda-Villa, N; James, C J

    2008-01-01

    Many authors have used Auditory Evoked Potential (AEP) recordings to evaluate the performance of their ICA algorithms and have demonstrated that this procedure can remove the typical EEG artifacts in these recordings (i.e., blinking, muscle noise, line noise, etc.). However, there is little work in the literature on the optimal parameters, for each of those algorithms, for the estimation of the AEP components needed to reliably recover both the auditory response and the specific artifacts generated by the normal function of a Cochlear Implant (CI), used for the rehabilitation of deaf people. In this work we determine the optimal parameters of three ICA algorithms, each based on a different independence criterion, and assess the resulting estimations of both the auditory response and the CI artifact. We show that an algorithm utilizing temporal structure, such as TDSEP-ICA, is better at estimating the components of the auditory response, in recordings contaminated by CI artifacts, than algorithms based on higher-order statistics. PMID:19163893

  12. Flexible, efficient and robust algorithm for parallel execution and coupling of components in a framework

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor

    2006-05-01

    We describe a general algorithm suitable for executing and coupling the components of a software framework on a parallel computer. The requirements of a flexible, efficient and robust algorithm are defined precisely, and the motivation for the requirements is demonstrated on several examples. In short, the requirements are the following: (i) the algorithm should allow an arbitrary distribution of processors among the components, (ii) it should allow an arbitrary coupling schedule between the components, (iii) it should not use any inter-processor communication other than that already required by the components and their couplings, and (iv) it should never get into a deadlock. We show that the proposed algorithm, based on the Temporal and Predefined Ordering of Tasks (TPOT), satisfies all these requirements. The TPOT algorithm has been implemented in the Space Weather Modeling Framework. The flexibility and efficiency of the algorithm are demonstrated with several examples.
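
    The abstract does not spell out the scheduling rule, but the core idea of a predefined ordering of tasks can be caricatured in a few lines: every processor walks the same global task list and takes part only in the tasks whose processor group it belongs to, so the execution order is identical everywhere and no extra synchronization (and hence no deadlock) can arise. The two-component setup (GM/IE) and the task list below are hypothetical, not the SWMF implementation.

        # Toy illustration of a predefined global task ordering; not the SWMF code.
        tasks = [                       # (task name, processor ranks involved)
            ("advance GM",   {0, 1, 2}),
            ("advance IE",   {3}),
            ("couple GM-IE", {0, 1, 2, 3}),
            ("advance GM",   {0, 1, 2}),
        ]

        def run(rank, tasks):
            for name, group in tasks:   # identical order on every rank
                if rank in group:       # participate only in own tasks
                    print(f"rank {rank}: {name}")

        for rank in range(4):
            run(rank, tasks)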

  13. Numerical Simulation of Cast Distortion in Gas Turbine Engine Components

    NASA Astrophysics Data System (ADS)

    Inozemtsev, A. A.; Dubrovskaya, A. S.; Dongauser, K. A.; Trufanov, N. A.

    2015-06-01

    In this paper the manufacture of multiple airfoil vanes through investment casting is considered. A mathematical model of the full contact problem is built to determine the stress-strain state in a casting during solidification. The studies are carried out in a viscoelastoplastic formulation. Numerical simulation of the process is implemented with the ProCAST software package. The results of the simulation are compared with the real production process. By means of computer analysis, the technical process parameters are optimized in order to eliminate the defect of wall-thickness variation in the casting.

  14. An Algorithm for the Hierarchical Organization of Path Diagrams and Calculation of Components of Expected Covariance.

    ERIC Educational Resources Information Center

    Boker, Steven M.; McArdle, J. J.; Neale, Michael

    2002-01-01

    Presents an algorithm for the production of a graphical diagram from a matrix formula in such a way that its components are logically and hierarchically arranged. The algorithm, which relies on the matrix equations of J. McArdle and R. McDonald (1984), calculates the individual path components of expected covariance between variables and…

  15. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers has made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for the analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NO_x production for a steady diffusion flame.

  16. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

    NASA Astrophysics Data System (ADS)

    Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

    2008-07-01

    The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium-sized aneurysms.

  17. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
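
    The coordinate-projection step mentioned above has a particularly simple form if one assumes the local orientation is stored as a quaternion per grid point: after each implicit step the quaternion is projected back onto the unit sphere to restore the |q| = 1 invariant. The function below is our own minimal sketch, not the paper's implementation.

        import numpy as np

        def project_unit_quaternion(q):
            # q has shape (..., 4); renormalize along the last axis
            return q / np.linalg.norm(q, axis=-1, keepdims=True)

        q = np.array([0.9, 0.1, 0.05, 0.4])     # drifted off the unit sphere
        print(project_unit_quaternion(q), np.linalg.norm(project_unit_quaternion(q)))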

  18. Thermal contact algorithms in SIERRA mechanics : mathematical background, numerical verification, and evaluation of performance.

    SciTech Connect

    Copps, Kevin D.; Carnes, Brian R.

    2008-04-01

    We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.

  19. A wavelet based algorithm for the identification of oscillatory event-related potential components.

    PubMed

    Aniyan, Arun Kumar; Philip, Ninan Sajeeth; Samar, Vincent J; Desjardins, James A; Segalowitz, Sidney J

    2014-08-15

    Event related potentials (ERPs) are very feeble alterations in the ongoing electroencephalogram (EEG) and their detection is a challenging problem. Based on unique time-based parameters derived from wavelet coefficients and the asymmetry property of wavelets, a novel algorithm to separate ERP components in single-trial EEG data is described. Though illustrated as a specific application to N170 ERP detection, the algorithm is a generalized approach that can be easily adapted to isolate different kinds of ERP components. The algorithm detected the N170 ERP component with a high level of accuracy. We demonstrate that the asymmetry method is more accurate than the matching wavelet algorithm and the t-CWT method by 48.67 and 8.03 percent, respectively. This paper provides an off-line demonstration of the algorithm and considers issues related to its extension to real-time applications. PMID:24931710

  20. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and

  1. A stable and efficient numerical algorithm for unconfined aquifer analysis.

    PubMed

    Keating, Elizabeth; Zyvoloski, George

    2009-01-01

    The nonlinearity of the equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well. PMID:19341374

  2. A stable and efficient numerical algorithm for unconfined aquifer analysis

    SciTech Connect

    Keating, Elizabeth; Zyvoloski, George

    2008-01-01

    The non-linearity of the equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well.

  3. Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms

    NASA Astrophysics Data System (ADS)

    Brunner, Christopher W.; Lu, Ping

    2012-09-01

    The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.

  4. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of

  5. A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses an orthogonal transformation technique; measurement and time updating of the U-D factors involve separate application of Gentleman's fast square-root-free Givens rotations. Numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter, by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square-root filtering with the efficiency of the original Kalman algorithm.
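
    The U-D factor updates themselves are too long to reproduce here, but the numerical-stability issue motivating them is easy to demonstrate. The sketch below (our own illustration, not the paper's algorithm) compares the conventional covariance update P ← (I−KH)P with the symmetric Joseph form P ← (I−KH)P(I−KH)ᵀ + KRKᵀ on a badly scaled problem; the conventional form loses symmetry in finite precision.

        import numpy as np

        def kalman_update(P, H, R, joseph):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            ImKH = np.eye(P.shape[0]) - K @ H
            return ImKH @ P @ ImKH.T + K @ R @ K.T if joseph else ImKH @ P

        P = np.diag([1e8, 1e-8])                # badly scaled covariance
        H = np.array([[1.0, 1.0]])
        R = np.array([[1e-6]])
        for joseph in (False, True):
            Pn = kalman_update(P, H, R, joseph)
            print("Joseph" if joseph else "Conventional",
                  "max asymmetry:", np.abs(Pn - Pn.T).max())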

  6. Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method

    NASA Astrophysics Data System (ADS)

    Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian

    2015-06-01

    We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave-boson approach, applicable to arbitrary boundary constraints on a high-dimensional objective function, by combining several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave-boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of our algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital-selective Mott phase and magnetism. These results show the quick convergence and robust stability of our algorithm in searching for the optimized solution of strongly correlated electron systems.

  7. An Experimental Investigation and Numerical Analysis of Multi-Component Fuel Spray

    NASA Astrophysics Data System (ADS)

    Myong, Kwang-Jae; Arai, Motoyuki; Tanaka, Tomoyuki; Senda, Jiro; Fujimoto, Hajime

    In this study, droplet atomization and vaporization characteristics of a multi-component fuel were investigated by experimental and numerical simulation methods. Spray characteristics of the multi-component fuel, including spray cone angle, spray angle and spray tip penetration, were analyzed from shadowgraph imaging. A numerical simulation to investigate the spatial distribution of the fuel-vapor concentration of each component within the multi-component fuel was implemented in the KIVA code. The vaporization process was calculated with a simple two-phase region approximated by a modified saturated liquid-vapor line. Experimental results show that the spray cone angle and spray angle become larger with increasing mass fraction of the low-boiling-point component. The spray tip penetration becomes shorter with increasing mass fraction of the low-boiling-point component in the vaporizing spray, whereas it is the same for every fuel blend in the non-vaporizing spray. The numerical simulation results show that the temporal and spatial distribution of each fuel-vapor concentration is stratified.

  8. Alternating Least Squares Algorithms for Simultaneous Components Analysis with Equal Component Weight Matrices in Two or More Populations.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; ten Berge, Jos M. F.

    1989-01-01

    Two alternating least squares algorithms are presented for the simultaneous components analysis method of R. E. Millsap and W. Meredith (1988). These methods, one for small data sets and one for large data sets, can indicate whether or not a global optimum for the problem has been attained. (SLD)

  9. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  10. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
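
    The Hockney-style model referred to above characterizes an n-word transfer by a startup latency and an asymptotic bandwidth, t(n) = α + n/β; the sketch below computes transfer times and the half-performance message length n_1/2 = αβ at which the two contributions are equal (the parameter values are invented for illustration).

        # Two-parameter latency/bandwidth communication cost model.
        alpha = 1e-6          # startup latency, seconds (assumed)
        beta = 1.25e8         # asymptotic bandwidth, words/second (assumed)

        def transfer_time(n_words):
            return alpha + n_words / beta

        print("n_1/2 =", alpha * beta, "words")
        for n in (10, 1_000, 100_000):
            print(f"{n:>7} words: {transfer_time(n):.2e} s")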

  11. Seven-Spot Ladybird Optimization: A Novel and Efficient Metaheuristic Algorithm for Numerical Optimization

    PubMed Central

    Zhu, Zhouquan

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  12. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    PubMed Central

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent Component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592

  13. Chemotactic and diffusive migration on a nonuniformly growing domain: numerical algorithm development and applications

    NASA Astrophysics Data System (ADS)

    Simpson, Matthew J.; Landman, Kerry A.; Newgreen, Donald F.

    2006-08-01

    A numerical algorithm to simulate chemotactic and/or diffusive migration on a one-dimensional growing domain is developed. The domain growth can be spatially nonuniform and the growth-derived advection term must be discretised. The hyperbolic terms in the conservation equations associated with chemotactic migration and domain growth are accurately discretised using an explicit central scheme. Generality of the algorithm is maintained using an operator split technique to simulate diffusive migration implicitly. The resulting algorithm is applicable for any combination of diffusive and/or chemotactic migration on a growing domain with a general growth-induced velocity field. The accuracy of the algorithm is demonstrated by testing the results against some simple analytical solutions and in an inter-code comparison. The new algorithm demonstrates that the form of nonuniform growth plays a critical role in determining whether a population of migratory cells is able to overcome the domain growth and fully colonise the domain.

  14. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to its forward and backward recurrences. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computation by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up obtained with the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
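
    For reference, this is the textbook serial Thomas algorithm whose forward and backward recurrences cause the pipeline idling discussed above (a, b, c are the sub-, main and super-diagonals, with a[0] and c[-1] unused; the example system is our own).

        import numpy as np

        def thomas(a, b, c, d):
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                    # forward recurrence
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):           # backward recurrence
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Example: -x[i-1] + 4 x[i] - x[i+1] = 1 on a 6-point line.
        n = 6
        print(thomas(np.full(n, -1.0), np.full(n, 4.0), np.full(n, -1.0), np.ones(n)))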

  15. Fatigue analysis of WECS (Wind Energy Conversion System) components using a rainflow counting algorithm

    SciTech Connect

    Sutherland, H.J.; Schluter, L.L.

    1990-01-01

    A "rainflow counting" algorithm has been incorporated into the LIFE2 fatigue/fracture analysis code for wind turbines. The counting algorithm, with its associated pre- and post-count algorithms, permits the code to incorporate time-series data into its analysis scheme. After a description of the algorithms used here, their use is illustrated by the examination of stress-time histories from the Sandia 34-m Test Bed vertical-axis wind turbine. The results of the rainflow analysis are compared and contrasted with previously reported predictions for the service lifetime of the fatigue-critical component of this turbine. 14 refs., 8 figs., 3 tabs.

  16. Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the preciseness of the result decreases insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  17. Algorithms for Blind Components Separation and Extraction from the Time-Frequency Distribution of Their Mixture

    NASA Astrophysics Data System (ADS)

    Barkat, B.; Abed-Meraim, K.

    2004-12-01

    We propose novel algorithms to select and extract separately all the components, using the time-frequency distribution (TFD), of a given multicomponent frequency-modulated (FM) signal. These algorithms do not use any a priori information about the various components. However, their performance depends highly on the cross-term suppression ability and high time-frequency resolution of the considered TFD. To illustrate the usefulness of the proposed algorithms, we apply them to the estimation of the instantaneous frequency coefficients of a multicomponent signal and compare the results with those of the higher-order ambiguity function (HAF) algorithm. Monte Carlo simulation results show the superiority of the proposed algorithms over the HAF.

  18. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.

  19. A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients

    SciTech Connect

    Alex, Arne; Delft, Jan von; Kalus, Matthias; Huckleberry, Alan

    2011-02-15

    We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
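
    For the SU(2) special case that the paper's exposition presumes, individual coefficients can be checked against SymPy's standard Clebsch-Gordan implementation; this check is our addition, not part of the paper's code.

        from sympy import Rational
        from sympy.physics.quantum.cg import CG

        # <j1 m1; j2 m2 | j3 m3> for two spin-1/2 particles coupled to spin 1:
        half = Rational(1, 2)
        print(CG(half, half, half, -half, 1, 0).doit())   # sqrt(2)/2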

  20. A new constrained fixed-point algorithm for ordering independent components

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Guo, Chonghui; Shi, Zhenwei; Feng, Enmin

    2008-10-01

    Independent component analysis (ICA) aims to recover a set of unknown, mutually independent components (ICs) from their observed mixtures without knowledge of the mixing coefficients. In the classical ICA model there is an indeterminacy of the ICs with respect to permutation and dilation. Constrained ICA is one method for solving this problem by introducing constraints into the classical ICA model. In this paper we first present a new constrained ICA model composed of three parts: a maximum likelihood criterion as the objective function, statistical measures as inequality constraints, and the normalization of the demixing matrix as equality constraints. Next, we incorporate the new fixed-point (newFP) algorithm into this constrained ICA model to construct a new constrained fixed-point algorithm. Simulations on synthesized signals and speech signals demonstrate that this combination can both eliminate the ICs' indeterminacy to a certain extent and provide better performance. Moreover, comparison with the existing algorithm further verifies the efficiency of our new algorithm, and shows that it is simpler to implement because it does not use a learning rate. Finally, the new algorithm is applied to real-world fetal ECG data, and the experimental results further indicate its efficiency.

  1. Dynamics analysis of electrodynamic satellite tethers. Equations of motion and numerical solution algorithms for the tether

    NASA Technical Reports Server (NTRS)

    Nacozy, P. E.

    1984-01-01

    The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit and is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs, and they allow the introduction of approximations that can lead to analytical, approximate general solutions. The formulation also gives more dynamical insight into the motion.

  2. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    SciTech Connect

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  3. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    DOE PAGESBeta

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  4. Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature

    SciTech Connect

    Bramble, J.H. . Dept. of Mathematics); Goldstein, C.I.; Pasciak, J.E. . Applied Mathematics Dept.)

    1994-05-01

    The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity-free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing sweep per grid. These results hold even in applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
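
    The quadrature-defined forms analyzed in the paper are not reproduced here, but the V-cycle structure itself, with as few as one smoothing sweep per grid, is easy to sketch for the model problem -u'' = f on (0,1) with homogeneous Dirichlet data. The smoother (weighted Jacobi), restriction (full weighting) and prolongation (linear interpolation) below are generic textbook choices of our own.

        import numpy as np

        def smooth(u, f, h, w=2.0 / 3.0):
            # one weighted-Jacobi sweep on interior points
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def vcycle(u, f, h):
            n = len(u) - 1
            if n == 2:                                  # coarsest grid: direct solve
                u[1] = f[1] * h * h / 2.0
                return u
            u = smooth(u, f, h)                         # one pre-smoothing sweep
            r = residual(u, f, h)
            rc = np.zeros(n // 2 + 1)                   # full-weighting restriction
            rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
            ec = vcycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
            e = np.zeros_like(u)
            e[::2] = ec                                 # linear prolongation
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])
            return smooth(u + e, f, h)                  # one post-smoothing sweep

        n = 64
        x = np.linspace(0.0, 1.0, n + 1)
        u, f = np.zeros(n + 1), np.sin(np.pi * x)       # exact u = sin(pi x)/pi^2
        for it in range(8):
            u = vcycle(u, f, 1.0 / n)
            print(it, np.abs(residual(u, f, 1.0 / n)).max())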

  5. Chemical components determination via terahertz spectroscopic statistical analysis using microgenetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Ma, Yong; Lu, Zheng; Xia, Zhi-Ning; Cheng, Hong

    2011-03-01

    In public security related applications, many suspicious samples may be a mixture of various chemical components that makes the usual spectral analysis difficult. In this paper, a terahertz spectroscopic statistical analysis method using a microgenetic algorithm (Micro-GA) has been proposed. Various chemical components in the mixture can be identified and the concentration of each component can be estimated based on the known spectral data of the pure chemical components. Five chemical mixtures have been tested using Micro-GA. The simulation results have shown agreement with other analytical methods. It is suggested that Micro-GA has potential applications for terahertz spectral identifications of chemical mixtures.

  6. Particle-In-Cell Multi-Algorithm Numerical Test-Bed

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.

    2015-11-01

    We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.

  7. Numerical advection algorithms and their role in atmospheric transport and chemistry models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    1987-01-01

    During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation is highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulty in assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive-definite results.
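
    Two classic schemes from the class reviewed above can be contrasted in a few lines: first-order upwind stays positive but smears the solution, while second-order Lax-Wendroff is more accurate but oscillates near steep gradients, which is exactly the accuracy/positivity trade-off the review highlights. The periodic square-pulse test below is our own.

        import numpy as np

        n, cfl = 200, 0.5
        x = np.arange(n) / n
        u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)       # square pulse

        def step_upwind(u):
            return u - cfl * (u - np.roll(u, 1))

        def step_lax_wendroff(u):
            return (u - 0.5 * cfl * (np.roll(u, -1) - np.roll(u, 1))
                      + 0.5 * cfl ** 2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

        uu, ul = u0.copy(), u0.copy()
        for _ in range(160):                                 # advect 0.4 domain lengths
            uu, ul = step_upwind(uu), step_lax_wendroff(ul)
        print("upwind min/max:      ", uu.min(), uu.max())   # smeared, stays in [0, 1]
        print("Lax-Wendroff min/max:", ul.min(), ul.max())   # over/undershoots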

  8. Improved Infomax algorithm of independent component analysis applied to fMRI data

    NASA Astrophysics Data System (ADS)

    Wu, Xia; Yao, Li; Long, Zhi-ying; Wu, Hui

    2004-05-01

    Independent component analysis (ICA) is a technique that attempts to separate data into maximally independent groups. Several ICA algorithms have been proposed in the neural network literature. Among the algorithms applied to fMRI data, the Infomax algorithm has been the most widely used so far. The Infomax algorithm maximizes the information transferred through a network of nonlinear units. The nonlinear transfer function is able to pick up higher-order moments of the input distributions and reduce the redundancy between units in the output and input. But the transfer function in the Infomax algorithm is a fixed logistic function. In this paper, an improved Infomax algorithm is proposed. In order to make the transfer function match the input data better, we add a tunable parameter to the logistic function and estimate the parameter from the input fMRI data in two ways: (1) maximizing the correlation coefficient between the transfer function and the cumulative distribution function (c.d.f.); (2) minimizing the entropy distance, based on the KL divergence, between the transfer function and the c.d.f. We apply the improved Infomax algorithm to the processing of fMRI data, and the results show that the improved algorithm is more effective in terms of fMRI data separation.

  9. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  10. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  11. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
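
    The paper integrates the master equation directly; purely as a point of reference, the sketch below runs a Gillespie-type Monte Carlo realization of the same finite-volume coalescence process with a constant kernel and monodisperse initial condition (a different, sampling-based technique, shown here only to make the underlying stochastic process concrete).

        import numpy as np

        rng = np.random.default_rng(2)

        def coalesce(n0=100, K=1.0, t_end=0.05):
            masses = [1.0] * n0                     # monodisperse initial condition
            t = 0.0
            while len(masses) > 1:
                n = len(masses)
                rate = K * n * (n - 1) / 2.0        # total pair-coalescence rate
                t += rng.exponential(1.0 / rate)
                if t > t_end:
                    break
                i, j = rng.choice(n, size=2, replace=False)
                masses[i] += masses[j]              # merge the sampled pair
                masses.pop(j)
            return masses

        print(len(coalesce()), "particles remain at t_end")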

  12. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    The genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy and reduce the number of variables used for modeling. In this study, a genetic algorithm was used as a module to select bands for the application of hyperspectral imaging to the nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over experience-based spectral regions were established in order to assess the feasibility of optimizing wavebands with the genetic algorithm, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to the 12 characteristic bands as variables, a model for the acquired SPAD values of corn leaves was established by PLS, with r = 0.7825. These results were better than those of the PLS models established over the full spectrum and over the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing the corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.

  13. Parametric effects of CFL number and artificial smoothing on numerical solutions using implicit approximate factorization algorithm

    NASA Technical Reports Server (NTRS)

    Daso, E. O.

    1986-01-01

    An implicit approximate-factorization algorithm is employed to quantify the parametric effects of Courant number and artificial smoothing on numerical solutions of the unsteady 3-D Euler equations for a windmilling propeller (low speed) flow field. The results show that propeller global or performance characteristics vary strongly with Courant number and artificial dissipation parameters, though the variation is much less severe at high Courant numbers. Candidate sets of Courant number and dissipation parameters could result in parameter-dependent solutions. Parameter-independent numerical solutions can be obtained if low values of the dissipation-parameter-to-time-step ratio are used in the computations. Furthermore, it is found that too much artificial damping can degrade numerical stability. Finally, it is demonstrated that highly resolved meshes may, in some cases, delay convergence, thereby suggesting some optimum cell size for a given flow solution. It is suspected that improper boundary treatment may account for the cell-size constraint.

  14. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily

    2016-02-01

    The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the developed algorithms, numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to (a) the realization of tensor representations of numerical schemes for direct simulation; (b) the representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); and (c) computing operations in the projections of coordinate systems, with direct and inverse transformations between these systems. Particular attention is paid to the use of the hardware and software of modern computer systems.

  15. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into a PCA method. PMID:15593379
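
    The classic SLA update that the paper modifies is Oja's subspace rule, W ← W + η(x yᵀ − W y yᵀ) with y = Wᵀx, which converges to an orthonormal basis of the principal subspace but not to the individual eigenvectors; the sketch below (our own, with made-up data) illustrates that baseline behavior.

        import numpy as np

        rng = np.random.default_rng(0)
        C = np.diag([5.0, 2.0, 0.5, 0.1])           # true covariance (assumed)
        X = rng.multivariate_normal(np.zeros(4), C, size=5000)

        W = 0.1 * rng.standard_normal((4, 2))       # track a 2-D subspace
        eta = 0.005
        for _ in range(3):                          # a few passes over the data
            for x in X:
                y = W.T @ x
                W += eta * (np.outer(x, y) - W @ np.outer(y, y))
        print(np.round(W.T @ W, 2))                 # approximately orthonormal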

  16. Numerical algorithms for computations of feedback laws arising in control of flexible systems

    NASA Technical Reports Server (NTRS)

    Lasiecka, Irena

    1989-01-01

    Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks) with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty in the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite-dimensional solutions. Finite-dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite-dimensional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.

  17. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). These fields include array processing, communications, medical signal processing, speech processing, wireless communication, audio, acoustics and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation of mixed signals in software, and to implement them in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm was required to have the lowest complexity and use the fewest resources while effectively separating mixed sources; by these criteria, the EASI algorithm was the best. The EASI ICA was implemented in FPGA hardware to analyze its performance in real time.
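
    For reference, the EASI update itself is compact enough to sketch in a few lines. The following toy (a minimal NumPy illustration, not the paper's fixed-point FPGA implementation) applies the serial Cardoso-Laheld EASI rule with a cubic nonlinearity to a synthetic two-source mixture; the mixing matrix and sources are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T = 50_000
    s = rng.uniform(-1, 1, (2, T))            # two sub-Gaussian sources
    A = np.array([[1.0, 0.6],
                  [0.4, 1.0]])                # unknown mixing matrix
    x = A @ s                                 # observed mixture

    W = np.eye(2)                             # separating matrix
    mu = 1e-3
    for t in range(T):
        y = W @ x[:, t]
        gy = y ** 3                           # cubic nonlinearity
        # EASI relative-gradient step: whitening term + higher-order term
        H = np.outer(y, y) - np.eye(2) + np.outer(gy, y) - np.outer(y, gy)
        W -= mu * H @ W

    print(np.round(W @ A, 2))                 # ~ scaled permutation if separated
    ```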

  18. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole–Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  19. PEGAS: Hydrodynamical code for numerical simulation of the gas components of interacting galaxies

    NASA Astrophysics Data System (ADS)

    Kulikov, Igor

    A new hydrodynamical code for the numerical simulation of gravitational gas dynamics is described in the paper. The code is based on the Fluid-in-Cell method with a Godunov-type scheme at the Eulerian stage. The numerical method was adapted for GPU-based supercomputers. The performance of the code is demonstrated by simulating the collision of the gas components of two similar disc galaxies during a central, polar-direction collision of the galaxies.

  20. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima rather than fall into them easily, and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424
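
    A minimal sketch of the kind of update the abstract describes is given below (an assumed parameterization borrowed from generic quantum-behaved swarm optimizers, not the paper's exact equations): each bat moves around a local attractor between its personal best and the global best, with a jump length scaled by its distance to the swarm's mean best position.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def sphere(x):                               # toy benchmark objective
        return np.sum(x ** 2, axis=-1)

    n_bats, dim, iters = 30, 10, 2000
    x = rng.uniform(-5, 5, (n_bats, dim))
    pbest = x.copy()
    pcost = sphere(pbest)

    for t in range(iters):
        gbest = pbest[np.argmin(pcost)]
        mbest = pbest.mean(axis=0)               # mean best position
        beta = 1.0 - 0.5 * t / iters             # contraction-expansion coefficient
        phi = rng.random((n_bats, dim))
        p = phi * pbest + (1 - phi) * gbest      # local attractor per bat
        u = rng.random((n_bats, dim))
        sign = np.where(rng.random((n_bats, dim)) < 0.5, 1.0, -1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        cost = sphere(x)
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]

    print(pcost.min())                           # approaches 0 on the sphere function
    ```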

  1. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima rather than fall into them easily, and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  2. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-sized spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-sized aberration terms, the precise phase aberration map is obtained and can be compensated by multiplication with its conjugate. Because the phase aberration is estimated from the limited but more relevant raw data, the compensation precision is improved and the computation time is significantly reduced. Experimental results demonstrate that the proposed technique achieves both high accuracy and robustness compared with other developed compensation methods. PMID:26977692
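
    The core PCA step can be illustrated on synthetic data. The sketch below (an illustration under assumed conditions: a pure quadratic aberration, no object, and the Fourier-mask and oversampling steps omitted) uses the fact that a smooth separable aberration makes the complex field nearly rank-1, so its first principal component, obtained by SVD, recovers the aberration phase for conjugate compensation.

    ```python
    import numpy as np

    ny, nx = 256, 256
    y, x = np.mgrid[0:ny, 0:nx]
    aberration = 2e-4 * ((x - nx / 2) ** 2 + (y - ny / 2) ** 2)  # quadratic phase
    field = np.exp(1j * aberration)         # aberrated wavefront (object-free)

    # PCA via SVD: the first principal component gives a rank-1 (separable)
    # estimate of the aberrated field
    U, s, Vh = np.linalg.svd(field)
    estimate = s[0] * np.outer(U[:, 0], Vh[0])

    # compensate by multiplying with the conjugate of the estimated aberration
    compensated = field * np.conj(estimate / np.abs(estimate))
    print(np.ptp(np.angle(compensated)))    # residual phase spread ~ 0
    ```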

  3. Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD

    SciTech Connect

    Luz, Fernando H. P.; Mendes, Tereza

    2010-11-12

    Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.
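
    Both the R and RHMC algorithms build on the hybrid (Hamiltonian) Monte Carlo skeleton: molecular-dynamics evolution with refreshed momenta, followed (in the exact variants) by a Metropolis accept/reject step. The toy sketch below shows that skeleton for a trivial Gaussian action rather than the lattice QCD action with dynamical quarks.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def S(q):                 # toy action (the lattice QCD action would go here)
        return 0.5 * np.sum(q ** 2)

    def grad_S(q):
        return q

    def hmc_step(q, n_md=20, dt=0.1):
        p = rng.standard_normal(q.shape)        # refresh conjugate momenta
        H0 = S(q) + 0.5 * np.sum(p ** 2)
        qn, pn = q.copy(), p.copy()
        pn -= 0.5 * dt * grad_S(qn)             # leapfrog molecular dynamics
        for _ in range(n_md - 1):
            qn += dt * pn
            pn -= dt * grad_S(qn)
        qn += dt * pn
        pn -= 0.5 * dt * grad_S(qn)
        H1 = S(qn) + 0.5 * np.sum(pn ** 2)
        if rng.random() < np.exp(min(0.0, H0 - H1)):   # Metropolis accept/reject
            return qn, True
        return q, False

    q, accepted = np.zeros(100), 0
    for _ in range(1_000):
        q, ok = hmc_step(q)
        accepted += ok
    print(accepted / 1_000)   # high acceptance: leapfrog conserves H well
    ```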

  4. A semi-numerical algorithm for instability of compressible multilayered structures

    NASA Astrophysics Data System (ADS)

    Tang, Shan; Yang, Yang; Peng, Xiang He; Liu, Wing Kam; Huang, Xiao Xu; Elkhodary, Khalil

    2015-07-01

    A computational method is proposed for the analysis and prediction of instability (wrinkling or necking) of multilayered compressible plates and sheets made of metals or polymers under plane strain conditions. In previous works, a basic assumption (or physical argument) frequently made to simplify the mathematical derivations is that the materials are incompressible. To account for the compressibility of metals and polymers (a lower Poisson's ratio corresponds to a more compressible material), we propose a combined semi-numerical algorithm and finite element method for instability analysis. Our proposed algorithm is verified by comparing its predictions with published results in the literature for thin films on polymer/metal substrates and for polymer/metal systems. The new combined method is then used to predict the effects of compressibility on instability behavior. The results suggest a potential utility for compressibility in the design of multilayered structures.

  5. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  6. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  7. New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms

    PubMed Central

    Retsky, Michael

    2009-01-01

    A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287

  8. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    A detailed description of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC-88) on the use of the geostationary orbit (GEO) and the planning of space services utilizing it. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

  9. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  10. Shock focusing flow field simulated by a high-resolution numerical algorithm

    NASA Astrophysics Data System (ADS)

    Jung, Y. G.; Chang, K. S.

    2012-11-01

    A shock-focusing concave reflector is a very simple and effective tool for obtaining a high-pressure pulse wave near the physical focal point. In the past, many optical images were obtained through experimental studies. However, measurement of the field variables is not easy, because the phenomenon is of short duration and the magnitude of the shock waves varies from pulse to pulse due to poor reproducibility. Using a wave propagation algorithm and the Cartesian embedded boundary method, we have successfully obtained numerical schlieren images that resemble the experimental results. With the numerical results, various field variables, such as pressure, density and vorticity, become available for the better understanding and design of shock-focusing devices.

  11. Highly efficient numerical algorithm based on random trees for accelerating parallel Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Acebrón, Juan A.; Rodríguez-Rozas, Ángel

    2013-10-01

    An efficient numerical method based on a probabilistic representation for the Vlasov-Poisson system of equations in Fourier space has been derived. This has been done theoretically for arbitrary dimensional problems, and particularized to one-dimensional problems for numerical purposes. The representation has been validated theoretically in the linear regime by comparing the solution obtained with the classical results of linear Landau damping. The numerical strategy requires generating suitable random trees combined with a Padé approximant for accurately approximating a given divergent series. Such series are obtained by summing the partial contributions to the solution coming from trees with an arbitrary number of branches. These contributions, coming in general from multi-dimensional definite integrals, are efficiently computed by a quasi-Monte Carlo method. It is shown how the accuracy of the method can be effectively increased by considering more terms of the series. The new representation was used successfully to develop a Probabilistic Domain Decomposition method suited for massively parallel computers, which improves the scalability found in classical methods. Finally, a few numerical examples based on classical phenomena such as the non-linear Landau damping and the two-stream instability are given, illustrating the remarkable performance of the algorithm when the results are compared with those obtained using a classical method.
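
    The role the Padé approximant plays in resumming a divergent series can be shown on the textbook Euler series (a generic illustration, not the paper's Vlasov-Poisson tree expansion): partial sums oscillate with eventually growing amplitude, while a Padé approximant built from the same coefficients settles near the resummed value.

    ```python
    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    x = 0.2
    an = [(-1) ** n * factorial(n) for n in range(8)]   # Euler series coefficients

    partial_sums = np.cumsum([a * x ** n for n, a in enumerate(an)])
    p, q = pade(an, 4)                     # [3/4] Pade approximant
    print(partial_sums)                    # oscillating, eventually diverging sums
    print(p(x) / q(x))                     # ~0.85, near the Borel sum of the series
    ```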

  12. A Novel Algorithm for Independent Component Analysis with Reference and Methods for Its Applications

    PubMed Central

    Mi, Jian-Xun

    2014-01-01

    This paper presents a stable and fast algorithm for independent component analysis with reference (ICA-R). This is a technique for incorporating available reference signals into the ICA contrast function so as to form an augmented Lagrangian function under the framework of constrained ICA (cICA). The previous ICA-R algorithm was constructed by solving the optimization problem via a Newton-like learning style. Unfortunately, slow convergence and potential misconvergence limit the capability of ICA-R. This paper first investigates the flaws of the previous algorithm and then introduces a new stable algorithm with a faster convergence speed. There are two other highlights in this paper: first, new approaches, including a reference deflation technique and a direct way of obtaining references, are introduced to facilitate the application of ICA-R; second, a new method is proposed in which the new ICA-R is used to recover the complete set of underlying sources, with new advantages compared with other classical ICA methods. Finally, experiments on both synthetic and real-world data verify the better performance of the new algorithm over both the previous ICA-R and other well-known methods. PMID:24826986

  13. Thermodynamically Consistent Physical Formulation and an Efficient Numerical Algorithm for Incompressible N-Phase Flows

    NASA Astrophysics Data System (ADS)

    Dong, Suchuan

    2015-11-01

    This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservation of mass and momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm overcomes the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.

  14. Numerical investigation of spray ignition of a multi-component fuel surrogate

    NASA Astrophysics Data System (ADS)

    Backer, Lara; Narayanaswamy, Krithika; Pepiot, Perrine

    2014-11-01

    Simulating turbulent spray ignition, an important process in engine combustion, is challenging, since it combines the complexity of multi-scale, multiphase turbulent flow modeling with the need for an accurate description of chemical kinetics. In this work, we use direct numerical simulation to investigate the role of the evaporation model on the ignition characteristics of a multi-component fuel surrogate, injected as droplets in a turbulent environment. The fuel is represented as a mixture of several components, each one being representative of a different chemical class. A reduced kinetic scheme for the mixture is extracted from a well-validated detailed chemical mechanism, and integrated into the multiphase turbulent reactive flow solver NGA. Comparisons are made between a single-component evaporation model, in which the evaporating gas has the same composition as the liquid droplet, and a multi-component model, where component segregation does occur. In particular, the corresponding production of radical species, which are characteristic of the ignition of individual fuel components, is thoroughly analyzed.

  15. A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.

    PubMed

    Van de Sompel, Dominique; Garai, Ellis; Zavaleta, Cristina; Gambhir, Sanjiv Sam

    2012-01-01

    Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles. PMID:22723895
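
    The essence of the extended least-squares model can be sketched with synthetic spectra (placeholder data and shapes; the eigenvalue constraint on the component coefficients is omitted for brevity): the design matrix pairs the analyte reference with the mean and leading principal components of training background spectra, so the fit can absorb background variability without biasing the analyte concentration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_ch = 500                                     # spectral channels
    grid = np.linspace(0, 1, n_ch)

    analyte_ref = np.exp(-0.5 * ((grid - 0.4) / 0.01) ** 2)   # reference spectrum
    train_bg = np.vstack([                         # training background spectra
        1.0 + 0.3 * rng.standard_normal() * grid
        + 0.2 * rng.standard_normal() * grid ** 2
        for _ in range(50)
    ])

    # PCA of the training backgrounds (mean + leading eigenvectors)
    bg_mean = train_bg.mean(axis=0)
    U, s, Vh = np.linalg.svd(train_bg - bg_mean, full_matrices=False)
    B = Vh[:2].T                                   # allowed background variations

    # measured spectrum: analyte + a new background from the same family + noise
    true_c = 0.7
    measured = (true_c * analyte_ref + 1.0 + 0.25 * grid + 0.18 * grid ** 2
                + 0.01 * rng.standard_normal(n_ch))

    # hybrid least squares: design matrix = [analyte | background mean | PCs]
    A = np.column_stack([analyte_ref, bg_mean, B])
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    print(coef[0])                                 # estimated concentration ~0.7
    ```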

  16. Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI.

    PubMed

    Vemuri, B C; Guo, Y; Lai, S H; Leonard, C M

    1997-09-01

    In this paper, we present new and fast numerical algorithms for shape recovery from brain MRI using multiresolution hybrid shape models. In this modeling framework, shapes are represented by a core rigid shape characterized by a superquadric function and a superimposed displacement function which is characterized by a membrane spline discretized using the finite-element method. Fitting the model to brain MRI data is cast as an energy minimization problem which is solved numerically. We present three new computational methods for model fitting to data. These methods involve novel mathematical derivations that lead to efficient numerical solutions of the model fitting problem. The first method involves using the nonlinear conjugate gradient technique with a diagonal Hessian preconditioner. The second method involves the nonlinear conjugate gradient in the outer loop for solving global parameters of the model and a preconditioned conjugate gradient scheme for solving the local parameters of the model. The third method involves the nonlinear conjugate gradient in the outer loop for solving the global parameters and a combination of the Schur complement formula and the alternating direction-implicit method for solving the local parameters of the model. We demonstrate the efficiency of our model fitting methods via experiments on several MR brain scans. PMID:9873915
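
    As an illustration of the first ingredient, the sketch below implements preconditioned nonlinear conjugate gradient (Polak-Ribière form) with a diagonal Hessian preconditioner on a deliberately ill-conditioned toy quadratic; the shape-model energy and its Hessian diagonal are replaced by placeholder functions.

    ```python
    import numpy as np

    d = np.logspace(0, 4, 50)          # toy diagonal Hessian, condition number 1e4

    def grad(x):                       # gradient of f(x) = 0.5 * sum(d * x**2)
        return d * x

    M_inv = 1.0 / d                    # diagonal Hessian preconditioner

    x = np.ones(50)
    g = grad(x)
    z = M_inv * g                      # preconditioned gradient
    p = -z
    for k in range(200):
        alpha = -(g @ p) / (p @ (d * p))          # exact line search (quadratic)
        x = x + alpha * p
        g_new = grad(x)
        if np.linalg.norm(g_new) < 1e-10:
            break
        z_new = M_inv * g_new
        beta = max(0.0, z_new @ (g_new - g) / (z @ g))   # Polak-Ribiere(+)
        p = -z_new + beta * p
        g, z = g_new, z_new

    print(k, np.linalg.norm(x))        # this exact preconditioner solves it in one
                                       # step; without it, dozens of iterations
    ```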

  17. Quantitative performance evaluation of a blurring restoration algorithm based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Huebner, Claudia; Marchi, Gabriele

    2008-10-01

    In the field of blind image deconvolution, a new promising algorithm, based on Principal Component Analysis (PCA), has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution (IBD) method); it is robust to white noise; and only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation case (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; the previously unexplored issue is then considered and the achieved results are compared with those of the IBD method, which is used as a benchmark.

  18. Numerical study of 1-D, 3-vector component, thermally-conductive MHD solar wind

    NASA Technical Reports Server (NTRS)

    Han, S.; Wu, S. T.; Dryer, M.

    1993-01-01

    In the present study, transient, 1-dimensional, 3-vector-component MHD equations are used to simulate steady and unsteady, thermally conductive MHD solar wind expansions between the solar surface and 1 AU (astronomical unit). A variant of the SIMPLE numerical method was used to integrate the equations. The steady state solar wind properties exhibit qualitatively similar behavior to the known Weber-Davis solutions. Generation of an Alfven shock, in addition to the slow and fast MHD shocks, was attempted by boundary perturbations at the solar surface. Property changes through the disturbance were positively correlated with the fast and slow MHD shocks. An Alfven shock was, however, not observed in the present simulations.

  19. Finite element modeling and numerical simulation of sintered tungsten components under hydrogen atmosphere

    NASA Astrophysics Data System (ADS)

    Mamen, B.; Song, J.; Barriere, T.; Gelin, J.-C.

    2013-05-01

    Powder injection molding (PIM) is a suitable technology for manufacturing complex shapes with tungsten powders and has great potential in many applications. Sintering is one of the most important steps in the powder injection molding process. The sintering behaviour of tungsten injection moulded components under a pure hydrogen atmosphere at temperatures up to 1700°C, using fine (0.4 μm) and coarse (7.0 μm) powders, is investigated by means of beam bending and dilatometric tests in a Setaram analyser. To simulate the shrinkage and shape distortion of tungsten injection moulded components during the sintering process using finite element methods, a viscoplastic constitutive law is implemented in the ABAQUS software as a user subroutine (UMAT) and incorporated with the identified parameters. Comparison between the numerical simulation results and the experimental ones, in terms of shrinkage and sintered density, shows good agreement between the two.

  20. Bearing fault component identification using information gain and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Vinay, Vakharia; Kumar, Gupta Vijay; Kumar, Kankar Pavan

    2015-04-01

    In the present study an attempt has been made to identify various bearing faults using machine learning algorithms. Vibration signals obtained from faults in the inner race, the outer race and the rolling elements, as well as from combined faults, are considered. The raw vibration signal cannot be used directly, since vibration signals are masked by noise. To overcome this difficulty, a combined time-frequency domain method, the wavelet transform, is used. Further, a wavelet selection criterion based on minimum permutation entropy is employed to select the most appropriate base wavelet. Statistical features of the selected wavelet coefficients are calculated to form a feature vector. To reduce the size of the feature vector, the information gain attribute selection method is employed. The modified feature set is fed into machine learning algorithms, such as random forest and self-organizing map, to maximize the fault identification efficiency. The results obtained reveal that the attribute selection method improves the fault identification accuracy for bearing components.
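
    A minimal software analogue of this pipeline is easy to sketch with scikit-learn (synthetic stand-in features; the wavelet and permutation-entropy steps are not reproduced, and mutual information serves as the information-gain-style criterion):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # synthetic stand-in for the wavelet-derived statistical features,
    # with four classes mimicking four bearing fault conditions
    X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                               n_classes=4, random_state=0)

    clf = make_pipeline(
        SelectKBest(mutual_info_classif, k=10),   # keep 10 most informative features
        RandomForestClassifier(n_estimators=200, random_state=0),
    )
    print(cross_val_score(clf, X, y, cv=5).mean())  # fault identification accuracy
    ```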

  1. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is larger than the viscous relaxation time. The small time step required for stability (< 2 kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  2. International Symposium on Computational Electronics—Physical Modeling, Mathematical Theory, and Numerical Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yiming

    2007-12-01

    This symposium is an open forum for discussion of the current trends and future directions of physical modeling, mathematical theory, and numerical algorithms in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers in academia and industry. All papers to be presented in this symposium have been carefully reviewed and selected. They cover semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation systems, and wireless communication. Welcome to this interdisciplinary symposium at the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!

  3. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  4. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, a description of the input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  5. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  6. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  7. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  8. Deconvolution of complex spectra into components by the bee swarm algorithm

    NASA Astrophysics Data System (ADS)

    Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh

    2016-05-01

    The bee swarm algorithm is adapted for solving the problem of deconvolution of complex spectral contours into components. A comparison of the biological concepts relating to the behaviour of bees in a colony and the mathematical concepts relating to the quality of the obtained solutions (mean square error, random solutions in each iteration) is carried out. Model experiments, realized on the example of a signal representing a sum of three Lorentzian contours of various intensities and half-widths, confirm the efficiency of the proposed approach.
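
    A generic artificial-bee-colony-style sketch of this model experiment is given below (a minimal illustration with assumed colony parameters, not the authors' exact algorithm): candidate decompositions act as "food sources", employed and onlooker bees perturb them toward random partners, scouts re-seed exhausted sources, and the fitness is the mean square error of the reconstructed contour.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(-10, 10, 400)

    def model(p):               # p = three (amplitude, center, half-width) triplets
        return sum(A * w ** 2 / ((x - c) ** 2 + w ** 2) for A, c, w in p.reshape(3, 3))

    true_p = np.array([1.0, -3.0, 0.8, 0.6, 0.5, 1.2, 0.9, 4.0, 0.5])
    signal = model(true_p) + 0.01 * rng.standard_normal(x.size)

    def mse(p):
        return np.mean((model(p) - signal) ** 2)

    lo, hi = np.array([0.1, -8.0, 0.2] * 3), np.array([2.0, 8.0, 3.0] * 3)
    n_bees, limit = 30, 50
    food = rng.uniform(lo, hi, (n_bees, 9))        # food sources = candidate fits
    cost = np.array([mse(p) for p in food])
    stale = np.zeros(n_bees, dtype=int)

    for it in range(2000):
        fitness = 1.0 / (1.0 + cost)
        onlookers = rng.choice(n_bees, n_bees, p=fitness / fitness.sum())
        for i in np.concatenate([np.arange(n_bees), onlookers]):
            j, d = rng.integers(n_bees), rng.integers(9)
            trial = food[i].copy()
            trial[d] += rng.uniform(-1, 1) * (food[i, d] - food[j, d])
            trial = np.clip(trial, lo, hi)
            c = mse(trial)
            if c < cost[i]:
                food[i], cost[i], stale[i] = trial, c, 0
            else:
                stale[i] += 1
        for i in np.where(stale > limit)[0]:       # scouts re-seed stale sources
            food[i] = rng.uniform(lo, hi)
            cost[i], stale[i] = mse(food[i]), 0

    print(np.round(food[np.argmin(cost)].reshape(3, 3), 2))  # ~ (A, c, w), up to ordering
    ```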

  9. A numerical study of non-collinear wave mixing and generated resonant components.

    PubMed

    Sun, Zhenghao; Li, Fucai; Li, Hongguang

    2016-09-01

    The interaction of two non-collinear nonlinear ultrasonic waves in an elastic half-space with quadratic nonlinearity is investigated in this paper. A hyperbolic system of conservation laws is applied, and a semi-discrete central scheme is used to solve the numerical problem. The numerical results validate that the model can be used as an effective method to generate and evaluate a resonant wave when two primary waves mix together under certain resonant conditions. Features of the resonant wave are analyzed in both the time and frequency domains, and the variation trends of the resonant waves together with the second harmonics along the propagation path are analyzed. With the pulse-inversion technique applied, the components of the resonant waves and second harmonics can be independently extracted and observed without distinguishing times of flight. The results show that under the circumstance of non-collinear wave mixing, both the sum and difference resonant components can be clearly obtained, especially in the tangential direction of their propagation. For several rays of observation points around the interaction zone, the further a point is from the excitation sources, generally the earlier the amplitude maximum arises. From the parametric analysis of the phased array, it is found that both the length of the array and the density of its elements affect the maximum amplitude of the resonant waves. The spatial distribution of the resonant waves will provide necessary information for related experiments. PMID:27403643

  10. Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    NASA Astrophysics Data System (ADS)

    Kitaura, F. S.; Enßlin, T. A.

    2008-09-01

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can be investigated jointly by a Gibbs-sampling process. Such a method can be applied to correct for the redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field.
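
    The FFT-operator viewpoint is easiest to see for the plain Wiener filter on a 1-D periodic toy problem (a generic sketch with an assumed power spectrum, far simpler than ARGO's generalized setting): with the signal covariance S and white-noise covariance N diagonal in Fourier space, the posterior mean is obtained by a single forward/inverse FFT pair.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 256
    k = np.fft.fftfreq(n) * n
    S = 1.0 / (1.0 + np.abs(k)) ** 2                   # assumed signal power spectrum
    noise_var = 0.05

    # draw a Gaussian random signal with spectrum S by filtering white noise
    s = np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * np.sqrt(S)).real
    d = s + np.sqrt(noise_var) * rng.standard_normal(n)   # noisy data

    # Wiener filter, diagonal in Fourier space: s_hat = F^-1 [S/(S+N)] F d
    W = S / (S + noise_var)
    s_hat = np.fft.ifft(W * np.fft.fft(d)).real

    print(np.std(d - s), np.std(s_hat - s))            # residual shrinks after filtering
    ```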

  11. Study on sub-cycling algorithm for flexible multi-body system: stability analysis and numerical examples

    NASA Astrophysics Data System (ADS)

    Miao, J. C.; Zhu, P.; Shi, G. L.; Chen, G. L.

    2008-01-01

    Numerical stability is an important issue for any integration procedure. Since the sub-cycling algorithm was presented by Belytschko et al. (Comput Methods Appl Mech Eng 17/18:259-275, 1979), various kinds of these integration procedures have been developed over the following 20 years and their stability widely studied. However, how to apply sub-cycling to flexible multi-body dynamics (FMD) has remained uninvestigated up to now. A particular sub-cycling algorithm for FMD based on the central difference method was introduced in detail in part I of this paper (Miao et al. in Comp Mech, doi: 10.1007/s00466-007-0183-9). Adopting an integral approximation operator method, the stability analysis of the presented algorithm is transformed into a generalized eigenvalue problem, which is then solved and discussed. Numerical examples are performed to further verify the applicability and efficiency of the algorithm.

  12. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.

  13. A multi-component two-phase flow algorithm for use in landfill processes modelling.

    PubMed

    White, J K; Nayagum, D; Beaven, R P

    2014-09-01

    This paper describes the finite difference algorithm that has been developed for the flow sub-model of the University of Southampton landfill degradation and transport model LDAT. The liquid and gas phase flow components are first decoupled from the solid phase of the full multi-phase, multi-component landfill process constitutive equations and are then rearranged into a format that can be applied as a calculation procedure within the framework of a three dimensional array of finite difference rectangular elements. The algorithm contains a source term which accommodates the non-flow landfill processes of degradation, gas solubility, and leachate chemical equilibrium, sub-models that have been described in White and Beaven (2013). The paper includes an illustration of the application of the flow sub-model in the context of the leachate recirculation tests carried out at the Beddington landfill project. This illustration demonstrates the ability of the sub-model to track movement in the gas phase as well as the liquid phase, and to simulate multi-directional flow patterns that are different in each of the phases. PMID:24925875

  14. A Simple Algorithm for Finding All k-Edge-Connected Components

    PubMed Central

    Wang, Tianhao; Zhang, Yong; Chin, Francis Y. L.; Ting, Hing-Fung; Tsin, Yung H.; Poon, Sheung-Hung

    2015-01-01

    The problem of finding k-edge-connected components is a fundamental problem in computer science. Given a graph G = (V, E), the problem is to partition the vertex set V into {V1, V2,…, Vh}, where each Vi is maximized, such that for any two vertices x and y in Vi, there are k edge-disjoint paths connecting them. In this paper, we present an algorithm to solve this problem for all k. The algorithm preprocesses the input graph to construct an Auxiliary Graph to store information concerning edge-connectivity among every vertex pair in O(Fn) time, where F is the time complexity to find the maximum flow between two vertices in graph G and n = ∣V∣. For any value of k, the k-edge-connected components can then be determined by traversing the auxiliary graph in O(n) time. The input graph can be a directed or undirected, simple graph or multigraph. Previous works on this problem mainly focus on fixed value of k. PMID:26368134
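
    NetworkX ships an implementation of this auxiliary-graph algorithm (used for general k; k = 1 and k = 2 have faster special cases), so the partition described above can be queried directly:

    ```python
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([(0, 1), (1, 2), (2, 0),        # a 2-edge-connected triangle
                      (2, 3),                         # bridge
                      (3, 4), (4, 5), (5, 3)])        # another triangle

    for k in (1, 2, 3):
        print(k, sorted(map(sorted, nx.k_edge_components(G, k=k))))
    # k=1: one component; k=2: the two triangles separate; k=3: singletons
    ```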

  15. A new blind fault component separation algorithm for a single-channel mechanical signal mixture

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2012-10-01

    A vibration signal collected from a complex machine consists of multiple vibration components, which are system responses excited by several sources. This paper reports a new blind component separation (BCS) method for extracting different mechanical fault features. By applying the proposed method, a single-channel mixed signal can be decomposed into two parts: the periodic and transient subsets. The periodic subset is related to the imbalance, misalignment and eccentricity of a machine. The transient subset refers to abnormal impulsive phenomena, such as those caused by localized bearing faults. The proposed method includes two individual strategies to deal with these different characteristics. The first extracts the sub-Gaussian periodic signal by minimizing the kurtosis of the equalized signals. The second detects the super-Gaussian transient signal by minimizing the smoothness index of the equalized signals. Here, the equalized signals are derived by an eigenvector algorithm that is a successful solution to the blind equalization problem. To reduce the computing time needed to select the equalizer length, a simple optimization method is introduced to minimize the kurtosis and smoothness index, respectively. Finally, simulated multiple-fault signals and a real multiple-fault signal collected from an industrial machine are used to validate the proposed method. The results show that the proposed method is able to effectively decompose the multiple-fault vibration mixture into periodic components and random non-stationary transient components. In addition, the equalizer length can be intelligently determined using the proposed method.
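
    The first strategy can be caricatured in a few lines (a simplified stand-in: a generic optimizer over a short FIR filter rather than the paper's eigenvector-algorithm equalizer): minimizing the kurtosis of the filtered output steers the filter toward the sub-Gaussian periodic component of a periodic-plus-impulsive mixture.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import kurtosis

    rng = np.random.default_rng(7)
    t = np.arange(4096)
    periodic = np.sin(2 * np.pi * t / 50)          # imbalance-like harmonic part
    impulses = (rng.random(t.size) < 0.005) * 8 * rng.standard_normal(t.size)
    x = periodic + impulses                        # single-channel mixture

    L = 16                                         # short FIR "equalizer"
    def filtered_kurtosis(w):
        y = np.convolve(x, w, mode="valid")
        return kurtosis(y)                         # excess kurtosis (scale-invariant)

    res = minimize(filtered_kurtosis, rng.standard_normal(L) * 0.1,
                   method="Nelder-Mead", options={"maxiter": 10_000})
    print(kurtosis(x), res.fun)    # kurtosis drops toward the sinusoid's -1.5
    ```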

  16. Towards the optimal design of an uncemented acetabular component using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay

    2015-12-01

    Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.
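
    A toy version of the optimization machinery (placeholder objectives standing in for the finite-element bone-remodelling and wear predictions; a deliberately simple elitist GA rather than a full NSGA-II) shows how a Pareto-optimal front of candidate thickness designs is accumulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def objectives(X):
        # hypothetical trade-off: f1 ~ "volumetric wear", f2 ~ "bone density loss"
        return np.column_stack([np.sum(X ** 2, axis=1),
                                np.sum((X - 2.0) ** 2, axis=1)])

    def nondominated(F):
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
            keep &= ~dominated
        return keep

    dim, pop_n = 3, 60                    # e.g. three component-thickness parameters
    pop = rng.uniform(0.0, 3.0, (pop_n, dim))
    for gen in range(100):
        elite = pop[nondominated(objectives(pop))]
        parents = elite[rng.integers(len(elite), size=(pop_n, 2))]
        alpha = rng.random((pop_n, dim))  # blend crossover + Gaussian mutation
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        children += 0.1 * rng.standard_normal(children.shape)
        pop = np.clip(np.vstack([elite, children])[:pop_n], 0.0, 3.0)

    front = nondominated(objectives(pop))
    print(front.sum(), np.round(objectives(pop)[front].min(axis=0), 3))
    # front size and the best attainable value of each objective
    ```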

  17. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high-resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 × 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realised with high performance and perfectly suits the needs of many video applications. Since these restrictions are incompatible with the requirements of labeling high-resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline-processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
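
    For reference, the software baseline such designs are measured against is the classic two-pass labeling scheme. A minimal Python version (8-connectivity, union-find with path halving; purely illustrative, far from the pipelined FPGA design) looks like this:

    ```python
    import numpy as np

    def two_pass_ccl(img):
        parent = [0]                                  # union-find over labels
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]         # path halving
                a = parent[a]
            return a
        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[max(ra, rb)] = min(ra, rb)

        h, w = img.shape
        labels = np.zeros((h, w), dtype=int)
        nxt = 1
        for y in range(h):                            # pass 1: provisional labels
            for x in range(w):
                if not img[y, x]:
                    continue
                neigh = [labels[y + dy, x + dx]
                         for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                         if 0 <= y + dy and 0 <= x + dx < w and labels[y + dy, x + dx]]
                if not neigh:
                    parent.append(nxt)
                    labels[y, x] = nxt
                    nxt += 1
                else:
                    m = min(neigh)
                    labels[y, x] = m
                    for nb in neigh:
                        union(m, nb)
        flat = {0: 0}                                 # pass 2: resolve equivalences
        for lbl in range(1, nxt):
            flat.setdefault(find(lbl), len(flat))
        for y in range(h):
            for x in range(w):
                labels[y, x] = flat[find(labels[y, x])] if labels[y, x] else 0
        return labels

    img = np.array([[1, 1, 0, 0, 1],
                    [0, 1, 0, 1, 0],
                    [0, 0, 0, 0, 1],
                    [1, 0, 1, 1, 0]], dtype=np.uint8)
    print(two_pass_ccl(img))   # three components: blob, diagonal chain, lone pixel
    ```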

  18. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of the coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize the constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and the constraints on those coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions in multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed model and numerical algorithm in the reaction transport simulator
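
    The harmonic averaging of drained bulk moduli stated in this abstract is easy to make concrete. The snippet below is a minimal sketch assuming volume-fraction weighting of the compliances; variable names and the example values are illustrative.

        def upscaled_drained_bulk_modulus(K, phi):
            """Harmonic (volume-fraction weighted) average of drained bulk moduli.

            K   : drained bulk moduli of the multiple continua (Pa)
            phi : volume fractions, summing to 1
            Per the abstract, this average applies when the off-diagonal terms
            of the total compressibility matrix are zero.
            """
            assert abs(sum(phi) - 1.0) < 1e-12
            return 1.0 / sum(f / k for f, k in zip(phi, K))

        # e.g. a soft fracture continuum plus a stiff matrix continuum
        print(upscaled_drained_bulk_modulus([2e9, 20e9], [0.1, 0.9]))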

  19. A microkernel design for component-based parallel numerical software systems.

    SciTech Connect

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely--a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
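
    A rough Python analogue of the object model sketched in this abstract is given below: a shared base type carrying the life-cycle methods the text lists (reference counting, destruction, observation) and a loader that pulls an implementation in by name at run time. The ALICE microkernel itself is not Python, and the module and class names here are hypothetical placeholders.

        import importlib

        class KernelObject:
            """Minimal opaque object with the shared life-cycle methods the
            abstract lists: destruction, reference counting, and viewing."""
            def __init__(self):
                self._refs = 1
            def reference(self):
                self._refs += 1
            def destroy(self):
                self._refs -= 1
                if self._refs == 0:
                    print(f"{type(self).__name__}: freed")
            def view(self):
                print(f"{type(self).__name__}: refs={self._refs}")

        def load_component(module_name, class_name):
            """Load an object implementation at run time by name, a Python
            stand-in for the dynamic-link-library loading the microkernel
            provides (module_name/class_name are hypothetical)."""
            cls = getattr(importlib.import_module(module_name), class_name)
            if not issubclass(cls, KernelObject):
                raise TypeError("component must implement the shared object methods")
            return cls()

        obj = KernelObject()
        obj.reference()
        obj.view()       # refs=2
        obj.destroy()
        obj.destroy()    # freed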

  20. CCARES: A computer algorithm for the reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1993-01-01

    Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.

  1. Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.

    PubMed

    Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A

    1989-01-01

    Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be H_A = 0.90 MPa, ν_s = 0.39 and k = 0.44 × 10⁻¹⁵ m⁴/(N·s), respectively, and those for patellar groove cartilage to be H_A = 0.47 MPa, ν_s = 0.24 and k = 1.42 × 10⁻¹⁵ m⁴/(N·s). One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much smaller than the values determined from measurements on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condylar cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage from indentation experiments. PMID:2613721

  2. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
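
    The grouping step described here can be illustrated with a toy version of the compatibility enumeration. The sketch below assumes a hypothetical pairwise-compatibility matrix for four administrations and exhaustively lists the pairwise-compatible groups; NASARC's arc-segment bookkeeping and heuristic selection are omitted.

        import itertools

        # Hypothetical pairwise-compatibility matrix: compat[i][j] is True when
        # the satellites of administrations i and j can be closely co-located.
        compat = [
            [True,  True,  False, True],
            [True,  True,  True,  False],
            [False, True,  True,  True],
            [True,  False, True,  True],
        ]

        def compatible_groups(n):
            """All subsets of administrations that are pairwise compatible."""
            groups = []
            for r in range(2, n + 1):
                for g in itertools.combinations(range(n), r):
                    if all(compat[i][j] for i, j in itertools.combinations(g, 2)):
                        groups.append(g)
            return groups

        print(compatible_groups(4))   # -> [(0, 1), (0, 3), (1, 2), (2, 3)]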

  3. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  4. Numerical simulation of steady and unsteady viscous flow in turbomachinery using pressure based algorithm

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, B.; Ho, Y.; Basson, A.

    1993-07-01

    The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The amount of artificial dissipation is optimized to achieve accuracy and convergence of the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation is critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, and the upstream rotor wake is specified from experimental data. The results show that the stator potential effects have an appreciable influence on the upstream rotor wake

  5. Numerical simulation of steady and unsteady viscous flow in turbomachinery using pressure based algorithm

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Ho, Y.; Basson, A.

    1993-01-01

    The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The amount of artificial dissipation is optimized to achieve accuracy and convergence of the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation is critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, and the upstream rotor wake is specified from experimental data. The results show that the stator potential effects have an appreciable influence on the upstream rotor wake

  6. Four-component numerical simulation model of radiative convective interactions in large-scale oxygen-hydrogen turbulent fire balls

    SciTech Connect

    Surzhikov, S.T.

    1996-12-31

    A two-dimensional radiative gas dynamics model for numerical simulation of an oxygen-hydrogen fire ball, which may be generated by an explosion of a launch vehicle with cryogenic (LO₂-LH₂) fuel components, is presented. The following physical-chemical processes are taken into account in the numerical model: an effective chemical reaction between the gaseous components (O₂-H₂) of the propellant, turbulent mixing and diffusion of the components, and radiative heat transfer. The results of numerical investigations of the following problems are presented: the influence of radiative heat transfer on fire ball gas dynamics during the first 13 sec after the explosion, the effect of afterburning of the gaseous fuel components on fire ball gas dynamics, and the effect of turbulence on fire ball gas dynamics (in the framework of an algebraic model of turbulent mixing).

  7. Detailed numerical simulation of shock-body interaction in 3D multicomponent flow using the RKDG numerical method and the "DiamondTorre" GPU algorithm of implementation

    NASA Astrophysics Data System (ADS)

    Korneev, Boris; Levchenko, Vadim

    2016-02-01

    Interaction between a shock wave and an inhomogeneity in a fluid has complicated behavior, including vortex and turbulence generation, mixing, and shock wave scattering and reflection. In the present paper we deal with the numerical simulation of this process. The Euler equations of unsteady inviscid compressible three-dimensional flow are used within the four-equation model of multicomponent flow. These equations are discretized using the RKDG numerical method and implemented with the help of the DiamondTorre algorithm, yielding an efficient GPGPU solver with outstanding computing properties. With it we carry out several sets of numerical experiments on the shock-bubble interaction problem. The bubble deformation and mixture formation are observed.

  8. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-10-01

    In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as the objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-waveform integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields the model with faster convergence and higher precision than other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema found in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter results are relatively poorer but still sufficiently precise, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.
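
    The zeroth-lag cross-correlation objective the study found most effective can be written compactly; the sketch below is a generic normalized form evaluated on synthetic data, not the authors' exact implementation.

        import numpy as np

        def zeroth_lag_cc(obs, syn):
            """Normalized zero-lag cross-correlation between observed and synthetic
            three-component waveforms; higher means a better-fitting model."""
            obs, syn = np.ravel(obs), np.ravel(syn)
            return float(np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn)))

        rng = np.random.default_rng(0)
        w = rng.standard_normal((3, 512))             # toy 3-component record
        print(zeroth_lag_cc(w, w + 0.1 * rng.standard_normal(w.shape)))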

  9. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-09-01

    In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as the objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-waveform integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that using a zeroth-lag cross-correlation objective function yields the model with faster convergence and higher precision than other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema found in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter results are relatively poorer but still sufficiently precise, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.

  10. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

    NASA Technical Reports Server (NTRS)

    Spratlin, Kenneth Milton

    1987-01-01

    An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The motivation and design of the guidance algorithm are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

  11. Essential Oil of Artemisia annua L.: An Extraordinary Component with Numerous Antimicrobial Properties

    PubMed Central

    Bilia, Anna Rita; Sacco, Cristiana; Bergonzi, Maria Camilla; Donato, Rosa

    2014-01-01

    Artemisia annua L. (Asteraceae) is native to China, now naturalised in many other countries, and well known as the source of the unique sesquiterpene endoperoxide lactone artemisinin, used in the treatment of chloroquine-resistant and cerebral malaria. The essential oil is rich in mono- and sesquiterpenes and represents a by-product with medicinal properties. Although significant variations in its yield and composition have been reported (major constituents can be camphor (up to 48%), germacrene D (up to 18.9%), artemisia ketone (up to 68%), and 1,8-cineole (up to 51.5%)), the oil has been the subject of numerous studies supporting exciting antibacterial and antifungal activities. Both gram-positive bacteria (Enterococcus, Streptococcus, Staphylococcus, Bacillus, and Listeria spp.) and gram-negative bacteria (Escherichia, Shigella, Salmonella, Haemophilus, Klebsiella, and Pseudomonas spp.), as well as other microorganisms (Candida, Saccharomyces, and Aspergillus spp.), have been investigated. However, the experimental studies performed to date used different methods and diverse microorganisms; as a consequence, a comparative analysis on a quantitative basis is very difficult. The aim of this review is to sum up data on the antimicrobial activity of A. annua essential oil and its major components to facilitate future microbiological studies in this field. PMID:24799936

  12. A flexible numerical component to simulate surface runoff transport and biogeochemical processes through dense vegetation

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Perez-Ovilla, O.

    2012-12-01

    Methods to estimate surface runoff pollutant removal by dense vegetation buffers (i.e. vegetative filter strips) usually consider a limited number of factors (e.g. filter length, slope) and are in general based on empirical relationships. When an empirical approach is used, the application of the model is limited to the conditions of the data used to fit the regression equations. The objective of this work is to provide a flexible, mechanistic numerical tool to simulate the dynamics of a wide range of surface runoff pollutants through dense vegetation, and their physical, chemical and biological interactions, based on equations defined by the user as part of the model inputs. A flexible water quality model based on the Reaction Simulation Engine (RSE) modeling component is coupled to a transport module based on the traditional Bubnov-Galerkin finite element method to solve the advection-dispersion-reaction equation using the alternating split-operator technique. This coupled transport-reaction model is linked to the VFSMOD-W (http://abe.ufl.edu/carpena/vfsmod) program to mechanistically simulate mobile and stable pollutants through dense vegetation based on user-defined conceptual models (differential equations written in XML as input files). The key factors to consider in the creation of a conceptual model are the components in the buffer (i.e. vegetation, soil, sediments) and how the pollutant interacts with them. The biogeochemical reaction component was tested successfully with laboratory and field-scale experiments. One of the major advantages of this tool is that pollutant transport and removal through dense vegetation are related to the physical and biogeochemical processes occurring within the filter. This mechanistic approach extends the range of use of the model to a wide range of pollutants and conditions without modification of the core model. The strength of the model relies on the mechanistic approach used for simulating the removal of
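
    The alternating split-operator technique mentioned above can be illustrated with a 1D toy problem: transport (upwind advection plus central-difference dispersion) and a reaction term are advanced in alternating sub-steps. The first-order decay used here is a stand-in for the user-defined XML kinetics, boundary handling is simplified to periodic for brevity, and all parameter values are made up.

        import numpy as np

        nx, L = 200, 10.0
        dx = L / nx
        u, D, k = 0.05, 1e-3, 0.01          # velocity, dispersion, decay rate
        dt = 0.5 * min(dx / u, dx**2 / (2 * D))
        c = np.zeros(nx)
        c[:10] = 1.0                         # pollutant pulse entering the filter

        def transport_step(c):
            adv = -u * (c - np.roll(c, 1)) / dx                         # upwind advection
            disp = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2 # dispersion
            return c + dt * (adv + disp)

        def reaction_step(c):
            return c * np.exp(-k * dt)       # exact first-order decay sub-step

        for _ in range(200):                 # alternate transport and reaction
            c = reaction_step(transport_step(c))
        print("remaining mass:", c.sum() * dx)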

  13. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm.

    PubMed

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-10-01

    Microarray data have an important role in the identification and classification of cancer tissues. The scarcity of microarray samples in cancer research is a persistent concern that complicates classifier design. For this reason, preprocessing gene selection techniques should be utilized before classification to remove the noninformative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to decrease the dimension of microarray data. Using this selective algorithm, we can solve the instability problem that occurs when employing conventional independent component analysis (ICA) methods. First, the reconstruction error is analyzed and a selective set of independent components is chosen, namely those that contribute only a small part of the error when reconstructing new samples. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously. Eventually, the best sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied on three cancer datasets (leukemia, breast cancer and lung cancer datasets), and its results are compared with other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) has higher accuracy and validity. In particular, it exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset. PMID:25426433

  14. Development of a new signal processing algorithm based on independent component analysis for single channel ECG data.

    PubMed

    Lee, J; Lee, K J; Yoo, S K

    2004-01-01

    In this paper, we propose a new signal processing algorithm based on independent component analysis (ICA) for single-channel ECG data. To apply ICA to single-channel data, mixed (multi-channel) signals are constructed by adding delayed copies of the original data. ICA then yields an enhanced signal. To validate the usefulness of this signal, QRS complex detection was performed. In the QRS detection process, the Hilbert transform and wavelet transform were used, and good QRS detection efficacy was obtained. Furthermore, a signal that could not be filtered properly using existing algorithms also showed better enhancement. In future work, the algorithm needs to be optimized and simplified. PMID:17271650
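
    The delay-embedding trick described in this abstract can be sketched in a few lines: delayed copies of the single ECG lead form a pseudo multi-channel matrix to which standard ICA is applied. The example below uses scikit-learn's FastICA on a synthetic QRS-like signal; the component-selection heuristic (largest excess kurtosis) is an assumption for the demo, not the authors' criterion.

        import numpy as np
        from sklearn.decomposition import FastICA

        def delay_embed(x, n_delays=4):
            """Stack delayed copies of one channel into a pseudo multi-channel
            matrix so that single-channel ICA becomes possible."""
            rows = [np.roll(x, d) for d in range(n_delays)]
            return np.column_stack(rows)[n_delays:]      # drop wrapped samples

        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 4000)
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 15          # crude QRS-like spikes
        noisy = ecg + 0.3 * rng.standard_normal(t.size)

        X = delay_embed(noisy)
        sources = FastICA(n_components=4, random_state=0).fit_transform(X)

        # Pick the most spiky (highest excess kurtosis) component as the ECG.
        def excess_kurtosis(s):
            return (s**4).mean() / (s**2).mean() ** 2 - 3

        best = max(range(4), key=lambda i: abs(excess_kurtosis(sources[:, i])))
        print("selected component:", best)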

  15. A Bayesian Approach to Estimating Coupling Between Neural Components: Evaluation of the Multiple Component, Event-Related Potential (mcERP) Algorithm

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed as well as ongoing work to incorporate more detailed prior information.

  16. Numerical Study of Three-Dimensional Flows Using Unfactored Upwind-Relaxation Sweeping Algorithm

    NASA Astrophysics Data System (ADS)

    Zha, G.-C.; Bilgen, E.

    1996-05-01

    The linear stability analysis of the unfactored upwind relaxation-sweeping (URS) algorithm for 3D flow field calculations has been carried out, and it is shown that the URS algorithm is unconditionally stable. The algorithm is independent of the choice of global sweeping direction. However, choosing the direction with a relatively low variable gradient as the global sweeping direction results in a higher degree of stability. Three-dimensional compressible Euler equations are solved using the implicit URS algorithm to study internal flows of a non-axisymmetric nozzle with a circular-to-rectangular transition duct and complex shock wave structures in a 3D channel flow. The efficiency and robustness of the URS algorithm have been demonstrated.

  17. Study of Groundwater Resources Components in the North China Plain based on Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Shao, J.

    2015-12-01

    Over-exploitation of groundwater and the induced environmental problems in the North China Plain (NCP) have drawn increasing concern. Here, we chose three typical hydrogeological units in the NCP: the Hutuo River alluvial fan (HR), the Tianjin Plain in the central alluvial fan (TJ), and the Yellow River aquifer system (YR). Relying on numerical groundwater models built with MODFLOW, the water balances were calculated and analyzed, especially to quantify the individual recharge and discharge terms. Specifically, (1) in the HR, both a natural steady-state flow model and a transient flow model under human activities were implemented. Results indicated the groundwater level decreased by around 40 m under extensive exploitation, and the total recharge rate, discharge rate, and over-exploitation rate were calculated. (2) In the TJ, a coupled groundwater and land subsidence model was established, from which the maximum subsidence rate and the decrease of the groundwater level were estimated. (3) In the YR, the exploitation rate of the groundwater and the recharge rate of the aquifer from the Yellow River were calculated. We found large differences among the components of groundwater recharge of the three typical hydrogeological units. Human activities have a clear effect on the recharge and discharge processes. Thus, rational development and protection policies should be issued. In the piedmont alluvial fan, the groundwater was severely over-exploited; therefore, reduction of groundwater exploitation and artificial groundwater recharge are needed to balance recharge and discharge. In the middle alluvial fan of the NCP, the confined aquifer has been over-exploited, resulting in regional land subsidence. This suggests that withdrawal from the confined aquifer should be strictly limited, especially where alternative water resources are accessible. In the hydrogeological unit of the YR, the groundwater storage is potentially large for exploitation.

  18. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with the ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented. PMID:27025665
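
    ML-ESCA's physics model is beyond a short sketch, but its iterative core is a maximum-likelihood EM update for Poisson projection data. A generic form of that update on a toy system matrix is shown below; the scatter-specific forward model, energy spectrum and detector response of the actual algorithm are omitted.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Generic ML-EM update for Poisson data y ~ Poisson(A x):
            x <- x * (A^T (y / (A x))) / (A^T 1)."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])             # sensitivity normalization
            for _ in range(n_iter):
                ratio = y / np.clip(A @ x, 1e-12, None)  # guard against divide-by-zero
                x *= (A.T @ ratio) / sens
            return x

        rng = np.random.default_rng(2)
        A = rng.random((60, 20))                         # toy system matrix
        x_true = rng.random(20)
        y = rng.poisson(A @ x_true * 100) / 100.0        # noisy projections
        print(np.round(mlem(A, y)[:5], 3))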

  19. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with the ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  20. Novel materials, fabrication techniques and algorithms for microwave and THz components, systems and applications

    NASA Astrophysics Data System (ADS)

    Liang, Min

    This dissertation presents the investigation of several additive manufactured components in RF and THz frequency, as well as the applications of gradient index lens based direction of arrival (DOA) estimation system and broadband electronically beam scanning system. Also, a polymer matrix composite method to achieve artificially controlled effective dielectric properties for 3D printing material is studied. Moreover, the characterization of carbon based nano-materials at microwave and THz frequency, photoconductive antenna array based Terahertz time-domain spectroscopy (THz-TDS) near field imaging system, and a compressive sensing based microwave imaging system is discussed in this dissertation. First, the design, fabrication and characterization of several 3D printed components in microwave and THz frequency are presented. These components include 3D printed broadband Luneburg lens, 3D printed patch antenna, 3D printed multilayer microstrip line structure with vertical transition, THz all-dielectric EMXT waveguide to planar microstrip transition structure and 3D printed dielectric reflectarrays. Second, the additive manufactured 3D Luneburg Lens is employed for DOA estimation application. Using the special property of a Luneburg lens that every point on the surface of the Lens is the focal point of a plane wave incident from the opposite side, 36 detectors are mounted around the surface of the lens to estimate the direction of arrival (DOA) of a microwave signal. The direction finding results using a correlation algorithm show that the averaged error is smaller than 1° for all 360 degree incident angles. Third, a novel broadband electronic scanning system based on Luneburg lens phased array structure is reported. The radiation elements of the phased array are mounted around the surface of a Luneburg lens. By controlling the phase and amplitude of only a few adjacent elements, electronic beam scanning with various radiation patterns can be easily achieved

  1. NUMERICAL ALGORITHMS AT NON-ZERO CHEMICAL POTENTIAL. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 19

    SciTech Connect

    BLUM,T.

    1999-09-14

    The RIKEN BNL Research Center hosted its 19th workshop from April 27 through May 1, 1999. The topic was Numerical Algorithms at Non-Zero Chemical Potential. QCD at a non-zero chemical potential (non-zero density) poses a long-standing unsolved challenge for lattice gauge theory. Indeed, it is the primary unresolved issue in the fundamental formulation of lattice gauge theory. The chemical potential renders conventional lattice actions complex, practically excluding the usual Monte Carlo techniques, which rely on a positive definite measure for the partition function. This "sign" problem appears in a wide range of physical systems, ranging from strongly coupled electronic systems to QCD. The lack of a viable numerical technique at non-zero density is particularly acute since new exotic "color superconducting" phases of quark matter have recently been predicted in model calculations. A first-principles confirmation of the phase diagram is desirable since experimental verification is not expected soon. At the workshop several proposals for new algorithms were made: cluster algorithms, direct simulation of Grassmann variables, and a bosonization of the fermion determinant. All generated considerable discussion and seem worthy of continued investigation. Several interesting results using conventional algorithms were also presented: condensates in four-fermion models, SU(2) gauge theory in fundamental and adjoint representations, and lessons learned from strong coupling, non-zero temperature and heavy quarks applied to non-zero density simulations.

  2. Application of two oriented partial differential equation filtering models on speckle fringes with poor quality and their numerically fast algorithms.

    PubMed

    Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng

    2013-03-20

    In this paper, we are concerned with denoising of experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns of poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single-oriented PDE filter and the double-oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns of very poor quality, and compare them with other efficient filtering methods, including the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have previously been shown to be efficient denoising methods in comparative analyses in published papers. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. Then, to address the main shortcoming of the two oriented PDE models, we develop numerically fast algorithms for them based on a Gauss-Seidel strategy. The proposed numerical algorithms accelerate convergence greatly and perform significantly better in terms of computational efficiency. Our numerically fast algorithms extend automatically to some other PDE filtering models. PMID:23518722
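
    The Gauss-Seidel strategy credited with the speed-up is the standard in-place sweep in which each unknown is updated using the freshest neighbouring values. The sketch below applies it to a generic diagonally dominant linear system rather than to the oriented-PDE discretization itself.

        import numpy as np

        def gauss_seidel(A, b, n_iter=200):
            """Gauss-Seidel sweeps: update each unknown in place using the most
            recently computed values of the others."""
            x = np.zeros_like(b, dtype=float)
            for _ in range(n_iter):
                for i in range(len(b)):
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            return x

        # Diagonally dominant test system (a discretized 1D diffusion operator).
        n = 20
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
        b = np.ones(n)
        print(np.allclose(A @ gauss_seidel(A, b), b, atol=1e-6))   # True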

  3. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA) -based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported unmixing algorithm. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  4. Essentially entangled component of multipartite mixed quantum states, its properties, and an efficient algorithm for its extraction

    NASA Astrophysics Data System (ADS)

    Akulin, V. M.; Kabatiansky, G. A.; Mandilara, A.

    2015-10-01

    Using geometric means, we first consider a density matrix decomposition of a multipartite quantum system of finite dimension into two density matrices: a separable one, also known as the best separable approximation, and an essentially entangled one, which contains no product state components. We show that this convex decomposition can be achieved in practice with the help of a linear programming algorithm that in the general case scales polynomially with the system dimension. We illustrate the algorithm implementation with an example of a composite system of dimension 12 that undergoes a loss of coherence due to classical noise, and we trace the time evolution of its essentially entangled component. We suggest a "geometric" description of entanglement dynamics and demonstrate how it explains the well-known phenomena of sudden death and revival of multipartite entanglement. Although the statistical weight of the essentially entangled component decreases with time, its average entanglement content is not affected by the coherence loss.

  5. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory accesses directly affects the time consumption of labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory accesses. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597

  6. Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.

    2003-01-01

    A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equation which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent, supersonic, MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.

  8. Structure of the Gabor matrix and efficient numerical algorithms for discrete Gabor expansions

    NASA Astrophysics Data System (ADS)

    Qiu, Sigang; Feichtinger, Hans G.

    1994-09-01

    The standard way to obtain suitable coefficients for the (non-orthogonal) Gabor expansion of a general signal, for a given Gabor atom g and a pair of lattice constants in the (discrete) time/frequency plane, requires computing the dual Gabor window function g̃ first. In this paper, we present an explicit description of the sparsity and the block and banded structure of the Gabor frame matrix G. On this basis, efficient algorithms are developed for computing g̃ by solving the linear equation g̃ ∗ G = g with the conjugate-gradient method. Using the dual Gabor wavelet, a fast Gabor reconstruction algorithm with very low computational complexity is proposed.
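
    The conjugate-gradient step is the computational core here. Below is a textbook CG solver applied to a random symmetric positive-definite stand-in for the Gabor frame matrix; the banded, sparse structure that the paper exploits is ignored for brevity.

        import numpy as np

        def conjugate_gradient(G, g, tol=1e-10):
            """Solve G x = g for symmetric positive-definite G."""
            x = np.zeros_like(g)
            r = g - G @ x                    # residual
            p = r.copy()                     # search direction
            rs = r @ r
            for _ in range(len(g)):
                Gp = G @ p
                alpha = rs / (p @ Gp)
                x += alpha * p
                r -= alpha * Gp
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        rng = np.random.default_rng(3)
        M = rng.standard_normal((30, 30))
        G = M @ M.T + 30 * np.eye(30)        # SPD stand-in for a frame matrix
        g = rng.standard_normal(30)
        print(np.allclose(G @ conjugate_gradient(G, g), g))   # True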

  9. A numerical algorithm suggested by problems of transport in periodic media - The matrix case.

    NASA Technical Reports Server (NTRS)

    Allen, R. C., Jr.; Burgmeier, J. W.; Mundorff, P.; Wing, G. M.

    1972-01-01

    Extension of Allen and Wing's (1970) previous work on problems of transport in periodic media to the matrix case. A method in the form of a complete set of equations is presented that may be used, without any further analytical work, by investigators interested in computing solutions to problems of the type the method is designed to handle. All the formulas have been checked numerically, and their effectiveness is demonstrated by numerical examples.

  10. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    NASA Astrophysics Data System (ADS)

    Bor, E.; Turduev, M.; Kurt, H.

    2016-08-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
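
    A compact version of the classic DE/rand/1/bin scheme underlying such designs is sketched below. It minimizes a toy scalar cost; the paper's FDTD-coupled multiobjective cost and all settings here (population size, F, CR, bounds) are illustrative assumptions.

        import numpy as np

        def differential_evolution(f, bounds, np_=30, F=0.7, CR=0.9, n_gen=200, seed=4):
            """DE/rand/1/bin: mutate with a scaled difference of two random members,
            crossover binomially, and keep the trial if it is no worse."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds.T
            pop = rng.uniform(lo, hi, (np_, len(lo)))
            cost = np.array([f(x) for x in pop])
            for _ in range(n_gen):
                for i in range(np_):
                    idx = rng.choice([j for j in range(np_) if j != i], 3, replace=False)
                    a, b, c = pop[idx]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(len(lo)) < CR
                    cross[rng.integers(len(lo))] = True   # force at least one gene
                    trial = np.where(cross, mutant, pop[i])
                    if (tc := f(trial)) <= cost[i]:
                        pop[i], cost[i] = trial, tc
            return pop[cost.argmin()], cost.min()

        # Toy cost loosely echoing "main lobe plus side-lobe penalty".
        cost = lambda x: (x**2).sum() + 0.1 * np.abs(x).sum()
        x, c = differential_evolution(cost, np.array([[-2.0, 2.0]] * 5))
        print(np.round(x, 3), round(c, 6))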

  11. Fundamental form of the electrostatic δf-PIC algorithm and discovery of a converged numerical instability

    NASA Astrophysics Data System (ADS)

    Wilkie, George J.; Dorland, William

    2016-05-01

    The δf particle-in-cell algorithm has been a useful tool in studying the physics of plasmas, particularly turbulent magnetized plasmas in the context of gyrokinetics. The reduction in noise due to not having to resolve the full distribution function gives it an efficiency advantage over standard ("full-f") particle-in-cell. Despite its successes, the algorithm behaves strangely in some circumstances. In this work, we document a fully resolved numerical instability that occurs in the simplest of multiple-species test cases: the electrostatic ΩH mode. There is also a poorly understood numerical instability that occurs when one is under-resolved in particle number, which may require a prohibitively large number of particles to stabilize. Both instabilities are independent of the time-stepping scheme, and we conclude that they would exist even if the time advancement were exact. The exact analytic form of the algorithm is presented, and several schemes for mitigating these instabilities are also presented.

  12. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light.

    PubMed

    Bor, E; Turduev, M; Kurt, H

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  13. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    PubMed Central

    Bor, E.; Turduev, M.; Kurt, H.

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  14. Global Observations of SO2 and HCHO Using an Innovative Algorithm based on Principal Component Analysis of Satellite Radiance Data

    NASA Astrophysics Data System (ADS)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay; Fioletov, Vitali; McLinden, Chris

    2015-04-01

    We report on the latest progress in the development and application of a new trace gas retrieval algorithm for spaceborne UV-VIS spectrometers. Developed at NASA Goddard Space Flight Center, this algorithm utilizes the principal component analysis (PCA) technique to extract a series of spectral features (principal components or PCs) explaining the variance of measured reflectance spectra. For a species of interest that has no or very small background signal, such as SO2 or HCHO, the leading PCs (those that explain the most variance) obtained over clean areas are generally associated with physical processes (e.g., ozone absorption, rotational Raman scattering) and measurement details (e.g., wavelength shift) other than the signal of interest. By fitting these PCs and pre-computed Jacobians for the target species to a measured radiance spectrum, we can then estimate its atmospheric loading. The PCA algorithm has been operationally implemented to produce the new-generation NASA Aura/OMI standard planetary boundary layer (PBL) SO2 product. Comparison with the previous OMI PBL SO2 product indicates that the PCA algorithm reduces the retrieval noise by a factor of two and greatly improves the data quality, allowing detection of smaller point SO2 pollution sources that have not previously been measured from space. We have also demonstrated the algorithm for SO2 retrievals using the new NASA/NOAA S-NPP/OMPS UV spectrometer. For HCHO, the new algorithm shows great promise, as evidenced by results obtained from both OMI and OMPS. Finally, we discuss the most recent progress in the algorithm development, including the implementation of a new Jacobian lookup table to more appropriately account for the sensitivity of satellite sensors to various measurement conditions (e.g., viewing geometry, surface reflectance and cloudiness).
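
    The fitting step described above (leading PCs from clean-area spectra plus a precomputed Jacobian, fit jointly to a measured spectrum) reduces to linear least squares. The sketch below uses entirely synthetic spectra and a Gaussian fake absorption signature; it shows the algebra, not the operational OMI processing.

        import numpy as np

        rng = np.random.default_rng(5)
        n_wl, n_clean = 80, 500
        # Synthetic "clean-area" radiance spectra and their leading PCs.
        clean = rng.standard_normal((n_clean, n_wl)) @ rng.standard_normal((n_wl, n_wl)) * 0.01
        pcs = np.linalg.svd(clean - clean.mean(0), full_matrices=False)[2][:8]
        # Fake gas absorption signature standing in for the precomputed Jacobian.
        jacobian = np.exp(-0.5 * ((np.arange(n_wl) - 40) / 5.0) ** 2)

        true_loading = 0.7
        measured = true_loading * jacobian + pcs.T @ rng.standard_normal(8) * 0.05

        basis = np.vstack([pcs, jacobian]).T           # columns: PCs + Jacobian
        coeffs = np.linalg.lstsq(basis, measured, rcond=None)[0]
        print("retrieved loading:", round(coeffs[-1], 3))   # close to 0.7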

  15. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  16. A New Model for Redundancy Allocation Problem in Series Systems with Repairable Components by Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Sharifi, Mani; Rezaei Moayed, Reza; Haratizadeh, Sara

    2011-09-01

    This paper presents two models for the redundancy allocation problem (RAP) with a cold-standby redundancy policy subject to weight and cost constraints. Each element of the system fails according to an exponential distribution, and damaged elements are repaired, also exponentially, by hiring repairmen. The problem is to determine: (1) the element type used in the system, (2) the number of elements, and (3) the number of repairmen. As the models are not solvable by exact solution methods in reasonable CPU time, an efficient genetic algorithm (GA) is developed for them. The GA is hybridized with a local search procedure, and it accepts infeasible solutions after penalizing them based on their amounts of infeasibility. By using these two features, an efficient genetic algorithm is obtained.
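
    The feature the abstract highlights, accepting infeasible solutions after penalizing them by their amount of infeasibility, can be sketched in a few lines. The constraint forms, penalty weight and objective below are assumptions for illustration, not the paper's model.

```python
def penalized_fitness(reliability, weight, cost, w_max, c_max, penalty=10.0):
    """GA fitness with penalty handling: infeasible candidates stay in
    the population but are ranked down in proportion to how far they
    violate the weight and cost budgets (forms and weights assumed)."""
    violation = max(0.0, weight - w_max) + max(0.0, cost - c_max)
    return reliability - penalty * violation

# A candidate slightly over the weight budget remains comparable,
# just ranked below feasible candidates of similar reliability.
print(penalized_fitness(0.95, weight=105, cost=80, w_max=100, c_max=90))
```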

  17. Finite-element algorithm for radiative transfer in vertically inhomogeneous media: numerical scheme and applications.

    PubMed

    Kisselev, V B; Roberti, L; Perona, G

    1995-12-20

    The recently developed finite-element method for solution of the radiative transfer equation has been extended to compute the full azimuthal dependence of the radiance in a vertically inhomogeneous plane-parallel medium. The physical processes that are included in the algorithm are multiple scattering and bottom boundary bidirectional reflectivity. The incident radiation is a parallel flux on the top boundary that is characteristic for illumination of the atmosphere by the Sun in the UV, visible, and near-infrared regions of the electromagnetic spectrum. The theoretical basis is presented together with a number of applications to realistic atmospheres. The method is shown to be accurate even with a low number of grid points for most of the considered situations. The FORTRAN code for this algorithm is developed and is available for applications. PMID:21068966

  18. Finite-element algorithm for radiative transfer in vertically inhomogeneous media: numerical scheme and applications

    NASA Astrophysics Data System (ADS)

    Kisselev, Viatcheslav B.; Roberti, Laura; Perona, Giovanni

    1995-12-01

    The recently developed finite-element method for solution of the radiative transfer equation has been extended to compute the full azimuthal dependence of the radiance in a vertically inhomogeneous plane-parallel medium. The physical processes that are included in the algorithm are multiple scattering and bottom boundary bidirectional reflectivity. The incident radiation is a parallel flux on the top boundary that is characteristic for illumination of the atmosphere by the Sun in the UV, visible, and near-infrared regions of the electromagnetic spectrum. The theoretical basis is presented together with a number of applications to realistic atmospheres. The method is shown to be accurate even with a low number of grid points for most of the considered situations. The FORTRAN code for this algorithm is developed and is available for applications.

  19. Parallel technology for numerical modeling of fluid dynamics problems by high-accuracy algorithms

    NASA Astrophysics Data System (ADS)

    Gorobets, A. V.

    2015-04-01

    A parallel computation technology for modeling fluid dynamics problems by finite-volume and finite-difference methods of high accuracy is presented. The development of an algorithm, the design of a software implementation, and the creation of parallel programs for computations on large-scale computing systems are considered. The presented parallel technology is based on a multilevel parallel model combining various types of parallelism: shared and distributed memory, and multiple as well as single instruction streams over multiple data flows (MIMD/SIMD).

  20. Numerical solutions of the reaction diffusion system by using exponential cubic B-spline collocation algorithms

    NASA Astrophysics Data System (ADS)

    Ersoy, Ozlem; Dag, Idris

    2015-12-01

    The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential cubic B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, which is solved by way of the Thomas algorithm. The present methods are tested on both linear and nonlinear problems, and the results are compared with earlier studies using the L∞ and relative error norms, respectively.
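
    The banded systems produced by the collocation are solved with the Thomas algorithm; the sketch below shows the standard tridiagonal version of that solver together with a check against a dense solve. The actual collocation matrices are banded with a wider stencil, so this is a simplified stand-in.

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with sub-diagonal a,
    diagonal b, super-diagonal c and right-hand side d in O(n)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a small 1D diffusion-like matrix
n = 6
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))  # True
```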

  1. Scanning of wind turbine upwind conditions: numerical algorithm and first applications

    NASA Astrophysics Data System (ADS)

    Calaf, Marc; Cortina, Gerard; Sharma, Varun; Parlange, Marc B.

    2014-11-01

    Wind turbines still obtain in-situ meteorological information by means of traditional wind vanes and cup anemometers installed at the turbine's nacelle, right behind the blades. This has two important drawbacks: (1) turbine misalignment with the mean wind direction is common, and energy losses are experienced; (2) the near-blade monitoring does not provide any time to readjust the profile of the wind turbine to incoming turbulence gusts. A solution is to install wind Lidar devices on the turbine's nacelle. This technique is currently under development as an alternative to traditional in-situ wind anemometry because it can measure the wind vector at substantial distances upwind. However, at what upwind distance should it interrogate the atmosphere? A new flexible wind turbine algorithm for large eddy simulations of wind farms that allows this question to be answered will be presented. The new wind turbine algorithm promptly corrects the turbines' yaw misalignment with the changing wind. The upwind scanning flexibility of the algorithm also allows tracking of the wind vector and turbulent kinetic energy as they approach the wind turbine's rotor blades. Results will illustrate the spatiotemporal evolution of the wind vector and the turbulent kinetic energy as the incoming flow approaches the wind turbine under different atmospheric stability conditions. Results will also show that the available atmospheric wind power is larger during daytime periods, at the cost of an increased variance.

  2. A pseudo-spectral algorithm and test cases for the numerical solution of the two-dimensional rotating Green-Naghdi shallow water equations

    NASA Astrophysics Data System (ADS)

    Pearce, J. D.; Esler, J. G.

    2010-10-01

    A pseudo-spectral algorithm is presented for the solution of the rotating Green-Naghdi shallow water equations in two spatial dimensions. The equations are first written in vorticity-divergence form, in order to exploit the fact that time-derivatives then appear implicitly in the divergence equation only. A nonlinear equation must then be solved at each time-step in order to determine the divergence tendency. The nonlinear equation is solved by means of a simultaneous iteration in spectral space to determine each Fourier component. The key to the rapid convergence of the iteration is the use of a good initial guess for the divergence tendency, which is obtained from polynomial extrapolation of the solution obtained at previous time-levels. The algorithm is therefore best suited to be used with a standard multi-step time-stepping scheme (e.g. leap-frog). Two test cases are presented to validate the algorithm for initial value problems on a square periodic domain. The first test verifies cnoidal wave speeds in one dimension against analytical results. The second test ensures that the Miles-Salmon potential vorticity is advected as a parcel-wise conserved tracer throughout the nonlinear evolution of a perturbed jet subject to shear instability. The algorithm is demonstrated to perform well in each test. The resulting numerical model is expected to be of use in identifying paradigmatic behavior in mesoscale flows in the atmosphere and ocean in which vortical, nonlinear and dispersive effects are all important.
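
    The convergence trick described, starting the spectral iteration from a polynomial extrapolation of previous time levels, can be shown generically. In the sketch below, G stands in for the Fourier-space update of the divergence tendency; the quadratic extrapolation order and the toy contraction map are assumptions.

```python
import numpy as np

def extrapolated_guess(history):
    """Polynomial extrapolation of the last few time levels to the new
    level; this is what keeps the per-step iteration count low."""
    k = len(history)
    t = np.arange(k, dtype=float)
    coeffs = np.polyfit(t, np.asarray(history), deg=min(k - 1, 2))
    return np.polyval(coeffs, float(k))

def solve_tendency(G, history, tol=1e-12, max_iter=100):
    """Fixed-point iteration x = G(x) for the new-level tendency,
    started from the extrapolated guess (generic stand-in for the
    simultaneous spectral iteration)."""
    x = extrapolated_guess(history)
    for _ in range(max_iter):
        x_new = G(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration failed to converge")

# Toy usage: a contractive map whose fixed point (2.0) the previous
# time levels are drifting toward
G = lambda x: 0.5 * x + 1.0
print(solve_tendency(G, [1.90, 1.95, 1.99]))
```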

  3. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  4. A component-level failure detection and identification algorithm based on open-loop and closed-loop state estimators

    NASA Astrophysics Data System (ADS)

    You, Seung-Han; Cho, Young Man; Hahn, Jin-Oh

    2013-04-01

    This study presents a component-level failure detection and identification (FDI) algorithm for a cascade mechanical system subsuming a plant driven by an actuator unit. The novelty of the FDI algorithm presented in this study is that it is able to discriminate between failures occurring in the actuator unit, the sensor measuring the output of the actuator unit, and the plant driven by the actuator unit. The proposed FDI algorithm exploits the measurement of the actuator unit output together with its estimates generated by open-loop (OL) and closed-loop (CL) estimators to enable FDI at the component level. In this study, the OL estimator is designed based on the system identification of the actuator unit. The CL estimator, which is guaranteed to be stable against variations in the plant, is synthesized based on the dynamics of the entire cascade system. The viability of the proposed algorithm is demonstrated using a hardware-in-the-loop simulation (HILS), which shows that it can detect and identify target failures reliably in the presence of plant uncertainties.
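
    One way to picture the component-level discrimination is as a decision table over the two residuals, i.e. the measured actuator output minus its OL and CL estimates respectively. The table below is one plausible reading of the abstract, for illustration only; the paper's actual residual logic and thresholds are not given here.

```python
def classify_failure(r_ol, r_cl, th_ol=0.1, th_cl=0.1):
    """Map the open-loop and closed-loop residuals of the actuator-output
    measurement to a component-level diagnosis (hypothetical decision
    table, not the paper's exact logic)."""
    ol_bad, cl_bad = abs(r_ol) > th_ol, abs(r_cl) > th_cl
    if ol_bad and cl_bad:
        return "actuator or sensor failure"  # both estimates disagree with y
    if cl_bad:
        return "plant failure"               # only the plant-dependent estimate drifts
    if ol_bad:
        return "actuator-model mismatch"     # OL model no longer tracks
    return "nominal"

print(classify_failure(0.02, 0.35))          # -> plant failure
```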

  5. On the modeling of equilibrium twin interfaces in a single-crystalline magnetic shape memory alloy sample. II: numerical algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Steinmann, Paul

    2016-05-01

    This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.

  6. Numerical model a graphene component for the sensing of weak electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Nasswettrova, A.; Fiala, P.; Nešpor, D.; Drexler, P.; Steinbauer, M.

    2015-05-01

    The paper discusses a numerical model and provides an analysis of a graphene coaxial line suitable for sub-micron sensors of magnetic fields. In relation to the presented concept, the target areas and disciplines include biology, medicine, prosthetics, and microscopic solutions for modern actuators or SMART elements. The proposed numerical model is based on an analysis of a periodic structure with high repeatability, and it exploits a graphene polymer having a basic dimension in nanometers. The model simulates the actual random motion in the structure as the source of spurious signals and considers the pulse propagation along the structure; furthermore, the model also examines whether and how the pulse will be distorted at the beginning of the line, given the various ending versions. The results of the analysis are necessary for further use of the designed sensing devices based on graphene structures.

  7. Numerical Simulation of Sintering Process in Ceramic Powder Injection Moulded Components

    SciTech Connect

    Song, J.; Barriere, T.; Gelin, J. C.

    2007-05-17

    A phenomenological model based on viscoplastic constitutive law is presented to describe the sintering process of ceramic components obtained by powder injection moulding. The parameters entering in the model are identified through sintering experiments in dilatometer with the proposed optimization method. The finite element simulations are carried out to predict the density variations and dimensional changes of the components during sintering. A simulation example on the sintering process of hip implant in alumina has been conducted. The simulation results have been compared with the experimental ones. A good agreement is obtained.

  8. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
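
    The study design, mixing known sources, unmixing with ICA and scoring the separation, is easy to reproduce in miniature. The sketch below uses scikit-learn's FastICA on synthetic mixtures; the QRS-like stand-in waveforms, the 19-channel mixing and the correlation-based SIR proxy are all illustrative assumptions, not the paper's exact signals or metric.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 10, n)
# Crude stand-ins for fMCG constituents
S = np.c_[np.sign(np.sin(2 * np.pi * 1.2 * t)),   # maternal cardiac (QRS-like)
          np.sign(np.sin(2 * np.pi * 2.1 * t)),   # fetal cardiac (QRS-like)
          np.sin(2 * np.pi * 0.3 * t),            # maternal respiration
          rng.normal(size=n)]                     # sensor noise
A = rng.normal(size=(19, 4))                      # mixing into a 19-sensor cluster
X = S @ A.T

S_hat = FastICA(n_components=4, random_state=0, max_iter=1000).fit_transform(X)

def sir_db(s, s_hat):
    """Correlation-based proxy for the signal-to-interference ratio."""
    rho = abs(np.corrcoef(s, s_hat)[0, 1])
    return 10 * np.log10(rho**2 / (1 - rho**2 + 1e-12))

for j in range(4):
    best = max(sir_db(S[:, j], S_hat[:, k]) for k in range(4))
    print(f"source {j}: best-matching SIR = {best:.1f} dB")
```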

  9. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm is convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  10. An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization

    PubMed Central

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach obtained by hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed to solve function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating fireflies. The HS/FA method is verified by various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137
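
    A compact sketch of such a hybrid is given below: only the brightest fireflies attract (the "top fireflies" scheme), and HS-style memory consideration with pitch adjustment mutates positions between firefly moves. All parameter values, and the exact way the two heuristics are interleaved, are illustrative assumptions rather than the paper's HS/FA.

```python
import numpy as np

def hs_fa(f, dim=10, n_fireflies=20, n_top=5, iters=200, beta0=1.0,
          gamma=1.0, alpha=0.25, hmcr=0.9, par=0.3, lo=-5.0, hi=5.0, seed=0):
    """Firefly/harmony-search hybrid sketch: top-firefly attraction plus
    HS memory recombination (parameters illustrative)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_fireflies, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        X = X[np.argsort(fit)]                    # brightest first
        for i in range(n_fireflies):
            for j in range(min(n_top, i)):        # move only toward top fireflies
                beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
        for i in range(1, n_fireflies):           # HS mutation, elite kept
            for d in range(dim):
                if rng.random() < hmcr:           # memory consideration
                    X[i, d] = X[rng.integers(n_fireflies), d]
                    if rng.random() < par:        # pitch adjustment
                        X[i, d] += 0.1 * (rng.random() - 0.5)
                else:                             # random re-initialization
                    X[i, d] = rng.uniform(lo, hi)
        np.clip(X, lo, hi, out=X)
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))
_, best = hs_fa(sphere)
print(best)   # orders of magnitude below a random start (~80 for this box)
```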

  11. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.
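
    The block Gauss elimination underlying substructuring can be sketched for two blocks: eliminate the interior unknowns, solve the Schur complement system on the interface, and back-substitute. The sketch assumes a well-conditioned pivot block; the paper's actual contribution, proceeding without interchanges even when a pivot matrix is singular, is precisely what this simple version omits.

```python
import numpy as np

def substructure_solve(A11, A12, A21, A22, f1, f2):
    """Two-block elimination: interior unknowns x1 condensed out,
    interface unknowns x2 obtained from the Schur complement."""
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_f1 = np.linalg.solve(A11, f1)
    S = A22 - A21 @ A11_inv_A12                   # Schur complement
    x2 = np.linalg.solve(S, f2 - A21 @ A11_inv_f1)
    x1 = A11_inv_f1 - A11_inv_A12 @ x2
    return x1, x2

# Toy nonsymmetric check against a dense solve
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6)) + 6 * np.eye(6)       # nonsymmetric, well conditioned
f = rng.normal(size=6)
x1, x2 = substructure_solve(A[:4, :4], A[:4, 4:], A[4:, :4], A[4:, 4:], f[:4], f[4:])
print(np.allclose(np.r_[x1, x2], np.linalg.solve(A, f)))   # True
```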

  12. Coupled of thermal-mechanical-transformation numerical simulation on hot stamping with static explicit algorithm

    NASA Astrophysics Data System (ADS)

    Hu, P.; Shi, D. Y.; Ying, L.; Shen, G. Z.; Chang, Y.; Liu, W. Q.

    2013-05-01

    A coupled thermal-mechanical-transformation theoretical model for hot stamping and the rheological behavior of high-strength steel at elevated temperatures were obtained through non-isothermal and isothermal tensile tests, respectively. The static explicit finite element equations for hot stamping were proposed based on thermal-mechanical-transformation coupled constitutive laws and nonlinear, large-deformation analysis. According to these equations, the hot stamping module of KMAS (King Mesh Analysis System) was developed for the numerical simulation of sheet metal forming at elevated temperatures. Afterwards, the hot stamping simulation of a typical B-pillar conducted with the KMAS software was compared to experiment in terms of temperature distribution, thickness distribution and martensite fraction. The good agreement between numerical simulation and experiment confirms that the multi-field coupled constitutive laws and the KMAS software can predict the hot stamping process accurately.

  13. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media.

    PubMed

    Stamnes, K; Tsay, S C; Wiscombe, W; Jayaweera, K

    1988-06-15

    We summarize an advanced, thoroughly documented, and quite general purpose discrete ordinate algorithm for time-independent transfer calculations in vertically inhomogeneous, nonisothermal, plane-parallel media. Atmospheric applications ranging from the UV to the radar region of the electromagnetic spectrum are possible. The physical processes included are thermal emission, scattering, absorption, and bidirectional reflection and emission at the lower boundary. The medium may be forced at the top boundary by parallel or diffuse radiation and by internal and boundary thermal sources as well. We provide a brief account of the theoretical basis as well as a discussion of the numerical implementation of the theory. The recent advances made by ourselves and our collaborators, in both formulation and numerical solution, are all incorporated in the algorithm. Prominent among these advances is the complete conquest of two ill-conditioning problems which afflicted all previous discrete ordinate implementations: (1) the computation of eigenvalues and eigenvectors and (2) the inversion of the matrix determining the constants of integration. Copies of the FORTRAN program on microcomputer diskettes are available for interested users. PMID:20531783

  14. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.

  15. FAST TRACK COMMUNICATION Multi-component generalizations of the CH equation: geometrical aspects, peakons and numerical examples

    NASA Astrophysics Data System (ADS)

    Holm, D. D.; Ivanov, R. I.

    2010-12-01

    The Lax pair formulation of the two-component Camassa-Holm equation (CH2) is generalized to produce an integrable multi-component family, CH(n, k), of equations with n components and 1 <= |k| <= n velocities. All of the members of the CH(n, k) family show fluid-dynamics properties with coherent solitons following particle characteristics. We determine their Lie-Poisson Hamiltonian structures and give numerical examples of their soliton solution behaviour. We concentrate on the CH(2, k) family with one or two velocities, including the CH(2, -1) equation in the Dym position of the CH2 hierarchy. A brief discussion of the CH(3, 1) system reveals the underlying graded Lie-algebraic structure of the Hamiltonian formulation for CH(n, k) when n >= 3. Fondly recalling our late friend Jerry Marsden.

  16. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer - I. Algorithms and numerical methods

    NASA Astrophysics Data System (ADS)

    Harries, Tim J.

    2015-04-01

    We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
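
    The photon packet splitting idea, keeping some Monte Carlo packets alive deep inside optically thick cells by trading weight for multiplicity, reduces to a few lines. The threshold, multiplicity and packet layout below are invented for illustration.

```python
def maybe_split(packet, tau_cell, tau_split=5.0, n_split=4):
    """Split a photon packet entering an optically thick cell into
    lower-weight sub-packets; total weight is conserved, so the
    estimator stays unbiased (schematic, thresholds assumed)."""
    if tau_cell < tau_split:
        return [packet]
    w, pos, direction = packet
    return [(w / n_split, pos, direction) for _ in range(n_split)]

# A weight-1 packet entering a cell of optical depth 12 becomes four
# weight-0.25 packets, several of which can survive to sample the interior.
print(maybe_split((1.0, 0.0, 1.0), tau_cell=12.0))
```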

  17. Transient Numerical Modeling of the Combustion of Bi-Component Liquid Droplets: Methanol/Water Mixture

    NASA Technical Reports Server (NTRS)

    Marchese, A. J.; Dryer, F. L.

    1994-01-01

    This study shows that liquid mixtures of methanol and water are attractive candidates for microgravity droplet combustion experiments and associated numerical modeling. The gas phase chemistry for these droplet mixtures is conceptually simple, well understood and substantially validated. In addition, the thermodynamic and transport properties of the liquid mixture have also been well characterized. Furthermore, the results obtained in this study predict that the extinction of these droplets may be observable in ground-based drop tower experiments. Such experiments will be conducted shortly, followed by space-based experiments utilizing the NASA FSDC and DCE experiments.

  18. Numerical Modeling for Hole-Edge Cracking of Advanced High-Strength Steels (AHSS) Components in the Static Bend Test

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Mohr, William; Yang, Yu-Ping; Zelenak, Paul; Kimchi, Menachem

    2011-08-01

    Numerical modeling of local formability, such as hole-edge cracking and shear fracture in bending of AHSS, is one of the challenging issues for simulation engineers for prediction and evaluation of stamping and crash performance of materials. This is because continuum-mechanics-based finite element method (FEM) modeling requires additional input data, "failure criteria" to predict the local formability limit of materials, in addition to the material flow stress data input for simulation. This paper presents a numerical modeling approach for predicting hole-edge failures during static bend tests of AHSS structures. A local-strain-based failure criterion and a stress-triaxiality-based failure criterion were developed and implemented in LS-DYNA simulation code to predict hole-edge failures in component bend tests. The holes were prepared using two different methods: mechanical punching and water-jet cutting. In the component bend tests, the water-jet trimmed hole showed delayed fracture at the hole-edges, while the mechanical punched hole showed early fracture as the bending angle increased. In comparing the numerical modeling and test results, the load-displacement curve, the displacement at the onset of cracking, and the final crack shape/length were used. Both failure criteria also enable the numerical model to differentiate between the local formability limit of mechanical-punched and water-jet-trimmed holes. The failure criteria and static bend test developed here are useful to evaluate the local formability limit at a structural component level for automotive crash tests.

  19. Numerical Predictions on the Final Properties of Metal Injection Moulded Components after Sintering Process

    SciTech Connect

    Song, J.; Barriere, T.; Gelin, J. C.

    2007-04-07

    A macroscopic model based on a viscoplastic constitutive law is presented to describe the sintering process of metallic powder components obtained by injection moulding. The model parameters are identified by the gravitational beam-bending tests in sintering and the sintering experiments in dilatometer. The finite element simulations are carried out to predict the shrinkage, density and strength after sintering. The simulation results have been compared to the experimental ones, and a good agreement has been obtained.

  20. Impact of multi-component diffusion in turbulent combustion using direct numerical simulations

    SciTech Connect

    Bruno, Claudio; Sankaran, Vaidyanathan; Kolla, Hemanth; Chen, Jacqueline H.

    2015-08-28

    This study presents the results of DNS of a partially premixed turbulent syngas/air flame at atmospheric pressure. The objective was to assess the importance and possible effects of molecular transport on flame behavior and structure. To this purpose, DNS were performed with two proprietary DNS codes and with three different molecular diffusion transport models: fully multi-component, mixture-averaged, and imposing the Lewis number of all species to be unity.

  1. Numerical modeling evapotranspiration flux components in shrub-encroached grassland in Inner Mongolia, China

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Li, Xiao-Yan; Huang, Jie-Yu; Yang, Wen-Xin; Wang, Qi-Dan; Xu, Kun; Zheng, Xiao-Ran

    2016-04-01

    Shrub encroachment into arid grasslands occurs around the world, yet little work on shrub encroachment has been conducted in China, and its hydrological implications remain poorly investigated in arid and semiarid regions. This study combined a two-source energy balance model with a Newton-Raphson iteration scheme to simulate evapotranspiration (ET) and its components in shrub-encroached grassland (15.4% shrub coverage) in Inner Mongolia. The good agreement between modelled ET fluxes and Bowen-ratio measurements, together with the model's relative insensitivity to uncertainties/errors in the assigned model parameters and measured input variables, indicates that the model is feasible for simulating ET flux components in shrub-encroached grassland. The transpiration fraction (T/ET) accounted for 58 ± 17% during the growing season. Under the designed extreme shrub-encroachment scenarios (maximum and minimum coverage), the contribution of shrubs to local plant transpiration (Tshrub/T) was 20.06 ± 7% during the growing season. Canopy conductance was the main controlling factor of T/ET: at the diurnal scale, short-wave solar radiation was the direct influential factor, while at the seasonal scale, leaf area index (LAI) and soil water content were the direct influential factors. The seasonal variation of Tshrub/T correlates well with the ratio LAIshrub/LAI, and rainfall characteristics widened the difference between the contributions of shrubs and herbs to ecosystem evapotranspiration.

  2. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three-component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short-term over long-term average threshold detector is used for picking, and an association program is then used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) by Nagano et al., published in Geophysics in 1989. CECM is well suited to real-time use because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector, we study the detector sensitivity by rotating the signal into principal components as well as by estimating the P phase onset from a curvature function describing the CECM as opposed to the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks as compared to the traditional single-component pickers using Earthworm.
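
    As a rough schematic of the energy-partitioning idea behind CECM (not Nagano et al.'s exact formulation), one can track the ratio of vertical to horizontal short-window energy: P energy arrives preferentially on the vertical component, so the ratio jumps at a P onset, and the curvature of this function can sharpen the pick. The window length and the synthetic burst are assumptions.

```python
import numpy as np

def cecm_like(z, n, e, win=50):
    """Sliding-window vertical-to-horizontal energy ratio, a schematic
    stand-in for the CECM detector function."""
    def energy(x):
        c = np.cumsum(x.astype(float) ** 2)
        return c[win:] - c[:-win]
    return energy(z) / (energy(n) + energy(e) + 1e-12)

# Synthetic 3C noise with a P-like burst on the vertical component
rng = np.random.default_rng(6)
z, n_, e = rng.normal(0, 1, (3, 2000))
z[1000:1200] += rng.normal(0, 5, 200)
ratio = cecm_like(z, n_, e)
print(ratio[:900].mean(), ratio[1000:1150].mean())   # ratio jumps at the onset
```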

  3. Numerical solution of the Richards equation based catchment runoff model with dd-adaptivity algorithm

    NASA Astrophysics Data System (ADS)

    Kuraz, Michal

    2016-06-01

    This paper presents a pseudo-deterministic catchment runoff model based on the Richards equation [1], the governing equation for subsurface flow. The subsurface flow in a catchment is described here by two-dimensional variably saturated flow (unsaturated and saturated). The governing equation is the Richards equation with a slight modification of the time derivative term, as considered e.g. by Neuman [2]. The nonlinear nature of this problem appears in the unsaturated zone only; however, the delineation of the saturated zone boundary is a nonlinear, computationally expensive issue. The simple one-dimensional Boussinesq equation was used here as a rough estimator of the saturated zone boundary. With this estimate the dd-adaptivity algorithm (see Kuraz et al. [4, 5, 6]) can always start with an optimal subdomain split, making it possible to avoid solving huge systems of linear equations at the initial iteration level of our Richards equation based runoff model.

  4. An efficient algorithm for numerical computations of continuous densities of states

    NASA Astrophysics Data System (ADS)

    Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.

    2016-06-01

    In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a show case. A thorough study of the algorithm parameter dependence of the results is performed and compared with the analytically expected behaviour. We obtain high precision values for the critical coupling for the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results perfectly agree with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which, due to strong metastabilities developed at the pseudo-critical coupling of the system, has so far been out of reach even on supercomputers with importance sampling approaches, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed

  5. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The contemporary solution of velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.

  6. Survey and Implementation on DSP of Algorithms of Robot Paths Generation and of Numeric Control for Mobile Robot

    NASA Astrophysics Data System (ADS)

    Bouallegue, Kais; Chaari, Abdessattar

    This study proposes a numeric strategy permitting the generation of paths of any shape for scheduling the trajectories of a car-like mobile robot, where the planned motions are continuous sequences in the robot's configuration space. These paths are programmed so as to obtain various types of closed or open trajectories. The interest lies in controlling the motion of the robot from an initial position to a final position while optimizing the energy consumed in its alternated circular motion on both sides of the segment joining these two points. A new method is presented, based on a numeric approach derived from the kinematics equations of the robot. This new technique of numeric, adaptive and dynamic robot control is implemented on the DSP21065L of the SHARC family. The algorithm ensures control of the robot from an initial departure position to a final arrival position in the absence of obstacles.

  7. A critical evaluation of numerical algorithms and flow physics in complex supersonic flows

    NASA Astrophysics Data System (ADS)

    Aradag, Selin

    In this research, two complex supersonic flows are selected for the application of CFD Navier-Stokes simulations. The first test case is "Supersonic Flow over an Open Rectangular Cavity". Open cavity flow fields are remarkably complicated, with internal and external regions that are coupled via self-sustained shear layer oscillations. Supersonic flow past a cavity has numerous applications in store carriage and release. Internal carriage of stores, which can be modeled using a cavity configuration, is used for supersonic aircraft in order to reduce radar cross section, aerodynamic drag and aerodynamic heating. Supersonic, turbulent, three-dimensional unsteady flow past an open rectangular cavity is simulated to understand the physics and three-dimensional nature of the cavity flow oscillations. The influences of numerical parameters such as the numerical flux scheme, computation time and flux limiter on the computed flow are determined. Two-dimensional simulations are also performed for comparison purposes. The next test case is "The Computational Design of the Boeing/AFOSR Mach 6 Wind Tunnel". Due to huge differences between geometrical scales, this problem is both challenging and computationally intensive. It is believed that most of the experimental data obtained from conventional ground testing facilities are not reliable due to high levels of noise associated with the acoustic fluctuations from the turbulent boundary layers on the wind tunnel walls. Therefore, it is very important to have quiet testing facilities for hypersonic flow research. The Boeing/AFOSR Mach 6 Wind Tunnel at Purdue University has been designed as a quiet tunnel for which the noise level is an order of magnitude lower than that in conventional wind tunnels. However, quiet flow is achieved in the Purdue Mach 6 tunnel only for low Reynolds numbers. Early transition of the nozzle wall boundary layer has been identified as the cause of the test section noise. Separation bubbles on the bleed lip and associated

  8. New Design Methods and Algorithms for Multi-component Distillation Processes

    SciTech Connect

    2009-02-01

    This factsheet describes a research project whose main goal is to develop methods and software tools for the identification and analysis of optimal multi-component distillation configurations for reduced energy consumption in industrial processes.

  9. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficiency in the use of computational resources, it is very common to find in the realm of numerical modeling in hydro-engineering that regular linearization techniques have been applied to the nonlinear partial differential equations that arise in environmental flow studies. Sometimes this simplification is made along with the omission of nonlinear terms involved in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, QUAL2K, a traditional and one of the most commonly used water quality models, preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and computer power enhancement. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets. The first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of a non-linear equation system proved to be a flexible tool for handling the intrinsic non-linearity that emerges from the interactions occurring between the multiple variables involved in water quality studies. However, because there is a strong data limitation in

  10. All-electron formalism for total energy strain derivatives and stress tensor components for numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias

    2015-05-01

    We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids that allow accurate results to be produced. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include the detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.

  11. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartiles ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
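
    The separation rule itself is simple enough to state in code. The sketch below follows the abstract directly: per-step concentration changes outside [Q1 - IQR, Q3 + IQR] are flagged as ebullition, the rest define the diffusive background. The units, toy series and the way the two components are summarized are simplifications.

```python
import numpy as np

def separate_fluxes(conc, dt=1.0):
    """Split a closed-chamber CH4 concentration series into diffusion-
    and ebullition-derived parts using the quartile +/- IQR threshold
    on per-step concentration changes (sketch of the described rule)."""
    dc = np.diff(conc) / dt
    q1, q3 = np.percentile(dc, [25, 75])
    iqr = q3 - q1
    ebullition = (dc < q1 - iqr) | (dc > q3 + iqr)
    diffusive_slope = dc[~ebullition].mean()        # smooth background flux
    ebullition_total = dc[ebullition].sum() * dt    # sudden concentration jumps
    return diffusive_slope, ebullition_total

# Toy series: slow linear rise plus two bubble-release jumps
rng = np.random.default_rng(3)
c = np.cumsum(0.01 + rng.normal(0, 0.002, 600))
c[200:] += 0.5
c[450:] += 0.8
print(separate_fluxes(c))   # background slope ~0.01, ebullition ~1.3
```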

  12. Theory manual for FAROW version 1.1: A numerical analysis of the Fatigue And Reliability Of Wind turbine components

    SciTech Connect

    WINTERSTEIN, STEVEN R.; VEERS, PAUL S.

    2000-01-01

    Because the fatigue lifetime of wind turbine components depends on several factors that are highly variable, a numerical analysis tool called FAROW has been created to cast the problem of component fatigue life in a probabilistic framework. The probabilistic analysis is accomplished using methods of structural reliability (FORM/SORM). While the workings of the FAROW software package are defined in the user's manual, this theory manual outlines the mathematical basis. A deterministic solution for the time to failure is made possible by assuming analytical forms for the basic inputs of wind speed, stress response, and material resistance. Each parameter of the assumed forms for the inputs can be defined to be a random variable. The analytical framework is described and the solution for time to failure is derived.
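
    FAROW itself evaluates the probabilistic problem with FORM/SORM; purely to illustrate what casting fatigue life in a probabilistic framework looks like, the sketch below estimates a probability of premature failure by plain Monte Carlo over randomized inputs. Every distribution, constant and the Miner's-rule-style lifetime expression are invented for the example.

```python
import numpy as np

def prob_failure(target_years, n_samples=100_000, seed=4):
    """P(fatigue life < target) with stress amplitude scale, S-N slope
    and material resistance as random variables (all distributions
    hypothetical; FAROW computes this kind of quantity via FORM/SORM)."""
    rng = np.random.default_rng(seed)
    stress = rng.lognormal(np.log(40.0), 0.10, n_samples)   # MPa amplitude scale
    m = rng.normal(3.0, 0.1, n_samples)                     # S-N exponent
    K = rng.lognormal(np.log(2e13), 0.3, n_samples)         # material resistance
    cycles_per_year = 5e6
    life_years = K / stress**m / cycles_per_year            # Miner's-rule style
    return life_years.mean(), np.mean(life_years < target_years)

mean_life, p_fail = prob_failure(20.0)
print(f"mean life {mean_life:.0f} yr, P(life < 20 yr) = {p_fail:.3f}")
```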

  13. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377

  14. A hybrid color space for skin detection using genetic algorithm heuristic search and principal component analysis technique.

    PubMed

    Maktabdar Oghaz, Mahdi; Maarof, Mohd Aizaini; Zainal, Anazida; Rohani, Mohd Foad; Yaghoubyan, S Hadi

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377

  15. Real-space, mean-field algorithm to numerically calculate long-range interactions

    NASA Astrophysics Data System (ADS)

    Cadilhe, A.; Costa, B. V.

    2016-02-01

    Long-range interactions are known to be difficult to treat in statistical mechanics models. Some approaches introduce a cutoff in the interactions or make use of reaction field methods. However, those treatments are of limited use, in particular close to phase transitions. The use of open boundary conditions allows the sum of the long-range interactions over the entire system to be carried out; however, this approach demands a sum over all degrees of freedom in the system, which makes a numerical treatment prohibitive. Techniques like the Ewald summation or the fast multipole expansion account for the exact interactions but are still limited to a few thousand particles. In this paper we introduce a novel mean-field approach to treat long-range interactions. The method is based on the division of the system into cells. In the inner cell, which contains the particle in sight, the 'local' interactions are computed exactly; the 'far' contributions are computed, for each of the remaining cells, as the average over the particles inside that cell interacting with the particle in sight. Using this approach, the large- and small-cell limits are exact. At a fixed cell size, the method also becomes exact in the limit of large lattices. We have applied the procedure to the two-dimensional anisotropic dipolar Heisenberg model. A detailed comparison between our method, the exact calculation and the cutoff radius approximation was made. Our results show that the cutoff-cell approach outperforms any cutoff radius approach, as it maintains the long-range memory present in these interactions, contrary to the cutoff radius approximation. Besides that, we calculated the critical temperature and the critical behavior of the specific heat of the anisotropic Heisenberg model using our method. The results are in excellent agreement with extensive Monte Carlo simulations using Ewald summation.
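
    A one-dimensional toy makes the cutoff-cell construction concrete: the field at a site sums nearby cells exactly and replaces each far cell by its average moment acting from the cell centre. The 1/r^3 coupling, cell size and neighbour depth below are assumptions; the paper treats the 2D anisotropic dipolar Heisenberg model.

```python
import numpy as np

def field_cutoff_cell(s, i, cell=8, n_near=1):
    """Long-range field at site i of a 1D Ising-like chain with 1/r^3
    couplings: near cells summed exactly, far cells replaced by their
    mean moment at the cell centre (toy version of the cutoff-cell idea)."""
    h = 0.0
    my_cell = i // cell
    for c in range(len(s) // cell):
        a, b = c * cell, (c + 1) * cell
        if abs(c - my_cell) <= n_near:                # near cells: exact sum
            h += sum(s[j] / abs(j - i) ** 3 for j in range(a, b) if j != i)
        else:                                         # far cells: mean field
            r = abs((a + b - 1) / 2.0 - i)
            h += (b - a) * s[a:b].mean() / r ** 3
    return h

rng = np.random.default_rng(5)
s = rng.choice([-1.0, 1.0], size=64)
exact = sum(s[j] / abs(j - 32) ** 3 for j in range(64) if j != 32)
print(field_cutoff_cell(s, 32), exact)   # close agreement
```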

  16. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code is verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computers having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been

  17. Numerical analysis of a smart composite material mechanical component using an embedded long period grating fiber sensor

    NASA Astrophysics Data System (ADS)

    Savastru, Dan; Miclos, Sorin; Savastru, Roxana; Lancranjan, Ion I.

    2015-05-01

    Results obtained by FEM analysis of a smart mechanical part manufactured of reinforced composite materials with embedded long period grating fiber sensors (LPGFS) used for operation monitoring are presented. Smart fiber-reinforced composite materials are of fundamental importance across a broad range of industrial applications, such as the aerospace industry. The main purpose of the performed numerical analysis is the final improved design of composite mechanical components, providing feedback useful for further automation of the whole system. The performed numerical analysis points to a correlation between the composite material's internal mechanical loads applied to the LPGFS and the peak wavelength shifts of the NIR absorption bands. One main idea of the performed numerical analysis relies on the observed fact that an LPGFS embedded inside a composite material undergoes mechanical loads created by the micro-scale roughness of the composite fiber network. The effect of this mechanical load is bending of the LPGFS. The shift towards the IR and the broadening of the absorption bands appearing in the LPGFS transmission spectra are modeled according to this observation using the coupled-mode approach.

  18. Properties of and Algorithms for Fitting Three-Way Component Models with Offset Terms

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    2006-01-01

    Prior to a three-way component analysis of a three-way data set, it is customary to preprocess the data by centering and/or rescaling them. Harshman and Lundy (1984) considered that three-way data actually consist of a three-way model part, which in fact pertains to ratio scale measurements, as well as additive "offset" terms that turn the ratio…

  19. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective…
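
    A minimal sketch of the GA search idea follows, assuming a toy surrogate fitness that merely rewards inter-fraction gaps near 17 h; the paper instead scores protocols with its calibrated EMT6/Ro simulator.

      import random
      random.seed(1)

      N_FRACTIONS, HORIZON = 10, 168           # 10 doses in a one-week window
      def random_protocol():
          return sorted(random.sample(range(HORIZON), N_FRACTIONS))

      def fitness(times):                      # toy surrogate, not the model
          gaps = [b - a for a, b in zip(times, times[1:])]
          return -sum((g - 17) ** 2 for g in gaps)

      def crossover(a, b):
          cut = random.randrange(1, N_FRACTIONS)
          child = sorted(set(a[:cut] + b[cut:]))
          while len(child) < N_FRACTIONS:      # repair duplicates
              child.append(random.randrange(HORIZON))
              child = sorted(set(child))
          return child[:N_FRACTIONS]

      def mutate(p, rate=0.1):
          return sorted(t if random.random() > rate
                        else random.randrange(HORIZON) for t in p)

      pop = [random_protocol() for _ in range(50)]
      for gen in range(100):
          pop.sort(key=fitness, reverse=True)
          elite = pop[:10]                     # elitist selection
          pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                         for _ in range(40)]
      print("best protocol (h):", max(pop, key=fitness))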

  20. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    NASA Astrophysics Data System (ADS)

    Hild, Kenneth E.; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction.
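
    For readers who want to experiment, FastICA as implemented in scikit-learn can be exercised on synthetic mixtures along the lines below; the sensor count, source waveforms, and noise level are assumptions, not the fMCG setup.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 8, 2000)
      S = np.c_[np.sin(2 * t),                        # slow "maternal" rhythm
                np.sign(np.sin(5 * t)),               # faster "fetal" rhythm
                rng.standard_normal(2000) * 0.3]      # noise source
      A = rng.standard_normal((6, 3))                 # 6 virtual sensors
      X = S @ A.T                                     # observed mixtures
      ica = FastICA(n_components=3, random_state=0)
      S_hat = ica.fit_transform(X)                    # recovered sources
      print(S_hat.shape)                              # (2000, 3)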

  1. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-10-01

    The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, giving rise to the difficulty of obtaining the phase velocity directly. In addition, composite structures usually show obvious anisotropy, and the complex structural style of real aircraft further accentuates it, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impact on complex composite structures with obviously improved accuracy.
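
    The scanning step of MUSIC can be illustrated with a textbook narrowband version for a uniform linear array; the array geometry, source angles, and noise level below are assumed for illustration and differ from the paper's single-frequency-component variant on a sensor array.

      import numpy as np

      def music_spectrum(X, n_sources, d=0.5):
          """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
          X: (n_sensors, n_snapshots) complex snapshots; d: spacing in
          wavelengths."""
          n = X.shape[0]
          R = X @ X.conj().T / X.shape[1]              # sample covariance
          w, V = np.linalg.eigh(R)                     # ascending eigenvalues
          En = V[:, : n - n_sources]                   # noise subspace
          grid = np.linspace(-90.0, 90.0, 361)
          m = np.arange(n)
          P = np.empty_like(grid)
          for k, th in enumerate(np.deg2rad(grid)):
              a = np.exp(-2j * np.pi * d * m * np.sin(th))   # steering vector
              P[k] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
          return grid, P

      rng = np.random.default_rng(3)
      m = np.arange(8)
      steer = lambda deg: np.exp(-2j * np.pi * 0.5 * m * np.sin(np.deg2rad(deg)))
      s = rng.standard_normal((2, 500)) + 1j * rng.standard_normal((2, 500))
      X = (np.outer(steer(-20.0), s[0]) + np.outer(steer(35.0), s[1])
           + 0.1 * (rng.standard_normal((8, 500))
                    + 1j * rng.standard_normal((8, 500))))
      grid, P = music_spectrum(X, n_sources=2)
      print(f"strongest peak near {grid[np.argmax(P)]:.1f} degrees")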

  2. A component mode synthesis algorithm for multibody dynamics of wind turbines

    NASA Astrophysics Data System (ADS)

    Holm-Jørgensen, K.; Nielsen, S. R. K.

    2009-10-01

    A system reduction scheme related to a multibody formulation of wind turbine dynamics is devised. Each substructure is described in its own frame of reference, which is moving freely in the vicinity of the moving substructure, in principle without any constraints on the rigid-body part of the motion of the substructure. The system reduction is based on a component mode synthesis method, where the response of the internal degrees of freedom of the substructure is described as the quasi-static response induced by the boundary degrees of freedom via the constraint modes, superimposed on a dynamic component induced by inertial effects and internal loads. The latter component is modelled by a truncated modal expansion in fixed-interface undamped eigenmodes. The selected modal vector base for the internal dynamics ensures that the boundary degrees of freedom account for the rigid-body dynamics of the substructure, and explicitly represent the coupling degrees of freedom at the interface to the adjacent substructures. The method has been demonstrated for a blade structure, which has been modelled as two substructures. Two modelling methods have been examined: the first uses fixed-fixed eigenmodes for the innermost substructure and fixed-free eigenmodes for the outermost substructure; the other uses fixed-free eigenmodes for both substructures. The fixed-fixed method shows good correspondence with the full FE model, which is not the case for the fixed-free method due to incompatible displacements and rotations at the interface between the two substructures. Moreover, the results from the reduced model by use of constant constraint modes and constant fixed-interface modes over a large operating area for the wind turbine blade are almost identical to the full FE model.
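
    A minimal sketch of the underlying Craig-Bampton-type reduction follows, assuming a toy spring-mass chain with two boundary DOFs at the ends (not the wind turbine blade model): constraint modes carry the quasi-static boundary-induced response, and truncated fixed-interface eigenmodes carry the dynamic component.

      import numpy as np
      from scipy.linalg import eigh

      # Toy 1D chain: 6 internal DOFs (i) + 2 boundary DOFs (b) at the ends.
      n = 8
      K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # spring-chain stiffness
      M = np.eye(n)                                       # lumped mass
      b = [0, n - 1]; i = list(range(1, n - 1))
      Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
      Mii = M[np.ix_(i, i)]

      Psi = -np.linalg.solve(Kii, Kib)            # constraint (static) modes
      w2, Phi = eigh(Kii, Mii)                    # fixed-interface eigenmodes
      Phi_kept = Phi[:, :2]                       # truncated modal expansion

      # Reduction basis: boundary DOFs retained exactly; internal response =
      # quasi-static part (Psi) + truncated dynamic part (Phi_kept).
      T = np.zeros((n, 4))
      T[np.ix_(b, [0, 1])] = np.eye(2)
      T[np.ix_(i, [0, 1])] = Psi
      T[np.ix_(i, [2, 3])] = Phi_kept
      K_red, M_red = T.T @ K @ T, T.T @ M @ T
      print(K_red.shape)                          # (4, 4) reduced system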

  3. Three-Dimensional Finite Element Based Numerical Simulation of Machining of Thin-Wall Components with Varying Wall Constraints

    NASA Astrophysics Data System (ADS)

    Joshi, Shrikrishna Nandkishor; Bolar, Gururaj

    2016-06-01

    Control of part deflection and deformation during machining of low-rigidity thin-wall components is an important aspect in the manufacture of desired quality products. This paper presents a comparative study on the effect of geometry constraints on product quality during machining of thin-wall components made of an aerospace alloy, aluminum 2024-T351. Three-dimensional nonlinear finite element (FE) based simulations of machining of thin-wall parts were carried out considering three variations in the wall constraint, viz. a free wall, a wall constrained at one end, and a wall constrained at both ends. A Lagrangian-formulation-based transient FE model has been developed to simulate the interaction between the workpiece and the helical milling cutter. Johnson-Cook material and damage models were adopted to account for material behavior during the machining process, damage initiation, and chip separation. A modified Coulomb friction model was employed to define the contact between the cutting tool and the workpiece. The numerical model was validated against experimental results and found to be in good agreement. Based on the simulation results it was noted that deflection and deformation were maximum for the thin wall constrained at one end in comparison with the other cases. It was also noted that three-dimensional finite element simulations help to better predict product quality during precision manufacturing of thin-wall components.
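
    For reference, the Johnson-Cook flow stress used in such models evaluates as below. The constants are values quoted for Al 2024-T351 in the machining literature and should be verified against the paper before use.

      import numpy as np

      def johnson_cook_stress(eps, eps_rate, T,
                              A=352e6, B=440e6, n=0.42, C=0.0083, m=1.0,
                              eps0=1.0, T_room=293.0, T_melt=775.0):
          """Johnson-Cook flow stress in Pa: (A + B*eps^n) strain hardening,
          logarithmic rate sensitivity, and thermal softening terms."""
          T_star = (T - T_room) / (T_melt - T_room)
          return ((A + B * eps**n)
                  * (1.0 + C * np.log(max(eps_rate / eps0, 1e-12)))
                  * (1.0 - T_star**m))

      print(f"{johnson_cook_stress(0.1, 1e3, 400.0)/1e6:.0f} MPa")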

  4. Numerical predictions of the thermal behaviour and resultant effects of grouting cements while setting prosthetic components in bone.

    PubMed

    Quarini, G L; Learmonth, I D; Gheduzzi, S

    2006-07-01

    Acrylic cements are commonly used to attach prosthetic components in joint replacement surgery. The cements set in short periods of time by a complex polymerization of initially liquid monomer compounds into solid structures with accompanying significant heat release. Two main problems arise from this form of fixation: the first is the potential damage caused by the temperature excursion, and the second is incomplete reaction leaving active monomer compounds, which can potentially be slowly released into the patient. This paper presents a numerical model predicting the temperature-time history in an idealized prosthetic-cement-bone system. Using polymerization kinetics equations from the literature, the degree of polymerization is predicted, which is found to be very dependent on the thermal history of the setting process. Using medical literature, predictions for the degree of thermal bone necrosis are also made. The model is used to identify the critical parameters controlling thermal and unreacted monomer distributions. PMID:16898219

  5. γ-TEMPy: Simultaneous Fitting of Components in 3D-EM Maps of Their Assembly Using a Genetic Algorithm

    PubMed Central

    Pandurangan, Arun Prasad; Vasishtan, Daven; Alber, Frank; Topf, Maya

    2015-01-01

    We have developed a genetic algorithm for building macromolecular complexes using only a 3D electron microscopy density map and the atomic structures of the relevant components. For efficient sampling the method uses map feature points calculated by vector quantization. The fitness function combines a mutual information score that quantifies the goodness of fit with a penalty score that helps to avoid clashes between components. Testing the method on ten assemblies (containing 3–8 protein components) and simulated density maps at 10, 15, and 20 Å resolution resulted in identification of the correct topology in 90%, 70%, and 60% of the cases, respectively. We further tested it on four assemblies with experimental maps at 7.2–23.5 Å resolution, showing the ability of the method to identify the correct topology in all cases. We have also demonstrated the importance of map feature-point quality for assembly fitting in the absence of additional experimental information. PMID:26655474

  6. Individual differences in the components of children's and adults' information processing for simple symbolic and non-symbolic numeric decisions.

    PubMed

    Thompson, Clarissa A; Ratcliff, Roger; McKoon, Gail

    2016-10-01

    How do speed and accuracy trade off, and what components of information processing develop as children and adults make simple numeric comparisons? Data from symbolic and non-symbolic number tasks were collected from 19 first graders (mean age = 7.12 years), 26 second/third graders (mean age = 8.20 years), 27 fourth/fifth graders (mean age = 10.46 years), and 19 seventh/eighth graders (mean age = 13.22 years). The non-symbolic task asked children to decide whether an array of asterisks had a larger or smaller number than 50, and the symbolic task asked whether a two-digit number was greater than or less than 50. We used a diffusion model analysis to estimate components of processing in tasks from accuracy, correct and error response times, and response time (RT) distributions. Participants who were accurate on one task were accurate on the other task, and participants who made fast decisions on one task made fast decisions on the other task. Older participants extracted a higher quality of information from the stimulus arrays, were more willing to make a decision, and were faster at encoding, transforming the stimulus representation, and executing their responses. Individual participants' accuracy and RTs were uncorrelated. Drift rate and boundary settings were significantly related across tasks, but they were unrelated to each other. Accuracy was mainly determined by drift rate, and RT was mainly determined by boundary separation. We concluded that RT and accuracy operate largely independently. PMID:27239983
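
    The diffusion model itself can be sketched by simulating noisy evidence accumulation to a response boundary; the drift, boundary, and non-decision-time values below are illustrative, not the fitted values from the study.

      import numpy as np
      rng = np.random.default_rng(0)

      def diffusion_trial(drift, boundary, ndt=0.3, dt=1e-3, noise=1.0):
          """One simulated decision: accumulate evidence from 0 until
          +/- boundary/2 is hit; return (choice, RT). ndt models the
          non-decision time (encoding + motor execution)."""
          x, t = 0.0, 0.0
          while abs(x) < boundary / 2:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return (x > 0), t + ndt

      trials = [diffusion_trial(drift=1.5, boundary=2.0) for _ in range(2000)]
      acc = np.mean([c for c, _ in trials])
      rt = np.mean([t for _, t in trials])
      print(f"accuracy={acc:.2f}, mean RT={rt:.2f}s")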

  7. Diagnosing basal cell carcinoma in vivo by near-infrared Raman spectroscopy: a Principal Components Analysis discrimination algorithm

    NASA Astrophysics Data System (ADS)

    Silveira, Landulfo, Jr.; Silveira, Fabrício L.; Bodanese, Benito; Pacheco, Marcos Tadeu T.; Zângaro, Renato A.

    2012-02-01

    This work demonstrated the discrimination between basal cell carcinoma (BCC) and normal human skin in vivo using near-infrared Raman spectroscopy. Spectra were obtained from the suspected lesion prior to resection surgery. After tissue withdrawal, biopsy fragments were submitted to histopathology. Spectra were also obtained from the adjacent, clinically normal skin. Raman spectra were measured using a Raman spectrometer (830 nm) with a fiber Raman probe. Comparing the mean spectra of BCC with those of normal skin, important differences were found in the 800-1000 cm-1 and 1250-1350 cm-1 regions (vibrations of C-C and amide III, respectively, from lipids and proteins). A discrimination algorithm based on Principal Components Analysis and Mahalanobis distance (PCA/MD) could discriminate the spectra of the two tissues with high sensitivity and specificity.
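
    A minimal sketch of the PCA/Mahalanobis discrimination step on stand-in data (synthetic arrays with an artificial band difference, not Raman spectra) could look as follows.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      # Stand-in "spectra": 40 normal and 40 lesion samples, 300 bins, with
      # a small class-dependent band difference added around bins 100-140.
      normal = rng.standard_normal((40, 300))
      lesion = rng.standard_normal((40, 300)); lesion[:, 100:140] += 0.8
      X = np.vstack([normal, lesion]); y = np.array([0]*40 + [1]*40)

      pca = PCA(n_components=5).fit(X)
      Z = pca.transform(X)
      # Mahalanobis distance to each class mean over the PC scores.
      icov = np.linalg.inv(np.cov(Z.T))
      means = [Z[y == k].mean(axis=0) for k in (0, 1)]
      def classify(z):
          d = [np.sqrt((z - m) @ icov @ (z - m)) for m in means]
          return int(np.argmin(d))
      pred = np.array([classify(z) for z in Z])
      print("training sensitivity:", np.mean(pred[y == 1] == 1))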

  8. Computation of aircraft component flow fields at transonic Mach numbers using a three-dimensional Navier-Stokes algorithm

    NASA Technical Reports Server (NTRS)

    Shrewsbury, George D.; Vadyak, Joseph; Schuster, David M.; Smith, Marilyn J.

    1989-01-01

    A computer analysis was developed for calculating steady (or unsteady) three-dimensional aircraft component flow fields. This algorithm, called ENS3D, can compute the flow field for the following configurations: diffuser duct/thrust nozzle, isolated wing, isolated fuselage, wing/fuselage with or without integrated inlet and exhaust, nacelle/inlet, nacelle (fuselage) afterbody/exhaust jet, complete transport engine installation, and multicomponent configurations using a zonal grid generation technique. Solutions can be obtained for subsonic, transonic, or hypersonic freestream speeds. The algorithm can solve either the Euler equations for inviscid flow, the thin-shear-layer Navier-Stokes equations for viscous flow, or the full Navier-Stokes equations for viscous flow. The flow field solution is determined on a body-fitted computational grid. A fully implicit alternating-direction-implicit method is employed for the solution of the finite difference equations. For viscous computations, either a two-layer eddy-viscosity turbulence model or the k-epsilon two-equation transport model can be used to achieve mathematical closure.

  9. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which could be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds from the 3D chromatogram directly is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can separate a 3D chromatogram into chromatogram peaks and spectra successfully, even when they severely overlap. The experiments also show that our method is effective on a real HPLC-DAD data set. Conclusions Our method can separate 3D chromatograms successfully without knowing the number of compounds in advance; it is fast and effective. PMID:25474487

  10. Design of a micro lapping system based on double-feedback control algorithm for manufacturing optical micro components

    NASA Astrophysics Data System (ADS)

    Che, Lin; Li, Guo; Wang, Bo; Ding, Fei; Mao, Xing; Dong, Wenxia

    2014-08-01

    This paper presents a micro lapping machine tool dedicated to manufacturing high-precision optical micro components with 3-D micro structures; it can remove the damaged surface layer efficiently. In order to control the machining process precisely, a double-feedback control strategy is proposed and implemented. The lapping force signal from the clamp is fed back simultaneously with the position signal from grating-scale closed-loop devices. With a position-keeping function, a dual-stage drive micro-displacement servo system is used to provide the desired performance in the vertical feeding direction. A random lapping trace is formed by combining two mutually perpendicular horizontal linear motions. A clamp with a micro force detection function is designed to monitor the machining process and control the lapping force. Based on force feedback, a tool auto-checking strategy is conducted to realize tool checking in the limited tiny space. Corresponding experiments are undertaken to test the properties of the machine tool, and optical micro components are manufactured successfully. The optical components are measured and analyzed before and after processing. The experimental results show that the position-keeping accuracy of the dual-stage feed drive system can reach ±0.02 μm, the resolution of motion control can reach 20 nm, and the Sa value of the processed component can reach 0.0882 μm. Surface quality is improved obviously and the damaged surface layer is removed efficiently. The theoretical and experimental results show the validity of the machine tool and the control algorithm.

  11. Geoscientific Model Development: A new EGU Journal for Descriptions of Numerical Models of the Earth System and its components

    NASA Astrophysics Data System (ADS)

    Rutt, I.; Lunt, D.; Hargreaves, J.; Annan, J.; Sander, R.

    2007-12-01

    Geoscientific Model Development (GMD), launching in January 2008, will be an international scientific journal dedicated to the publication and public discussion of the description, development and benchmarking of numerical models of the Earth System and its components. Manuscript types considered for peer-reviewed publication will be: model descriptions, model inter-comparisons, benchmarking papers, and technical papers. In encouraging full publication of Earth System Models we have two main goals. The primary goal is to promote the efficient and effective development of the models, through the clear presentation of the techniques from which all other developers can improve their own models. A secondary goal is to provide increased credibility to the Earth System Science field by creating a space within which models can be openly presented and critically discussed, and their results reproduced and validated. A welcome side-effect will be the formal, peer-reviewed, recognition of the work of Earth System Model developers. It is anticipated that model description papers will form the backbone of GMD. These will comprehensively describe the underlying science behind the models, and will also include details often omitted from more traditional papers, such as the numerical schemes employed. The papers should be somewhat more advanced than internal technical reports. For example, the inclusion of discussion of the scope of applicability and limitations of the approach adopted is expected. In order to enable full peer review of the models, evidence of model output should also be provided, with comparison to standard benchmarks, observations and/or other model output included as appropriate. The publication will potentially consist of three parts: the main paper, a user manual, and the source code (ideally supported by some summary outputs from test case simulations).

  12. Numerical modeling of Non-isothermal two-phase two-component flow process with phase change phenomena in the porous media

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Shao, H.; Thullner, M.; Kolditz, O.

    2014-12-01

    In applications to deep geothermal reservoirs, thermal recovery processes, and contaminated groundwater sites, multiphase multicomponent flow and transport processes are often considered the most important underlying physical processes. In particular, the behavior of phase appearance and disappearance is critical to the performance of many geo-reservoirs, and great interest exists in the scientific community in simulating this coupled process. This work is devoted to the modeling and simulation of two-phase, two-component flow and transport in porous media, in which phase change behavior under non-isothermal conditions is considered. In this work, we have implemented the algorithm developed by Marchand et al. into the open source scientific software OpenGeoSys. The governing equations are formulated in terms of the molar fraction of the light component and the mean pressure as the persistent primary variables, which leads to a fully coupled nonlinear PDE system. One important advantage of this approach is that it avoids switching primary variables between single-phase and two-phase zones, so that this uniform system can be applied to describe phase change behavior. On the other hand, due to the number of unknown variables, closure relationships are also formulated to close the whole equation system using the approach of complementarity constraints. As for the numerical scheme: the standard Galerkin finite element method is applied for space discretization, a fully implicit scheme for time discretization, and the Newton-Raphson method is utilized for the global linearization, as well as for the closure relationships. The model is verified on a test case developed to simulate the heat pipe problem. This benchmark involves two-phase two-component flow in saturated/unsaturated porous media under non-isothermal conditions, including phase change and mineral-water geochemical reactive transport processes. The simulation results will be…

  13. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
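
    In practice, a Levinson-type Toeplitz solver is available in SciPy; a quick check against a dense solve on an assumed symmetric positive definite Toeplitz system:

      import numpy as np
      from scipy.linalg import solve_toeplitz, toeplitz

      # SPD Toeplitz system solved by SciPy's Levinson-Durbin recursion,
      # O(n^2) instead of O(n^3) for a dense factorization.
      c = 0.5 ** np.arange(6)                  # first column defines the matrix
      b = np.arange(6, dtype=float)
      x_lev = solve_toeplitz(c, b)             # Levinson-type solver
      x_ref = np.linalg.solve(toeplitz(c), b)  # dense reference
      print(np.allclose(x_lev, x_ref))         # True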

  14. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

    Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piecewise-linear, least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
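
    The second-order MacCormack marching referred to here can be illustrated on a scalar advection equation (a toy analogue of the time integration, not the quasi-1D Euler solver itself):

      import numpy as np

      # MacCormack predictor-corrector for u_t + a u_x = 0, periodic domain.
      nx, a, cfl = 200, 1.0, 0.8
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx = x[1] - x[0]; dt = cfl * dx / a
      u = np.exp(-200 * (x - 0.3) ** 2)                 # smooth initial pulse
      for _ in range(int(0.4 / dt)):
          # predictor: forward difference
          up = u - a * dt / dx * (np.roll(u, -1) - u)
          # corrector: backward difference, averaged with the old solution
          u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))
      print(f"pulse peak after advection: {u.max():.3f}")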

  15. REVIEW OF THE GOVERNING EQUATIONS, COMPUTATIONAL ALGORITHMS, AND OTHER COMPONENTS OF THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    EPA Science Inventory

    This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabilities…

  16. Independent component analysis (ICA) algorithms for improved spectral deconvolution of overlapped signals in 1H NMR analysis: application to foods and related products.

    PubMed

    Monakhova, Yulia B; Tsikin, Alexey M; Kuballa, Thomas; Lachenmeier, Dirk W; Mushtakova, Svetlana P

    2014-05-01

    The major challenge facing NMR spectroscopic mixture analysis is the overlapping of signals and the resulting difficulty of recovering the structures for identification of the individual components and of integrating separated signals for quantification. In this paper, various independent component analysis (ICA) algorithms [mutual information least dependent component analysis (MILCA); stochastic non-negative ICA (SNICA); joint approximate diagonalization of eigenmatrices (JADE); and robust, accurate, direct ICA algorithm (RADICAL)] as well as deconvolution methods [simple-to-use-interactive self-modeling mixture analysis (SIMPLISMA) and multivariate curve resolution-alternating least squares (MCR-ALS)] are applied for simultaneous (1)H NMR spectroscopic determination of organic substances in complex mixtures. Among others, we studied constituents of the following matrices: honey, soft drinks, and liquids used in electronic cigarettes. Good quality spectral resolution of up to eight-component mixtures was achieved (correlation coefficients between resolved and experimental spectra were not less than 0.90). In general, the relative errors in the recovered concentrations were below 12%. The SIMPLISMA and MILCA algorithms were found to be preferable for NMR spectra deconvolution and showed similar performance. The proposed method was used for the analysis of authentic samples. The resolved ICA concentrations match well with the results of reference gas chromatography-mass spectrometry as well as with the MCR-ALS algorithm used for comparison. ICA deconvolution considerably improves the application range of direct NMR spectroscopy for the analysis of complex mixtures. PMID:24604756

  17. A Numerical Approach to Ion Channel Modelling Using Whole-Cell Voltage-Clamp Recordings and a Genetic Algorithm

    PubMed Central

    Gurkiewicz, Meron; Korngreen, Alon

    2007-01-01

    The activity of trans-membrane proteins such as ion channels is the essence of neuronal transmission. The currently most accurate method for determining ion channel kinetic mechanisms is single-channel recording and analysis. Yet, the limitations and complexities in interpreting single-channel recordings discourage many physiologists from using them. Here we show that a genetic search algorithm in combination with a gradient descent algorithm can be used to fit whole-cell voltage-clamp data to kinetic models with a high degree of accuracy. Previously, ion channel stimulation traces were analyzed one at a time, the results of these analyses being combined to produce a picture of channel kinetics. Here the entire set of traces from all stimulation protocols are analysed simultaneously. The algorithm was initially tested on simulated current traces produced by several Hodgkin-Huxley–like and Markov chain models of voltage-gated potassium and sodium channels. Currents were also produced by simulating levels of noise expected from actual patch recordings. Finally, the algorithm was used for finding the kinetic parameters of several voltage-gated sodium and potassium channels models by matching its results to data recorded from layer 5 pyramidal neurons of the rat cortex in the nucleated outside-out patch configuration. The minimization scheme gives electrophysiologists a tool for reproducing and simulating voltage-gated ion channel kinetics at the cellular level. PMID:17784781

  18. Numerical simulation of two-dimensional heat transfer in composite bodies with application to de-icing of aircraft components. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Chao, D. F. K.

    1983-01-01

    Transient, numerical simulations of the de-icing of composite aircraft components by electrothermal heating were performed for a two-dimensional rectangular geometry. The implicit Crank-Nicolson formulation was used to ensure stability of the finite-difference heat conduction equations, and the phase change in the ice layer was simulated using the Enthalpy method. The Gauss-Seidel point iterative method was used to solve the system of difference equations. Numerical solutions illustrating de-icer performance for various composite aircraft structures and environmental conditions are presented. Comparisons are made with previous studies. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
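
    The Crank-Nicolson step at the heart of such a scheme looks as follows for a single homogeneous 1D layer with fixed end temperatures; the material values are assumptions, and the composite layers and enthalpy-based phase change of the actual model are omitted.

      import numpy as np

      # Crank-Nicolson for T_t = alpha T_xx with Dirichlet ends: a heated
      # face at x=0 and a cold face at x=L (illustrative values only).
      nx, alpha, dx, dt = 50, 1e-5, 1e-3, 0.1
      r = alpha * dt / (2 * dx**2)
      A = (np.diag(np.full(nx, 1 + 2*r)) + np.diag(np.full(nx-1, -r), 1)
           + np.diag(np.full(nx-1, -r), -1))
      B = (np.diag(np.full(nx, 1 - 2*r)) + np.diag(np.full(nx-1, r), 1)
           + np.diag(np.full(nx-1, r), -1))
      for Mtx in (A, B):                       # identity rows fix the ends
          Mtx[0, :], Mtx[-1, :] = 0.0, 0.0
          Mtx[0, 0] = Mtx[-1, -1] = 1.0
      T = np.full(nx, -10.0); T[0] = 20.0      # heater at x=0, ice-cold far end
      for _ in range(2000):
          T = np.linalg.solve(A, B @ T)        # one CN step per iteration
      print(f"mid-plane temperature: {T[nx//2]:.2f} C")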

  19. A Numerical Study of Material Parameter Sensitivity in the Production of Hard Metal Components Using Powder Compaction

    NASA Astrophysics Data System (ADS)

    Andersson, Daniel C.; Lindskog, Per; Staf, Hjalmar; Larsson, Per-Lennart

    2014-06-01

    Modeling of hard metal powder inserts is analyzed based on a continuum mechanics approach. In particular, one commonly used cutting insert geometry is studied. For a given advanced constitutive description of the powder material, the material parameter space required to accurately model the mechanical behavior is determined. These findings are then compared with the corresponding parameter space that can possibly be determined from a combined numerical/experimental analysis of uniaxial die powder compaction utilizing inverse modeling. The analysis is pertinent to a particular WC/Co powder and the finite element method is used in the numerical investigations of the mechanical behavior of the cutting insert.

  20. A spectral tau algorithm based on Jacobi operational matrix for numerical solution of time fractional diffusion-wave equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Doha, E. H.; Baleanu, D.; Ezz-Eldien, S. S.

    2015-07-01

    In this paper, an efficient and accurate spectral numerical method is presented for solving second- and fourth-order fractional diffusion-wave equations and fractional wave equations with damping. The proposed method is based on the Jacobi tau spectral procedure together with the Jacobi operational matrix for fractional integrals, described in the Riemann-Liouville sense. The main characteristic of this approach is to reduce such problems to systems of algebraic equations in the unknown expansion coefficients of the sought-for spectral approximations. The validity and effectiveness of the method are demonstrated by solving five numerical examples. The examples are presented in the form of tables and graphs to make comparisons with the results obtained by other methods and with the exact solutions easier.

  1. Two- and Three-Dimensional Numerical Experiments Representing Two Limiting Cases of an In-Line Pair of Finger Seal Components

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Steinetz, B. M.; Kudriavtsev, V. V.; Proctor, M. P.; Kiraly, L. James (Technical Monitor)

    2002-01-01

    The work presented here concerns the numerical development and simulation of the flow, pressure patterns and motion of a pair of fingers arranged behind each other and axially aligned in-line. The fingers represent the basic elemental component of a Finger Seal (FS) and form a tight seal around the rotor. Yet their flexibility allows compliance with the rotor motion and, in a passive-adaptive mode, with the hydrodynamic forces induced by the flowing fluid. While the paper does not treat the actual staggered configuration of a finger seal, the in-line arrangement represents a first step towards that final goal. The numerical 2-D (axial-radial) and 3-D results presented herein were obtained using a commercial package (CFD-ACE+). Both models use an integrated numerical approach, which couples the hydrodynamic fluid model (Navier-Stokes based) to the solid mechanics code that models the compliance of the fingers.

  2. Contrasting sediment melt and fluid signatures for magma components in the Aeolian Arc: Implications for numerical modeling of subduction systems

    NASA Astrophysics Data System (ADS)

    Zamboni, Denis; Gazel, Esteban; Ryan, Jeffrey G.; Cannatelli, Claudia; Lucchi, Federico; Atlas, Zachary D.; Trela, Jarek; Mazza, Sarah E.; De Vivo, Benedetto

    2016-06-01

    The complex geodynamic evolution of the Aeolian Arc in the southern Tyrrhenian Sea resulted in melts with some of the most pronounced along-arc geochemical variations in incompatible trace elements and radiogenic isotopes worldwide, likely reflecting variations in arc magma source components. Here we elucidate the effects of subducted components on magma sources along different sections of the Aeolian Arc by evaluating the systematics of elements depleted in the upper mantle but enriched in the subducting slab, focusing on a new set of B, Be, As, and Li measurements. Based on our new results, we suggest that both hydrous fluids and silicate melts were involved in element transport from the subducting slab to the mantle wedge. Hydrous fluids strongly influence the chemical composition of lavas in the central arc (Salina), while a melt component from subducted sediments probably plays a key role in metasomatic reactions in the mantle wedge below the peripheral islands (Stromboli). We also noted similarities in subducting components between the Aeolian Archipelago, the Phlegrean Fields, and other volcanic arcs/arc segments around the world (e.g., Sunda, Cascades, Mexican Volcanic Belt). We suggest that the presence of melt components in all these locations resulted from an increase in the mantle wedge temperature by inflow of hot asthenospheric material from tears/windows in the slab or from around the edges of the sinking slab.

  3. Numerical Investigation of Thermal Distribution and Pressurization Behavior in Helium Pressurized Cryogenic Tank by Introducing a Multi-component Model

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Yanzhong, Li; Zhan, Liu; Kang, Zhu

    An improved CFD model involving a multi-component gas mixture in the ullage is constructed to predict the pressurization behavior of a cryogenic tank, considering the presence of pressurizing helium. A temperature difference between the local fluid and its saturation temperature corresponding to the vapor partial pressure is taken as the phase change driving force. As a practical application of the model, hydrogen and oxygen tanks with helium pressurization are numerically simulated using the multi-component gas model. The results show that the improved model produces higher ullage temperatures and pressures and lower wall temperatures than those without the multi-component treatment. The phase change has a slight influence on the pressurization performance due to the small quantities involved.

  4. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1981-01-01

    A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
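
    The Curtis-matrix formulation reduces the cooling-rate computation to a matrix acting on the Planck source function at each level; the sketch below uses a made-up 5-level matrix purely to show the structure of that product, not the 15 micron CO2 band values.

      import numpy as np

      # Curtis-matrix structure: heating/cooling profile = C @ B, where B is
      # the Planck source function per level and C encodes cooling-to-space
      # (diagonal) and layer-exchange (off-diagonal) terms. All numbers toy.
      levels = 5
      B = np.linspace(1.0, 0.2, levels)            # Planck function vs height
      C = -0.1 * np.eye(levels)                    # cooling-to-space diagonal
      C += 0.02 * (np.eye(levels, k=1) + np.eye(levels, k=-1))  # exchange
      heating = C @ B                              # per-level rate (toy units)
      print(heating)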

  5. A Numerical Algorithm to Calculate the Pressure Distribution of the TPS Front End Due to Desorption Induced by Synchrotron Radiation

    SciTech Connect

    Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.

    2010-06-23

    The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) foresees outgassing induced by photons from both bending magnets and insertion devices. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.

  6. Numerical analysis of second harmonic generation for THz-wave in a photonic crystal waveguide using a nonlinear FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Saito, Kyosuke; Tanabe, Tadao; Oyama, Yutaka

    2016-04-01

    We have presented a numerical analysis describing the behavior of second harmonic generation (SHG) in the THz regime, taking into account both the linear and nonlinear optical susceptibility. We employed a nonlinear finite-difference time-domain (nonlinear FDTD) method to simulate the SHG output characteristics of a THz photonic crystal (PC) waveguide based on a semi-insulating gallium phosphide crystal. Unique phase-matching conditions, originating from photonic band dispersion with low group velocity, appear and shape the SHG output characteristics. This numerical study provides spectral information on the SHG output in the THz PC waveguide. THz PC waveguides are among the candidate active nonlinear optical devices in the THz regime, and the nonlinear FDTD method is a powerful tool for designing photonic nonlinear THz devices.

  7. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media

    NASA Technical Reports Server (NTRS)

    Stamnes, Knut; Tsay, S.-CHEE; Jayaweera, Kolf; Wiscombe, Warren

    1988-01-01

    The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.

  8. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media

    NASA Astrophysics Data System (ADS)

    Stamnes, Knut; Tsay, S.-Chee; Jayaweera, Kolf; Wiscombe, Warren

    1988-06-01

    The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.

  9. 3D-radiation hydro simulations of disk-planet interactions. I. Numerical algorithm and test cases

    NASA Astrophysics Data System (ADS)

    Klahr, H.; Kley, W.

    2006-01-01

    We study the evolution of an embedded protoplanet in a circumstellar disk using the 3D-Radiation Hydro code TRAMP, and treat the thermodynamics of the gas properly in three dimensions. The primary interest of this work lies in the demonstration and testing of the numerical method. We show how far numerical parameters can influence the simulations of gap opening. We study a standard reference model under various numerical approximations. Then we compare the commonly used locally isothermal approximation to the radiation hydro simulation using an equation for the internal energy. Models with different treatments of the mass accretion process are compared. Often mass accumulates in the Roche lobe of the planet, creating a hydrostatic atmosphere around the planet. The gravitational torques induced by the spiral pattern of the disk onto the planet are not strongly affected in their average magnitude, but the short-time-scale fluctuations are stronger in the radiation hydro models. An interesting result of this work lies in the analysis of the temperature structure around the planet. The most striking effect of treating the thermodynamics properly is the formation of a hot pressure-supported bubble around the planet with a pressure scale height of H/R ≈ 0.5 rather than a thin Keplerian circumplanetary accretion disk.

  10. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

    NASA Astrophysics Data System (ADS)

    Motheau, E.; Abraham, J.

    2016-05-01

    A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-split strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
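
    The operator-split idea can be shown in miniature with Strang splitting on a toy reaction-diffusion equation: an exact half-step for the reaction wrapped around an explicit diffusion step. This is the general pattern only, not the HOLOMAC stiff-chemistry/Runge-Kutta-Chebyshev scheme.

      import numpy as np

      # Strang split step for u_t = D u_xx - k u: half reaction (exact
      # exponential), full explicit diffusion, half reaction. Toy values.
      nx, D, k = 100, 1e-3, 0.5
      x = np.linspace(0, 1, nx); dx = x[1] - x[0]
      dt = 0.2 * dx**2 / D                     # explicit diffusion stability
      u = np.exp(-100 * (x - 0.5) ** 2)
      for _ in range(100):
          u *= np.exp(-k * dt / 2)             # reaction, half step
          u[1:-1] += D * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])  # diffusion
          u *= np.exp(-k * dt / 2)             # reaction, half step
      print(f"remaining mass: {u.sum()*dx:.4f}")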

  11. A numerical study of fixed frequency reflectometry measurements of plasma filaments with radial and poloidal velocity components

    NASA Astrophysics Data System (ADS)

    Vicente, J.; da Silva, F.; Heuraux, S.; Manso, M. E.; Conway, G. D.; Silva, C.

    2014-11-01

    A 2D finite-difference time-domain full-wave code is used to simulate the measurements of plasma filaments with fixed frequency O-mode reflectometry. The plasma is modeled by a linear slab plasma plus a Gaussian perturbation propagating in a direction that can vary from poloidal to radial. The plasma background density gradient is chosen in agreement with the steep edge transport barrier of H-modes in the ASDEX Upgrade (AUG) tokamak. Illustrative results are presented and different types of reflectometry responses are observed depending on filament sizes and propagation directions. The reflectometry signatures obtained here with numerical simulations support previous experimental findings on filament measurements.

  12. Numerical modeling of submarine landslide-generated tsunamis as a component of the Alaska Tsunami Inundation Mapping Project

    USGS Publications Warehouse

    Suleimani, E.; Lee, H.; Haeussler, Peter J.; Hansen, R.

    2006-01-01

    Tsunami waves are a threat for many Alaska coastal locations, and community preparedness plays an important role in saving lives and property. The Geophysical Institute of the University of Alaska Fairbanks participates in the National Tsunami Hazard Mitigation Program by evaluating and mapping potential tsunami inundation of selected coastal communities in Alaska. We develop hypothetical tsunami scenarios based on the parameters of potential underwater earthquakes and landslides for a specified coastal community. The modeling results are delivered to the community for local tsunami hazard planning and construction of evacuation maps. For the community of Seward, located at the head of Resurrection Bay, tsunami potential from tectonic and submarine landslide sources must be evaluated for comprehensive inundation mapping. Recent multi-beam and high-resolution sub-bottom profile surveys of Resurrection Bay show medium- and large-sized blocks, which we interpret as landslide debris that slid in the 1964 earthquake. Numerical modeling of the 1964 underwater slides and tsunamis will help to validate and improve the models. In order to construct tsunami inundation maps for Seward, we combine two different approaches for estimating tsunami risk. First, we observe inundation and runup due to tsunami waves generated by the 1964 earthquake. Next we model tsunami wave dynamics in Resurrection Bay caused by superposition of the local landslide-generated waves and the major tectonic tsunami. We compare modeled and observed values from 1964 to calibrate the numerical tsunami model. In our second approach, we perform a landslide tsunami hazard assessment using underwater slope stability analysis and available characteristics of potentially unstable sediment bodies. The approach produces hypothetical underwater slides and resulting tsunami waves. We use a three-dimensional numerical model of an incompressible viscous slide with full interaction between the slide…

  13. A numerical stress based approach for predicting failure in NBG-18 nuclear graphite components with verification problems

    NASA Astrophysics Data System (ADS)

    Hindley, Michael P.; Mitchell, Mark N.; Erasmus, Christiaan; McMurtry, Ross; Becker, Thorsten H.; Blaine, Deborah C.; Groenwold, Albert A.

    2013-05-01

    This paper presents a methodology that can be used for calculating the probability of failure of graphite core components in a nuclear core design, such as that of the Pebble Bed Modular Reactor. The proposed methodology is shown to calculate the failure of multiple geometries using the parameters obtained from tensile specimen test data. Experimental testing of various geometries is undertaken to verify the results. The analysis of the experimental results and a discussion on the accuracy of the failure prediction methodology are presented. The analysis is done at 50% probability of failure as well as lower probabilities of failure.

  14. Experimental and numerical investigation of flow field and heat transfer from electronic components in a rectangular channel with an impinging jet

    NASA Astrophysics Data System (ADS)

    Calisir, Tamer; Fevzi Koseoglu, M.; Kilic, Mustafa; Baskaya, Senol

    2015-05-01

    Thermal control of electronic components is a continuously emerging problem as power loads keep increasing. The present study is mainly focused on experimental and numerical investigation of impinging jet cooling of 18 (3 × 6 array) flush-mounted electronic components under a constant heat flux condition inside a rectangular channel, in which air, following impingement, is forced to exit in a single direction along the channel formed by the jet orifice plate and the impingement plate. Copper blocks represent the heat-dissipating electronic components. Inlet flow velocities to the channel were measured by using a Laser Doppler Anemometer (LDA) system. Flow field observations were performed using Particle Image Velocimetry (PIV), and thermocouples were used for temperature measurements. Experiments and simulations were conducted for Re = 4000 - 8000 at a fixed value of H = 10 × Dh. Flow field results were presented, and heat transfer results were interpreted using the flow measurement observations. Numerical results were validated with experimental data, and it was observed that the results are in agreement with the experiments.

  15. A Numerical Feasibility Study of Three-Component Induction Logging for Three Dimensional Imaging About a Single Borehole

    SciTech Connect

    ALUMBAUGH, DAVID L.; WILT, MICHAEL J.

    1999-08-01

    A theoretical analysis has been completed for a proposed induction logging tool designed to yield data which are used to generate three-dimensional images of the region surrounding a well bore. The proposed tool consists of three mutually orthogonal magnetic dipole sources and multiple 3-component magnetic field receivers offset at different distances from the source. The initial study employs sensitivity functions which are derived by applying the Born Approximation to the integral equation that governs the magnetic fields generated by a magnetic dipole source located within an inhomogeneous medium. The analysis has shown that the standard coaxial configuration, where the magnetic moments of both the source and the receiver are aligned with the axis of the well bore, offers the greatest depth of sensitivity away from the borehole compared to any other source-receiver combination. In addition, this configuration offers the best signal-to-noise characteristics. Due to the cylindrically symmetric nature of the tool sensitivity about the borehole, the data generated by this configuration can only be interpreted in terms of a two-dimensional cylindrical model. For a full 3D interpretation, the two radial components of the magnetic field that are orthogonal to each other must be measured. Coil configurations where both the source and receiver are perpendicular to the tool axis can also be employed to increase resolution and provide some directional information, but they offer no true 3D information.

  16. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  17. Direct Numerical Simulation of Boiling Multiphase Flows: State-of-the-Art, Modeling, Algorithmic and Computer Needs

    SciTech Connect

    Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.

    2007-04-01

    The state of the art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and the level of current success of their application to several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWR reactors, requiring high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and the non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.

  18. Linearized iterative least-squares (LIL): a parameter-fitting algorithm for component separation in multifrequency cosmic microwave background experiments such as Planck

    NASA Astrophysics Data System (ADS)

    Khatri, Rishi

    2015-08-01

    We present an efficient algorithm for least-squares parameter fitting, optimized for component separation in multifrequency cosmic microwave background (CMB) experiments. We sidestep some of the problems associated with non-linear optimization by taking advantage of the quasi-linear nature of the foreground model. We demonstrate our algorithm, linearized iterative least-squares (LIL), on the publicly available Planck sky model FFP6 simulations and compare our results with those of other algorithms. We work at full Planck resolution and show that degrading the resolution of all channels to that of the lowest frequency channel is not necessary. Finally, we present results for publicly available Planck data. Our algorithm is extremely fast, fitting six parameters to the seven lowest Planck channels at full resolution (50 million pixels) in less than 160 CPU minutes (or a few minutes running in parallel on a few tens of cores). LIL is therefore easily scalable to future experiments, which may have even higher resolution and more frequency channels. We also, naturally, propagate the uncertainties in different parameters due to noise in the maps, as well as the degeneracies between the parameters, to the final errors in the parameters using the Fisher matrix. One indirect application of LIL could be a front-end for Bayesian parameter fitting to find the maximum likelihood to be used as the starting point for Gibbs sampling. We show that for rare components, such as carbon monoxide emission, present in a small fraction of sky, the optimal approach should combine parameter fitting with model selection. LIL may also be useful in other astrophysical applications that satisfy quasi-linearity criteria.
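
    The core of any such linearized iterative least-squares fit is re-linearization about the current parameters; below is a generic Gauss-Newton sketch on an assumed quasi-linear model (a toy power law, not Khatri's foreground model or code).

      import numpy as np

      # Fit y = a * x**b by iteratively linearizing about (a, b) and solving
      # a linear least-squares problem for the parameter update.
      rng = np.random.default_rng(0)
      x = np.linspace(1.0, 5.0, 200)
      y = 2.0 * x**1.5 + 0.05 * rng.standard_normal(200)

      a, b = 1.0, 1.0                                    # initial guess
      for _ in range(10):
          model = a * x**b
          J = np.column_stack([x**b,                     # d(model)/da
                               a * x**b * np.log(x)])    # d(model)/db
          da, db = np.linalg.lstsq(J, y - model, rcond=None)[0]
          a, b = a + da, b + db
      print(f"a={a:.3f}, b={b:.3f}")                     # ~2.0 and ~1.5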

  19. Update of upper level turbulence forecast by reducing unphysical components of topography in the numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Park, Sang-Hun; Kim, Jung-Hoon; Sharman, Robert D.; Klemp, Joseph B.

    2016-07-01

    On 2 November 2015, unrealistically large areas of light-or-stronger turbulence were predicted by the WRF-RAP (Weather Research and Forecasting Rapid Refresh)-based operational turbulence forecast system over the western U.S. mountainous regions; these were not supported by available observations. These areas are reduced by applying additional terrain averaging, which damps out the unphysical components of small-scale (~2Δx) energy aloft induced by unfiltered topography in the initialization of the WRF model. First, a control simulation with the same design as the WRF-RAP model shows that the large-scale atmospheric conditions are well simulated but that strong turbulence is predicted over the western mountainous region. Four experiments with different levels of additional terrain smoothing applied in the initialization of the model integrations significantly reduce spurious mountain-wave-like features, leading to turbulence forecasts more consistent with the observed data.

  20. Analysis of algorithms predicting blood:air and tissue:blood partition coefficients from solvent partition coefficients for prevalent components of JP-8 jet fuel.

    PubMed

    Sterner, Teresa R; Goodyear, Charles D; Robinson, Peter J; Mattie, David R; Burton, G Allen

    2006-08-01

    Algorithms predicting tissue and blood partition coefficients (PCs) from solvent properties were compared to assess their usefulness in a petroleum mixture physiologically based pharmacokinetic/pharmacodynamic model. Measured blood:air and tissue:blood PCs for rat and human tissues were sought from literature resources for 14 prevalent jet fuel (JP-8) components. Average experimental PCs were compared with predicted PCs calculated using algorithms from 9 published sources. Algorithms chosen used solvent PCs (octanol:water, saline or water:air, and oil:air coefficients) due to the relative accessibility of these parameters. Tissue:blood PCs were calculated from ratios of predicted tissue:air and experimental blood:air values (PCEB). Of the 231 calculated values, 27% fell within ±20% of the experimental PC values. Physiologically based equations (based on the water and lipid components of a tissue type) did not perform as well as empirical equations (derived from linear regression of experimental PC data) and hybrid equations (physiological parameters and empirical factors combined) for the jet fuel components. The major limitation encountered in this analysis was the lack of experimental data for the selected JP-8 constituents. PCEB values were compared with tissue:blood PCs calculated from ratios of predicted tissue:air and predicted blood:air values (PCPB). Overall, 68% of PCEB values had smaller absolute % errors than PCPB values. If calculated PC values must be used in models, a comparison of experimental and predicted PCs for chemically similar compounds would estimate the expected error level in calculated values. PMID:16766479

  1. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGESBeta

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  2. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  3. Experimental and numerical investigations on tailored tempering process of a U-channel component with tailored mechanical properties

    SciTech Connect

    Tang, B. T.; Bruschi, S.; Ghiotti, A.; Bariani, P. F.

    2013-12-16

    Hot stamping of quenchable ultra high strength steels currently represents a promising forming technology for the manufacturing of safety and crash relevant parts. For some applications, such as B-pillars and other structural components that may undergo impact loading, it may be desirable to create regions of the part with tailored mechanical properties. In the paper, a laboratory-scale hot stamped U-channel was manufactured by using a segmented die, which was independently heated by cartridge heaters and cooled by water channels. Local hardness values as low as 289 HV can be achieved using a heated die temperature of 400°C while maintaining a hardness level of 490 HV in the fully cooled region. If the die temperature was increased to 450°C, the Vickers hardness of elements in the heated region was 227 HV, a reduction in hardness of more than 50%. Optical microscopy was used to verify the microstructure of the as-quenched phases with respect to the heated die temperatures. The FE model of the lab-scale process was developed to capture the overall hardness trends that were observed in the experiments.

  4. Technical Note: A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, M.; Schulz-Hanke, M.; Garcia Alba, J.; Jurisch, N.; Hagemann, U.; Sachs, T.; Sommer, M.; Augustin, J.

    2015-08-01

    Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex. Serious challenges therefore arise in terms of mechanistic process understanding, the identification of potential environmental drivers and the calculation of reliable CH4 emission estimates. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components, which facilitates the identification of underlying dynamics and potential environmental drivers. Flux separation is based on ebullition-related sudden concentration changes during single measurements. A variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Automation of data processing is achieved by using an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was tested using flux measurement data (July to September 2013) from a former fen grassland site that was converted into a shallow lake as a result of rewetting. Ebullition and diffusion contributed 46 and 55%, respectively, to total CH4 emissions, which is comparable to values previously reported in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period.
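
    A minimal Python sketch of the quartile/IQR filter described above; the threshold factor k, the units, and the function names are illustrative assumptions, not the authors' R implementation.

      import numpy as np

      def separate_fluxes(conc, dt, k=1.5):
          """Split a chamber CH4 concentration series into diffusion- and
          ebullition-derived flux components via an IQR outlier filter."""
          dc = np.diff(conc) / dt                  # rate of concentration change
          q1, q3 = np.percentile(dc, [25, 75])
          iqr = q3 - q1
          ebu = (dc < q1 - k * iqr) | (dc > q3 + k * iqr)   # sudden jumps
          total_time = dt * dc.size
          diffusion = dc[~ebu].mean()              # smooth background slope
          ebullition = np.sum(dc[ebu] * dt) / total_time    # episodic share
          return diffusion, ebullition             # same units, e.g. ppm per s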

  5. Quasi-analytical determination of noise-induced error limits in lidar retrieval of aerosol backscatter coefficient by the elastic, two-component algorithm.

    PubMed

    Sicard, Michaël; Comerón, Adolfo; Rocadenbosch, Francisco; Rodríguez, Alejandro; Muñoz, Constantino

    2009-01-10

    The elastic, two-component algorithm is the most common inversion method for retrieving the aerosol backscatter coefficient from ground- or space-based backscatter lidar systems. A quasi-analytical formulation of the statistical error associated with the aerosol backscatter coefficient caused by the use of real, noise-corrupted lidar signals in the two-component algorithm is presented. The error expression depends on the signal-to-noise ratio along the inversion path and takes into account "instantaneous" effects, the effect of the signal-to-noise ratio at the range where the aerosol backscatter coefficient is being computed, as well as "memory" effects, namely, both the effect of the signal-to-noise ratio in the cell where the inversion is started and the cumulative effect of the noise between that cell and the actual cell where the aerosol backscatter coefficient is evaluated. An example is shown to illustrate how the "instantaneous" effect is reduced when averaging the noise-contaminated signal over a number of cells around the range where the inversion is started. PMID:19137026

  6. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
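
    The essence of the approach can be sketched in a few lines: principal components trained on SO2-free spectra absorb the background variability, while the SO2 Jacobian column carries the signal. Everything below (array shapes, the number of components, the names) is an illustrative assumption rather than the operational algorithm.

      import numpy as np

      def pca_so2_retrieval(radiances_bg, spectrum, so2_jacobian, n_pc=20):
          """radiances_bg: (n_obs, n_wavelength) SO2-free training spectra.
          spectrum: measured spectrum to be fitted.
          so2_jacobian: dSpectrum/dSO2 from a radiative transfer model."""
          mean = radiances_bg.mean(axis=0)
          _, _, vt = np.linalg.svd(radiances_bg - mean, full_matrices=False)
          pcs = vt[:n_pc]                      # leading principal components
          # Design matrix: PCs capture background physics and measurement
          # artifacts; the Jacobian column carries the SO2 signal.
          A = np.column_stack([pcs.T, so2_jacobian])
          coef, *_ = np.linalg.lstsq(A, spectrum - mean, rcond=None)
          return coef[-1]                      # SO2 vertical column estimate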

  7. A real-time algorithm for the harmonic estimation and frequency tracking of dominant components in fusion plasma magnetic diagnostics

    SciTech Connect

    Alves, D.; Coelho, R. [Associação Euratom Collaboration: JET-EFDA Contributors

    2013-08-15

    The real-time tracking of instantaneous quantities such as frequency, amplitude, and phase of components immersed in noisy signals has been a common problem in many scientific and engineering fields, such as power systems and delivery, telecommunications, and acoustics, for the past decades. In magnetically confined fusion research, extracting this sort of information from magnetic signals can be of valuable assistance in, for instance, feedback control of detrimental magnetohydrodynamic modes and disruption avoidance mechanisms, by monitoring instability growth or anticipating mode-locking events. This work is focused on nonlinear Kalman filter based methods for tackling this problem. Similar methods have already proven their merits and have been successfully employed in this scientific domain in applications such as amplitude demodulation for the motional Stark effect diagnostic. In the course of this work, three approaches are described, compared, and discussed using magnetic signals from Joint European Torus tokamak plasma discharges for benchmarking purposes.
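
    As an illustration of the nonlinear-Kalman-filter idea, the sketch below tracks the amplitude, angular frequency, and phase of a single dominant component with a textbook extended Kalman filter; the state model and noise settings are assumptions, not the authors' formulations.

      import numpy as np

      def ekf_track(y, dt, x0=(1.0, 2 * np.pi * 5.0, 0.0), q=1e-4, r=0.1):
          x = np.array(x0)              # state: amplitude A, ang. freq w, phase th
          P = np.eye(3)
          Q, R = q * np.eye(3), r
          F = np.array([[1., 0., 0.],   # A and w follow random walks;
                        [0., 1., 0.],   # th integrates w over each step
                        [0., dt, 1.]])
          est = []
          for yk in y:
              x = F @ x                 # predict
              P = F @ P @ F.T + Q
              A, w, th = x
              H = np.array([np.sin(th), 0.0, A * np.cos(th)])  # meas. Jacobian
              S = H @ P @ H + R
              K = P @ H / S             # Kalman gain
              x = x + K * (yk - A * np.sin(th))                # update
              P = P - np.outer(K, H) @ P
              est.append(x.copy())
          return np.array(est)          # per-sample (A, w, th) estimates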

  8. Thermodynamic Calculation of n-COMPONENT Eutectic Mixtures

    NASA Astrophysics Data System (ADS)

    Brunet, L.; Caillard, J.; André, P.

    This paper presents a simple numerical method to calculate the eutectic mixture composition and melting temperature. Using a Newton-Raphson method to solve the nonlinear problem, the calculation is possible for an n-component eutectic. We tested this algorithm on inorganic and organic mixtures. A better correlation between experimental and numerical results was found for the organic compounds.
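
    Under ideal-solution assumptions, the eutectic condition reduces to the Schröder-van Laar equations, and the Newton-Raphson iteration becomes a one-variable solve for the temperature at which the liquidus compositions sum to one. The sketch below uses illustrative thermodynamic data and may differ from the authors' exact formulation.

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      def eutectic(dH, Tm, T0=300.0, tol=1e-8):
          """dH: melting enthalpies (J/mol); Tm: melting temperatures (K).
          Solve sum_i x_i(T) = 1 by Newton-Raphson for the eutectic T."""
          dH, Tm = np.asarray(dH), np.asarray(Tm)
          x = lambda T: np.exp(-(dH / R) * (1.0 / T - 1.0 / Tm))
          T = T0
          for _ in range(100):
              g = x(T).sum() - 1.0                  # residual of sum condition
              dg = np.sum(x(T) * (dH / R) / T**2)   # dg/dT
              T_new = T - g / dg
              if abs(T_new - T) < tol:
                  break
              T = T_new
          return T, x(T)   # eutectic temperature and composition

      T_e, comp = eutectic(dH=[18e3, 22e3], Tm=[353.0, 395.0])  # toy binary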

  9. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM.

    PubMed

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-07-01

    Data segmentation and object rendering is required for localization super-resolution microscopy, fluorescent photoactivation localization microscopy (FPALM), and direct stochastic optical reconstruction microscopy (dSTORM). We developed and validated methods for segmenting objects based on Delaunay triangulation in 3D space, followed by facet culling. We applied them to visualize mitochondrial nucleoids, which confine DNA in complexes with mitochondrial (mt) transcription factor A (TFAM) and gene expression machinery proteins, such as mt single-stranded-DNA-binding protein (mtSSB). Eos2-conjugated TFAM visualized nucleoids in HepG2 cells, which was compared with dSTORM 3D immunocytochemistry of TFAM, mtSSB, or DNA. The localized fluorophores of FPALM/dSTORM data were segmented using Delaunay triangulation into polyhedron models and by principal component analysis (PCA) into general PCA ellipsoids. The PCA ellipsoids were normalized to the smoothed volume of the polyhedrons, or by the net unsmoothed Delaunay volume, and remodeled into rotational ellipsoids to obtain models, termed DVRE. The most frequent size of ellipsoid nucleoid model imaged via TFAM was 35 × 45 × 95 nm; 35 × 45 × 75 nm for mtDNA cores; and 25 × 45 × 100 nm for nucleoids imaged via mtSSB. Nucleoids encompassed different point densities and wide size ranges, speculatively due to different activity stemming from different TFAM/mtDNA stoichiometry/density. Considering the twofold lower axial vs. lateral resolution, only bulky DVRE models with an aspect ratio >3 and tilted toward the xy-plane were considered to be two proximal nucleoids, suspected to occur after division following mtDNA replication. The existence of proximal nucleoids in mtDNA dSTORM 3D images of mtDNA "doubling" supported possible direct observation of mt nucleoid division after mtDNA replication. PMID:26846371
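
    A compact sketch of the geometric chain described above: Delaunay triangulation of the localized fluorophores, a volume from the resulting tetrahedra, and a PCA ellipsoid from the point covariance. Facet culling and the DVRE normalization are omitted, and the 2-sigma axis scaling is an assumption.

      import numpy as np
      from scipy.spatial import Delaunay

      def nucleoid_model(points):
          """points: (n, 3) localized fluorophore coordinates (nm)."""
          tri = Delaunay(points)                   # 3D tetrahedralization
          t = points[tri.simplices]                # (n_tet, 4, 3) vertices
          # Sum of tetrahedron volumes (convex-hull volume without culling).
          vol = np.abs(np.einsum('ij,ij->i',
                                 t[:, 0] - t[:, 3],
                                 np.cross(t[:, 1] - t[:, 3],
                                          t[:, 2] - t[:, 3]))).sum() / 6.0
          # PCA ellipsoid: principal axes from the covariance eigensystem.
          evals, evecs = np.linalg.eigh(np.cov(points.T))
          semi_axes = 2.0 * np.sqrt(evals)         # ~2-sigma ellipsoid (assumed)
          return vol, semi_axes, evecs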

  10. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In this method, asymptotic expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  11. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  12. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm

    NASA Astrophysics Data System (ADS)

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.

    2015-05-01

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which permit considerable improvements in their durability and performance. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of material characteristics, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposition time of the polymer bulk heterojunction to the action of an external electric field can lead to a low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposition time, the nanophases align in the direction of the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, in a reduction of the number of exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and the active layer components, we conclude that too low or too high an affinity of an electrode surface for one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency. For other particle-numbers and -sizes

  13. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm

    SciTech Connect

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.

    2015-05-14

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which permit considerable improvements in their durability and performance. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of material characteristics, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposition time of the polymer bulk heterojunction to the action of an external electric field can lead to a low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposition time, the nanophases align in the direction of the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, in a reduction of the number of exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and the active layer components, we conclude that too low or too high an affinity of an electrode surface for one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency. For other particle-numbers and -sizes

  14. High order hybrid numerical simulations of two dimensional detonation waves

    NASA Technical Reports Server (NTRS)

    Cai, Wei

    1993-01-01

    In order to study multi-dimensional unstable detonation waves, a high order numerical scheme suitable for calculating the detailed transverse wave structures of multidimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so different numerical techniques can be applied for different components of detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A yields B. Several cases of unstable two dimensional detonation waves are simulated and detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.

  15. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  16. Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search

    PubMed Central

    2014-01-01

    Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology present a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of the planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell
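
    The connected-component step can be illustrated with standard scientific-Python tools; the distance threshold and the array layout below are assumptions, not PlanformDB conventions.

      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import connected_components

      def gather_components(cell_xy, radius=1.5):
          """Group cells within `radius` of each other into regions that
          become the nodes of a morphology graph."""
          n = len(cell_xy)
          pairs = np.array(list(cKDTree(cell_xy).query_pairs(radius)))
          if len(pairs) == 0:                     # no touching cells at all
              return n, np.arange(n)
          adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                           shape=(n, n))          # sparse adjacency matrix
          n_comp, labels = connected_components(adj, directed=False)
          return n_comp, labels                   # one region label per cell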

  17. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  19. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.

  20. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  1. Programmer's guide for LIFE2's rainflow counting algorithm

    SciTech Connect

    Schluter, L.L.

    1991-01-01

    The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. The numerical formulation of the code uses a series of cycle count matrices to describe the cyclic stress states imposed upon the turbine. In this formulation, each stress cycle is counted or "binned" according to the magnitude of its mean stress and alternating stress components and by the operating condition of the turbine. A set of numerical algorithms has been incorporated into the LIFE2 code. These algorithms determine the cycle count matrices for a turbine component using stress-time histories of the imposed stress states. This paper describes the design decisions that were made and explains the implementation of these algorithms using Fortran 77. 7 refs., 7 figs.
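
    A simplified three-point rainflow sketch conveys the idea of binning cycles by mean and alternating stress into a cycle count matrix; it ignores the residue (unclosed half-cycles) that a production code such as LIFE2 must handle, and the bin edges are illustrative assumptions.

      import numpy as np

      def turning_points(stress):
          s = np.asarray(stress, dtype=float)
          d = np.diff(s)
          keep = np.where(d[:-1] * d[1:] < 0)[0] + 1   # local extrema indices
          return np.r_[s[0], s[keep], s[-1]]

      def rainflow_matrix(stress, mean_bins, alt_bins):
          stack, cycles = [], []
          for s in turning_points(stress):
              stack.append(s)
              # Three-point rule: the inner range closes a full cycle when
              # the newest range is at least as large as the previous one.
              while (len(stack) >= 3 and
                     abs(stack[-1] - stack[-2]) >= abs(stack[-2] - stack[-3])):
                  a, b = stack[-3], stack[-2]
                  cycles.append(((a + b) / 2.0, abs(b - a) / 2.0))
                  del stack[-3:-1]           # drop the counted pair, keep tip
          if not cycles:                     # residue-only histories
              return np.zeros((len(mean_bins) - 1, len(alt_bins) - 1))
          means, alts = np.array(cycles).T
          H, _, _ = np.histogram2d(means, alts, bins=[mean_bins, alt_bins])
          return H                           # cycle count matrix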

  2. A preliminary numerical evaluation of a parallel algorithm for approximating the values and subgradients of the recourse function in a stochastic program with complete recourse

    SciTech Connect

    Lessor, K.S.

    1988-08-26

    The parallel algorithm of Ariyawansa, Sorensen, and Wets for approximating the values and subgradients of the recourse function in a stochastic program with complete recourse is implemented and timing results are reported for limited experimental trials. 14 refs., 6 figs., 8 tabs.

  3. A method of obtaining signal components of residual carrier signal with their power content and computer simulation

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1993-01-01

    A novel algorithm to obtain all signal components of a residual carrier signal with any number of channels is presented. The phase modulation type may be NRZ-L or split phase (Manchester). The algorithm also provides a simple way to obtain the power contents of the signal components. Steps to recognize the signal components that influence the carrier tracking loop and the data tracking loop at the receiver are given. A computer program for numerical computation is also provided.
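
    For square-wave (NRZ or Manchester) phase modulation, each channel with modulation index m_i splits power by factors cos²(m_i) and sin²(m_i), so the power of every component of the composite signal is a product of such factors. The sketch below enumerates these products for illustrative modulation indices; it is a textbook power-split calculation, not the paper's full algorithm.

      import numpy as np
      from itertools import product

      def component_powers(mod_indices):
          """Return the power fraction of every signal component (carrier
          plus all channel/intermodulation products); fractions sum to 1."""
          comps = {}
          for choice in product((0, 1), repeat=len(mod_indices)):
              frac = 1.0
              for m, c in zip(mod_indices, choice):
                  frac *= np.sin(m) ** 2 if c else np.cos(m) ** 2
              comps[choice] = frac        # key (0,...,0) -> residual carrier
          return comps

      powers = component_powers([0.8, 1.1])   # two data channels, radians
      # powers[(0, 0)] is the carrier fraction; the values sum to 1.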

  4. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories and the oil and gas, petrochemical, and refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.

  5. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories and the oil and gas, petrochemical, and refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
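
    The Expectation Maximization reconstruction referred to above is, in its basic form, the MLEM multiplicative update; a minimal dense-matrix sketch (the system matrix A and the sinogram y are assumed given) makes the update rule concrete.

      import numpy as np

      def mlem(A, y, n_iter=50, eps=1e-12):
          """MLEM: x_{k+1} = x_k / (A^T 1) * A^T( y / (A x_k) ).
          A: (n_detector, n_voxel) system matrix; y: measured counts."""
          x = np.ones(A.shape[1])                 # uniform initial activity
          sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
          for _ in range(n_iter):
              proj = A @ x                        # forward projection
              ratio = y / np.maximum(proj, eps)   # measured / estimated
              x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
          return x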

  6. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.
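
    One common way to obtain such components from grid data, shown here only as a stand-in for Misner's specific algorithm, is a least-squares fit of spherical harmonics to the grid points lying in a thin radial shell; the shell selection and conventions below are assumptions.

      import numpy as np
      from scipy.special import sph_harm

      def shell_coefficients(x, y, z, f, lmax):
          """x, y, z, f: 1D arrays for grid points already selected in a
          thin shell (r > 0). Returns {(l, m): a_lm}."""
          r = np.sqrt(x**2 + y**2 + z**2)
          theta = np.arctan2(y, x) % (2 * np.pi)  # azimuth (scipy convention)
          phi = np.arccos(z / r)                  # polar angle
          cols, labels = [], []
          for l in range(lmax + 1):
              for m in range(-l, l + 1):
                  cols.append(sph_harm(m, l, theta, phi))
                  labels.append((l, m))
          Y = np.column_stack(cols)               # basis values at grid points
          a, *_ = np.linalg.lstsq(Y, f.astype(complex), rcond=None)
          return dict(zip(labels, a))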

  7. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  8. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and the detection of anomalies within the power grid. State-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multidimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multidimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multidimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence, using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We use recent advances in tensor decomposition techniques, which reduce the computational complexity of monitoring the change between successive windows and detecting anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
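
    Step (a) of the approach can be sketched as follows: each sliding window is summarized by its dominant singular vector, the angle between successive vectors forms a univariate series, and a simple z-score test flags candidates. The window length, step, and threshold are illustrative assumptions.

      import numpy as np

      def svd_change_series(X, win=32, step=8):
          """X: (n_samples, n_channels) multivariate block."""
          vs = []
          for s in range(0, X.shape[0] - win + 1, step):
              _, _, vt = np.linalg.svd(X[s:s + win], full_matrices=False)
              vs.append(vt[0])                    # dominant right singular vector
          scores = []
          for v0, v1 in zip(vs, vs[1:]):
              c = np.clip(abs(v0 @ v1), 0.0, 1.0) # |cos| handles sign ambiguity
              scores.append(np.arccos(c))         # angle between window subspaces
          scores = np.array(scores)
          z = (scores - scores.mean()) / (scores.std() + 1e-12)
          return scores, np.where(z > 3.0)[0]     # candidate anomaly indices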

  9. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  10. Evaluation of measured and simulated turbulent components of a snow cover energy balance model in order to refine the turbulent transfer algorithm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Energy balance models use physically based principles to simulate snow cover accumulation and melt. Snobal, a snow cover energy balance model, uses a flux-profile approach to calculating the turbulent flux (sensible and latent heat flux) components of the energy balance. Historically, validation dat...

  11. Numerical analysis of the harmonic components of the Bragg wavelength content in spectral responses of apodized fiber Bragg gratings written by means of a phase mask with a variable phase step height.

    PubMed

    Osuch, Tomasz

    2016-02-01

    The influence of the complex interference patterns created by a phase mask with variable diffraction efficiency in apodized fiber Bragg grating (FBG) formation on their reflectance spectra is studied. The effect of the significant contributions of the zeroth and higher (m>±1) diffraction orders on the Bragg wavelength peak and its harmonic components is analyzed numerically. The results obtained for Gaussian and tanh apodization profiles are compared with similar data calculated for a uniform grating. It is demonstrated that when an apodized FBG is written using a phase mask with variable diffraction efficiency, significant enhancement of the harmonic components and a reduction of the Bragg wavelength peak in the grating spectral response are observed. This is particularly noticeable for the Gaussian apodization profile due to the substantial contributions of phase mask sections with relatively small phase steps in the FBG formation. PMID:26831768

  12. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol

    NASA Astrophysics Data System (ADS)

    Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.

    2016-06-01

    In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithms, were developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least-squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL-1 for CXZ, ACF and PAR, in order, by both PCCA and MCR, while the PLS model was built for the three compounds, each in the range of 2-10 μg mL-1. The results obtained from the proposed methods were statistically compared with a reported one. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. They are found suitable for the determination of the studied drugs in bulk powder and tablets.

  13. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol.

    PubMed

    Toubar, Safaa S; Hegazy, Maha A; Elshahed, Mona S; Helmy, Marwa I

    2016-06-15

    In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithms, were developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least-squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL(-1) for CXZ, ACF and PAR, in order, by both PCCA and MCR, while the PLS model was built for the three compounds, each in the range of 2-10 μg mL(-1). The results obtained from the proposed methods were statistically compared with a reported one. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. They are found suitable for the determination of the studied drugs in bulk powder and tablets. PMID:27038581

  14. Sensor failure detection and isolation in flexible structures using the eigensystem realization algorithm

    NASA Astrophysics Data System (ADS)

    Zimmerman, David C.; Lyde, Terri L.

    Sensor failure detection and isolation (FDI) for flexible structures is approached from a system realization perspective. Instead of using hardware or analytical model redundancy, system realization is utilized to provide an experimental model based redundancy. The FDI algorithm utilizes the eigensystem realization algorithm to determine a minimum-order state space realization of the structure in the presence of noisy measurements. The FDI algorithm utilizes statistical comparisons of successive realizations to detect and isolate the failed sensor component. Because of the way in which the FDI algorithm is formulated, it is also possible to classify the failure mode of the sensor. Results are presented using both numerically simulated and actual experimental data.

  15. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  16. Sequential transformation for multiple traits for estimation of (co)variance components with a derivative-free algorithm for restricted maximum likelihood.

    PubMed

    Van Vleck, L D; Boldman, K G

    1993-04-01

    Transformation of multiple-trait records that undergo sequential selection can be used with derivative-free algorithms to maximize the restricted likelihood in estimation of covariance matrices as with derivative methods. Data transformation with appropriate parts of the Choleski decomposition of the current estimate of the residual covariance matrix results in mixed-model equations that are easily modified from round to round for calculation of the logarithm of the likelihood. The residual sum of squares is the same for transformed and untransformed analyses. Most importantly, the logarithm of the determinant of the untransformed coefficient matrix is an easily determined function of the Choleski decomposition of the residual covariance matrix and the determinant of the transformed coefficient matrix. Thus, the logarithm of the likelihood for any combination of covariance matrices can be determined from the transformed equations. Advantages of transformation are 1) the multiple-trait mixed-model equations are easy to set up, 2) the least squares part of the equations does not change from round to round, 3) right-hand sides change from round to round by constant multipliers, and 4) less memory is required. An example showed only a slight advantage of the transformation compared with no transformation in terms of solution time for each round (1 to 5%). PMID:8478285
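
    A small numeric sketch of the transformation step (values are illustrative): premultiplying each animal's trait vector by the inverse Choleski factor of the current residual covariance decorrelates the residuals, and the log-determinant needed for the likelihood falls out of the factor's diagonal.

      import numpy as np

      R = np.array([[4.0, 1.5],        # current residual (co)variance estimate
                    [1.5, 2.0]])
      L = np.linalg.cholesky(R)        # R = L L'
      Linv = np.linalg.inv(L)

      records = np.array([[10.2, 3.1],
                          [ 9.7, 2.8]])          # two animals x two traits
      transformed = records @ Linv.T             # y* = L^{-1} y per record

      # log|R|, needed for the likelihood, follows directly from the factor:
      logdet_R = 2.0 * np.log(np.diag(L)).sum()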

  17. Numerical analysis of bifurcations

    SciTech Connect

    Guckenheimer, J.

    1996-06-01

    This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from greater reliance on the geometric insight coming from dynamical systems theory. © 1996 American Institute of Physics.

  18. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  19. A Numerical Study of Superconducting Cavity Components

    SciTech Connect

    B.C. Yunn; J.J. Bisognano

    1990-09-10

    Computer programs which solve Maxwell's equations in three dimensions are becoming an invaluable tool in the design of RF structures for particle accelerators. In particular, the lack of cylindrical symmetry of superconducting cavities with waveguide couplers demands a 3-D analysis for a reasonable description of a number of important phenomena. A set of codes, collectively known as MAFIA, developed by Weiland and his collaborators, has been used at CEBAF to study its five-cell superconducting accelerating cavities. The magnitude of RF crosstalk between cavities is found to depend critically on the breaking of cylindrical symmetry by the fundamental power couplers. A model of the higher order mode coupler exhibits an unexpected mode which is in good agreement with measurement.

  20. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
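
    The graph-based idea reduces to a few lines with a union-find structure; the similarity test below is a crude placeholder for the edit-distance computation used by the actual algorithms, and the O(n²) pair loop stands in for the blocking/sorting that makes the real codes fast.

      def linkage_clusters(records, similar):
          records = sorted(set(records))           # sort-based exact dedup
          parent = list(range(len(records)))
          def find(i):                             # union-find with path compression
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i
          for i in range(len(records)):
              for j in range(i + 1, len(records)): # blocking would prune this loop
                  if similar(records[i], records[j]):
                      parent[find(j)] = find(i)    # union: link similar records
          clusters = {}
          for i, rec in enumerate(records):
              clusters.setdefault(find(i), []).append(rec)
          return list(clusters.values())

      groups = linkage_clusters(
          ["a. smith", "a. smyth", "b. jones"],
          similar=lambda a, b: sum(x != y for x, y in zip(a, b))
                               + abs(len(a) - len(b)) <= 1)
      # -> [["a. smith", "a. smyth"], ["b. jones"]]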

  1. Classification of gasoline data obtained by gas chromatography using a piecewise alignment algorithm combined with feature selection and principal component analysis

    SciTech Connect

    Pierce, Karisa M.; Hope, Janiece L.; Johnson, Kevin J.; Wright, Bob W.; Synovec, Robert E.

    2005-11-25

    A fast and objective chemometric classification method is developed and applied to the analysis of gas chromatography (GC) data from five commercial gasoline samples. The gasoline samples serve as model mixtures, whereas the focus is on the development and demonstration of the classification method. The method is based on objective retention time alignment (referred to as piecewise alignment) coupled with analysis of variance (ANOVA) feature selection prior to classification by principal component analysis (PCA) using optimal parameters. The degree-of-class-separation is used as a metric to objectively optimize the alignment and feature selection parameters using a suitable training set thereby reducing user subjectivity, as well as to indicate the success of the PCA clustering and classification. The degree-of-class-separation is calculated using Euclidean distances between the PCA scores of a subset of the replicate runs from two of the five fuel types, i.e., the training set. The unaligned training set that was directly submitted to PCA had a low degree-of-class-separation (0.4), and the PCA scores plot for the raw training set combined with the raw test set failed to correctly cluster the five sample types. After submitting the training set to piecewise alignment, the degree-of-class-separation increased (1.2), but when the same alignment parameters were applied to the training set combined with the test set, the scores plot clustering still did not yield five distinct groups. Applying feature selection to the unaligned training set increased the degree-of-class-separation (4.8), but chemical variations were still obscured by retention time variation and when the same feature selection conditions were used for the training set combined with the test set, only one of the five fuels was clustered correctly. However, piecewise alignment coupled with feature selection yielded a reasonably optimal degree-of-class-separation for the training set (9.2), and when the

  2. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multi-spectral optoacoustic data into a set of images representing distribution of each individual chromophoric component while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectrum of the imaged chromophores and contrast agents are known or can be determined using e.g. a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain similar sensitivity to the PCA-ICA approach and have a robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally render improvements in detection sensitivity in experimental optoacoustic measurements.
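
    In the semi-blind setting, where the absorption spectra are known a priori, per-pixel unmixing is a small nonnegative least-squares problem; the sketch below is a generic illustration of that reduction, not the constrained-VCA algorithm itself.

      import numpy as np
      from scipy.optimize import nnls

      def unmix(images, spectra):
          """images: (n_wavelengths, n_pixels) multispectral optoacoustic data;
          spectra: (n_wavelengths, n_components) known absorption spectra."""
          n_comp = spectra.shape[1]
          out = np.empty((n_comp, images.shape[1]))
          for p in range(images.shape[1]):
              out[:, p], _ = nnls(spectra, images[:, p])  # concentrations >= 0
          return out    # one abundance map per chromophore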

  3. Numerical quadrature for slab geometry transport algorithms

    SciTech Connect

    Hennart, J.P.; Valle, E. del

    1995-12-31

    In recent papers, a generalized nodal finite element formalism has been presented for virtually all known linear finite difference approximations to the discrete ordinates equations in slab geometry. For a particular angular direction {mu}, the neutron flux {Phi} is approximated by a piecewise function {Phi}{sub h}, which over each space interval can be polynomial or quasipolynomial. Here we shall restrict ourselves to the polynomial case. Over each space interval, {Phi} is a polynomial of degree k, defined by interpolating parameters that differ between the continuous and discontinuous cases: the angular flux at the left and right ends of the cell and the k-th Legendre moment of {Phi} over the cell considered.

  4. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytical Cracking Process and Large Scale Geophysical Fluids.

    NASA Astrophysics Data System (ADS)

    Martin, R.; Gonzalez Ortiz, A.

    In the industry as well as in the geophysical community, multiphase flows are modelled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally determined at mid-half mesh step from each other following the staggered grid approach. This ensures stability and prevents oscillations in pressure. It allows the treatment of almost all Reynolds number ranges for all speeds and viscosities. The disadvantages appear when we want to treat more complex geometries or if a generalized curvilinear formulation of the conservation equations is considered. Too many interpolations have to be done and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interface. The Rhie and Chow interpolation of the velocities at the finite volume interfaces avoids pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd and 3rd order shock capturing schemes of MUSCL/TVD or Van Leer type, and the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. A pentadiagonal system in 2D or a septadiagonal one in 3D must be solved, but here we have chosen to solve 3 tridiagonal linear systems (the so-called Alternating Direction Implicit algorithm), one in each spatial direction, to reduce the cost of computation. Then a multi-correction of interpolated velocities, pressures and volume fractions of each phase is done in the Cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation. At the end the energy conservation equations are solved. In all this process the
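
    The choice of three tridiagonal solves over a single pentadiagonal (2D) or septadiagonal (3D) system is what makes the ADI step cheap: each directional sweep can use the O(n) Thomas algorithm. A generic sketch of that solver, with illustrative names:

        import numpy as np

        def thomas(a, b, c, d):
            # Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal,
            # d = right-hand side. O(n), as used in each ADI directional sweep.
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):  # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x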

  5. Robustness of Flexible Systems With Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    2000-01-01

    Robustness of flexible systems in the presence of model uncertainties at the component level is considered. Specifically, an approach for formulating robustness of flexible systems in the presence of frequency and damping uncertainties at the component level is presented. The synthesis of the components is based on a modification of a controls-based algorithm for component mode synthesis. The formulation deals first with robustness of synthesized flexible systems. It is then extended to deal with global (non-synthesized) dynamic models with component-level uncertainties by projecting uncertainties from the component level to the system level. A numerical example involving a two-dimensional simulated docking problem is worked out to demonstrate the feasibility of the proposed approach.

  6. Interpolation on the manifold of K component GMMs

    PubMed Central

    Kim, Hyunwoo J.; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C.; Singh, Vikas

    2016-01-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features. (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest. PMID:27042169

  7. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, applications are described, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
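
    For orientation, a minimal generational genetic algorithm over bit strings, with binary tournament selection, one-point crossover and bit-flip mutation; the parameter values are illustrative, not those of the software tool described in the record.

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                              p_cross=0.9, p_mut=0.01):
            # Minimal generational GA: tournament selection, one-point
            # crossover, bit-flip mutation.
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                scored = [(fitness(ind), ind) for ind in pop]
                def select():  # binary tournament
                    return max(random.sample(scored, 2))[1]
                nxt = []
                while len(nxt) < pop_size:
                    p1, p2 = select(), select()
                    if random.random() < p_cross:
                        cut = random.randrange(1, n_bits)
                        p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                    nxt += [[b ^ (random.random() < p_mut) for b in p1],
                            [b ^ (random.random() < p_mut) for b in p2]]
                pop = nxt[:pop_size]
            return max(pop, key=fitness)

        # e.g. maximize the number of ones: genetic_algorithm(sum)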

  8. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    SciTech Connect

    Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.

    2012-07-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
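
    The decomposition idea can be illustrated on axis-aligned boxes: bisect until a box is smaller than a tolerance, then finish it stochastically. This sketch works on a point-membership function rather than the CSG tree itself, so it only mirrors the spirit of the paper's method and carries none of its error-bound machinery; all names are illustrative.

        import random

        def volume(inside, lo, hi, tol, n_mc=1000):
            # Estimate the volume of the region where inside(x, y, z) is True
            # within the box [lo, hi]: bisect the longest axis until a box is
            # smaller than tol, then estimate the leaf stochastically.
            dx, dy, dz = (h - l for l, h in zip(lo, hi))
            box_vol = dx * dy * dz
            if box_vol < tol:  # small leaf: Monte Carlo estimate
                hits = sum(inside(lo[0] + random.random() * dx,
                                  lo[1] + random.random() * dy,
                                  lo[2] + random.random() * dz)
                           for _ in range(n_mc))
                return box_vol * hits / n_mc
            dims = [dx, dy, dz]
            axis = dims.index(max(dims))  # bisect the longest axis
            mid = lo[axis] + dims[axis] / 2.0
            lo_half_hi = list(hi); lo_half_hi[axis] = mid
            hi_half_lo = list(lo); hi_half_lo[axis] = mid
            return (volume(inside, lo, tuple(lo_half_hi), tol, n_mc)
                    + volume(inside, tuple(hi_half_lo), hi, tol, n_mc))

        # e.g. unit sphere in [-1, 1]^3 (true volume 4*pi/3):
        # volume(lambda x, y, z: x*x + y*y + z*z <= 1.0, (-1, -1, -1), (1, 1, 1), 0.05)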

  9. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream by numerical importance, and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
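
    The spectral half of the scheme, a Karhunen-Loeve Transform across bands, is a small eigendecomposition of the inter-band covariance; each decorrelated eigenband would then go to the EZW coder. A sketch with assumed array shapes and illustrative names:

        import numpy as np

        def spectral_klt(cube):
            # cube: (n_bands, height, width). Decorrelate the spectral bands
            # with the KLT; each output eigenband would then be EZW-coded.
            n_bands = cube.shape[0]
            X = cube.reshape(n_bands, -1).astype(float)
            X -= X.mean(axis=1, keepdims=True)
            cov = X @ X.T / X.shape[1]          # inter-band covariance
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]   # most energetic band first
            klt = eigvecs[:, order].T @ X
            return klt.reshape(cube.shape), eigvecs[:, order]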

  10. Note on symmetric BCJ numerator

    NASA Astrophysics Data System (ADS)

    Fu, Chih-Hao; Du, Yi-Jian; Feng, Bo

    2014-08-01

    We present an algorithm that leads to BCJ numerators satisfying manifestly the three properties proposed by Broedel and Carrasco in [42]. We explicitly calculate the numerators at 4, 5 and 6-points and show that the relabeling property is generically satisfied.

  11. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.

  12. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.

  13. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  14. Applications of a generalized pressure correction algorithm for flows in complicated geometries

    NASA Astrophysics Data System (ADS)

    Shyy, W.; Braaten, M. E.

    An overview is given of recent progress in developing a unified numerical algorithm capable of solving flow over a wide range of Mach and Reynolds numbers in complex geometries. The algorithm is based on the pressure correction method, combined treatment of the Cartesian and contravariant velocity components on arbitrary coordinates, and second-order accurate discretization. A number of two- and three-dimensional flow problems including the effects of electric currents, turbulence, combustion, multiple phases, and compressibility are presented to demonstrate the capability of the present algorithm. Some related technical issues, such as the skewness of the grid distribution and the promise of parallel computation, are also addressed.

  15. A Hybrid Shortest Path Algorithm for Navigation System

    NASA Astrophysics Data System (ADS)

    Cho, Hsun-Jung; Lan, Chien-Lun

    2007-12-01

    Combined with Geographic Information System (GIS) and Global Positioning System (GPS) technologies, the vehicle navigation system has become a quite popular product in daily life. A key component of the navigation system is the shortest path algorithm. Navigation in the real world must face a network consisting of tens of thousands of nodes and links, and even more. Under the limited computation capability of vehicle navigation equipment, it is difficult to satisfy the real-time response requirement that users expect. Hence, this study focused on a shortest path algorithm that enhances computation speed with less memory requirement. Several well-known algorithms such as Dijkstra, A* and hierarchical concepts were integrated to build hybrid algorithms that reduce the search space and improve search speed. Numerical examples were conducted on the Taiwan highway network, which consists of more than four hundred thousand links and nearly three hundred thousand nodes. This real network was divided into two connected sub-networks (layers). The upper layer is constructed from freeways and expressways; the lower layer is constructed from local networks. Test origin-destination pairs were chosen randomly and divided into three distance categories: short, medium and long distances. The outcome is evaluated by actual length and travel time. The numerical example reveals that the hybrid algorithm proposed by this research can be tens of thousands of times faster than the traditional Dijkstra algorithm; the memory requirement of the hybrid algorithm is also much smaller than that of the traditional algorithm. This outcome shows that the proposed algorithm would have an advantage in vehicle navigation systems.
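
    Of the ingredients named above, A* is the easiest to show compactly: Dijkstra's search biased by an admissible straight-line heuristic. The graph encoding and names below are illustrative, and the hierarchical layering that gives the paper its largest gains is not shown.

        import heapq, math

        def a_star(graph, coords, start, goal):
            # graph: {node: [(neighbor, length), ...]}; coords: {node: (x, y)}.
            # Coordinates must share units with link lengths so the straight-line
            # heuristic stays admissible. Returns the shortest path length.
            def h(n):
                (x1, y1), (x2, y2) = coords[n], coords[goal]
                return math.hypot(x1 - x2, y1 - y2)
            open_heap = [(h(start), 0.0, start)]
            best = {start: 0.0}
            while open_heap:
                f, g, node = heapq.heappop(open_heap)
                if node == goal:
                    return g
                if g > best.get(node, math.inf):
                    continue  # stale heap entry
                for nxt, w in graph.get(node, ()):
                    g2 = g + w
                    if g2 < best.get(nxt, math.inf):
                        best[nxt] = g2
                        heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt))
            return math.inf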

  16. Component model reduction via the projection and assembly method

    NASA Technical Reports Server (NTRS)

    Bernard, Douglas E.

    1989-01-01

    The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced order component models to meet system level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full order system model, performing model reduction at the system level using system level requirements, and then projecting the desired modes onto the components for component level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.

  17. Study on the variable cycle engine modeling techniques based on the component method

    NASA Astrophysics Data System (ADS)

    Zhang, Lihua; Xue, Hui; Bao, Yuhai; Li, Jijun; Yan, Lan

    2016-01-01

    Based on the structural platform of the gas turbine engine, the components of a variable cycle engine were simulated using the component method. The mathematical model of nonlinear equations corresponding to each component of the gas turbine engine was established. Based on Matlab programming, the nonlinear equations were solved using a Newton-Raphson steady-state algorithm, and the performance of the engine components was calculated. The numerical simulation results showed that the model built can describe the basic performance of the gas turbine engine, which verifies the validity of the model.
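
    The Newton-Raphson step for the component-matching equations can be sketched generically with a finite-difference Jacobian; `residual` stands in for the assembled nonlinear component equations (mass flow, work and speed balances), and all names are illustrative.

        import numpy as np

        def newton_raphson(residual, x0, tol=1e-8, max_iter=50, eps=1e-6):
            # Solve residual(x) = 0 by Newton-Raphson with a one-sided
            # finite-difference Jacobian; assumes as many equations as unknowns.
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                f = np.asarray(residual(x))
                if np.linalg.norm(f) < tol:
                    return x
                J = np.empty((f.size, x.size))
                for j in range(x.size):
                    xp = x.copy()
                    xp[j] += eps
                    J[:, j] = (np.asarray(residual(xp)) - f) / eps
                x = x - np.linalg.solve(J, f)
            raise RuntimeError("Newton-Raphson did not converge")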

  18. Computational Algorithms for Device-Circuit Coupling

    SciTech Connect

    KEITER, ERIC R.; HUTCHINSON, SCOTT A.; HOEKSTRA, ROBERT J.; RANKIN, ERIC LAMONT; RUSSO, THOMAS V.; WATERS, LON J.

    2003-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equation (PDE) device model, while optimizing the numerics for both.

  19. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  20. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    Der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  1. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images: one containing gray level information and the other with enhanced high frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
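
    The final combination step is a one-liner; the convex weighting below is an assumption, since the abstract states only that the final image is a linear combination of the two reconstructions.

        import numpy as np

        def combine(img_gray, img_highfreq, alpha):
            # Blend a gray-level reconstruction with a high-frequency-enhanced
            # one; alpha in [0, 1] trades smoothness against edge detail.
            return (1.0 - alpha) * img_gray + alpha * img_highfreq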

  2. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  3. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  4. Finding apparent horizons in numerical relativity

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    1996-10-01

    We review various algorithms for finding apparent horizons in 3+1 numerical relativity. We then focus on one particular algorithm, in which we pose the apparent horizon equation H ≡ ∇_i n^i + K_{ij} n^i n^j - K = 0 as a nonlinear elliptic (boundary-value) PDE on angular-coordinate space for the horizon shape function r=h(θ,φ), finite difference this PDE, and use Newton's method or a variant to solve the finite difference equations. We describe a method for computing the Jacobian matrix of the finite differenced H(h) function by symbolically differentiating the finite difference equations, giving the Jacobian elements directly in terms of the finite difference molecule coefficients used in computing H(h). Assuming the finite differencing scheme commutes with linearization, we show how the Jacobian elements may be computed by first linearizing the continuum H(h) equations, then finite differencing the linearized continuum equations. (This is essentially just the ``Jacobian part'' of the Newton-Kantorovich method for solving nonlinear PDEs.) We tabulate the resulting Jacobian coefficients for a number of different H(h) and Jacobian computation schemes. We find this symbolic differentiation method of computing the Jacobian to be much more efficient than the usual numerical-perturbation method, and also much easier to implement than is commonly thought. When solving the discrete H(h)=0 equations, we find that Newton's method generally shows robust convergence. However, we find that it has a small (poor) radius of convergence if the initial guess for the horizon position contains significant high-spatial-frequency error components, i.e., angular Fourier components varying as (say) cos mθ with m ≳ 8. (Such components occur naturally if spacetime contains significant amounts of high-frequency gravitational radiation.) We show that this poor convergence behavior is not an artifact of insufficient resolution in the finite difference grid; rather, it appears to be caused

  5. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
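
    In practice this kind of kurtosis-driven separation can be prototyped with an off-the-shelf FastICA, e.g. scikit-learn's, run in deflation mode so components are extracted one at a time; the map shapes and names below are illustrative, not the authors' pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        def separate(X, n_components=3, seed=0):
            # X: (n_pixels, n_channels) maps at 100, 143 and 217 GHz. Deflation
            # extracts one independent component at a time; a strongly
            # non-Gaussian CO-like component tends to emerge first.
            ica = FastICA(n_components=n_components, algorithm="deflation",
                          random_state=seed)
            sources = ica.fit_transform(X)   # (n_pixels, n_components)
            return sources, ica.mixing_      # mixing_: (n_channels, n_components)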

  6. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  7. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves a set of coupled partial differential equations by numerical integration. Adapted to run on a hypercube computer, the algorithm separates the problem into smaller problems solved concurrently. The increase in computing speed with concurrent processing over that achievable with conventional sequential processing is appreciable, especially for large problems.

  8. Cold-standby redundancy allocation problem with degrading components

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Xiong, Junlin; Xie, Min

    2015-11-01

    Components in cold-standby state are usually assumed to be as good as new when they are activated. However, even in a standby environment, the components will suffer from performance degradation. This article presents a study of a redundancy allocation problem (RAP) for cold-standby systems with degrading components. The objective of the RAP is to determine an optimal design configuration of components to maximize system reliability subject to system resource constraints (e.g. cost, weight). As in most cases, it is not possible to obtain a closed-form expression for this problem, and hence, an approximated objective function is presented. A genetic algorithm with dual mutation is developed to solve such a constrained optimization problem. Finally, a numerical example is given to illustrate the proposed solution methodology.

  9. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  10. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  11. Numerical anomalies mimicking physical effects

    NASA Astrophysics Data System (ADS)

    Menikoff, R.

    Numerical simulations of flows with shock waves typically use finite-difference shock-capturing algorithms. These algorithms give a shock a numerical width in order to generate the entropy increase that must occur across a shock wave. For algorithms in conservation form, steady-state shock waves are insensitive to the numerical dissipation because of the Hugoniot jump conditions. However, localized numerical errors occur when shock waves interact. Examples are the 'excess wall heating' in the Noh problem (shock reflected from rigid wall), errors when a shock impacts a material interface or an abrupt change in mesh spacing, and the start-up error from initializing a shock as a discontinuity. This class of anomalies can be explained by the entropy generation that occurs in the transient flow when a shock profile is formed or changed. The entropy error is localized spatially but under mesh refinement does not decrease in magnitude. Similar effects have been observed in shock tube experiments with partly dispersed shock waves. In this case, the shock has a physical width due to a relaxation process. An entropy anomaly from a transient shock interaction is inherent in the structure of the conservation equations for fluid flow. The anomaly can be expected to occur whenever heat conduction can be neglected and a shock wave has a non-zero width, whether the width is physical or numerical. Thus, the numerical anomaly from an artificial shock width mimics a real physical effect.

  12. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
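
    For reference, the three rules named in the abstract in executable form (n is the number of subintervals; Simpson's rule requires n to be even):

        def midpoint(f, a, b, n):
            h = (b - a) / n
            return h * sum(f(a + (i + 0.5) * h) for i in range(n))

        def trapezoidal(f, a, b, n):
            h = (b - a) / n
            return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

        def simpson(f, a, b, n):  # n must be even
            h = (b - a) / n
            s = f(a) + f(b)
            s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
            s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
            return s * h / 3

        # e.g. all three approximate the integral of x**2 over [0, 1] (exactly 1/3):
        # simpson(lambda x: x * x, 0.0, 1.0, 10)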

  13. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  14. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  15. Dynamics of Numerics and CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    This lecture attempts to illustrate the basic ideas of how the recent advances in nonlinear dynamical systems theory (dynamics) can provide new insights into the understanding of numerical algorithms used in solving nonlinear differential equations (DEs). Examples will be given of the use of dynamics to explain unusual phenomena that occur in numerics. The inadequacy of the use of linearized analysis for the understanding of long time behavior of nonlinear problems will be illustrated, and the role of dynamics in studying the nonlinear stability, accuracy, convergence property and efficiency of using time-dependent approaches to obtaining steady-state numerical solutions in computational fluid dynamics (CFD) will briefly be explained.

  16. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  17. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  18. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. I - The dynamics of time discretization and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1991-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  19. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. Part 1: The ODE connection and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1990-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  20. COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE

    SciTech Connect

    Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu, Hao

    2011-08-10

    Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the

  1. Fast algorithms for transport models

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).

  2. Non-iterative conductivity reconstruction algorithm using projected current density in MREIT

    NASA Astrophysics Data System (ADS)

    Nam, Hyun Soo; Park, Chunjae; Kwon, Oh In

    2008-12-01

    Magnetic resonance electrical impedance tomography (MREIT) visualizes the current density and the conductivity distribution in an electrically conducting object Ω using magnetic flux data measured by an MRI scanner. MREIT uses only one component Bz of the magnetic flux density B = (Bx, By, Bz) generated by a current injected into the object. In this paper, we propose a fast and direct non-iterative algorithm to reconstruct the internal conductivity distribution in Ω from the measured Bz data. To develop the algorithm, we investigate the relation between the isotropic conductivity and the projected current density JP, the component of J uniquely determined by the map from the current J to the measured Bz data. Three-dimensional numerical simulations and phantom experiments are presented to show the feasibility of the proposed method by comparison with the conventional iterative harmonic Bz algorithm.

  3. Numerical inversion of finite Toeplitz matrices and vector Toeplitz matrices

    NASA Technical Reports Server (NTRS)

    Bareiss, E. H.

    1969-01-01

    Numerical technique increases the efficiencies of numerical methods involving Toeplitz matrices by reducing the number of multiplications required to invert an N-order Toeplitz matrix from order N-cubed to order N-squared. Some efficient algorithms are given.
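
    The same N-squared behavior is available off the shelf today: SciPy's Toeplitz solver uses a Levinson-type recursion that, like the Bareiss scheme in this record, exploits the constant-diagonal structure instead of general elimination. The matrix values below are illustrative.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        # O(N^2) solve of T x = b for a Toeplitz T, versus O(N^3) for a
        # generic dense solve.
        c = np.array([4.0, 1.0, 0.5, 0.2])   # first column of T
        r = np.array([4.0, 0.8, 0.3, 0.1])   # first row of T
        b = np.ones(4)
        x = solve_toeplitz((c, r), b)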

  4. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The processing of information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic numerical algorithms in cases where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling. This concept results from the integration of neural networks and parameter optimization methods, and makes it possible to avoid the necessity of arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  5. Force-Control Algorithm for Surface Sampling

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh

    2008-01-01

    A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.

  6. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
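
    A minimal instance of the perturb-then-majorize idea for an L1 penalty in penalized least squares: |b| is smoothed to sqrt(b^2 + eps), the smooth surrogate is majorized by a quadratic at the current iterate, and each MM step then solves a ridge-type linear system. This sketches one member of the broad class the paper treats, with illustrative names; it is not the paper's full framework or its sandwich standard errors.

        import numpy as np

        def mm_l1(X, y, lam, eps=1e-8, n_iter=200):
            # MM for L1-penalized least squares: each iteration solves
            # (X'X + diag(lam / sqrt(beta^2 + eps))) beta_new = X'y.
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            XtX, Xty = X.T @ X, X.T @ y
            for _ in range(n_iter):
                w = lam / np.sqrt(beta**2 + eps)   # majorizer curvature
                beta_new = np.linalg.solve(XtX + np.diag(w), Xty)
                if np.max(np.abs(beta_new - beta)) < 1e-10:
                    break
                beta = beta_new
            return beta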

  7. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Yoo, Seokwon

    2014-12-01

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the "genetic parameter vector" of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem, the one-bit oracle decision problem, often called the Deutsch problem. Through numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch algorithm.

  8. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
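
    For context, standard kernel PCA with an RBF kernel, the procedure the record's Gram-Schmidt construction approximates (the approximation itself is not reproduced here); the shapes and kernel width are illustrative.

        import numpy as np

        def kernel_pca(X, n_components, gamma=1.0):
            # Kernel PCA: build the RBF Gram matrix, center it in feature
            # space, eigendecompose, and return the projected data.
            sq = np.sum(X**2, axis=1)
            K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
            n = K.shape[0]
            one = np.ones((n, n)) / n
            Kc = K - one @ K - K @ one + one @ K @ one   # double centering
            eigvals, eigvecs = np.linalg.eigh(Kc)
            idx = np.argsort(eigvals)[::-1][:n_components]
            alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
            return Kc @ alphas   # projections onto the leading components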

  9. Brain components

    MedlinePlus Videos and Cool Tools

    The brain is composed of more than a thousand billion neurons. Specific groups of them, working in concert, provide ... of information. The 3 major components of the brain are the cerebrum, cerebellum, and brain stem. The ...

  10. Probability tree algorithm for general diffusion processes

    NASA Astrophysics Data System (ADS)

    Ingber, Lester; Chen, Colleen; Mondescu, Radu Paul; Muzzall, David; Renedo, Marco

    2001-11-01

    Motivated by path-integral numerical solutions of diffusion processes, PATHINT, we present a tree algorithm, PATHTREE, which permits extremely fast accurate computation of probability distributions of a large class of general nonlinear diffusion processes.

  11. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. F_i(x) - t ≤ 0 for all i is examined. An active set strategy is designed that divides the functions into three classes: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
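
    For affine F_i, the quoted epigraph representation is exactly a linear program, which makes the formulation easy to see in code; the paper itself treats general nonlinear F_i with the active-set and trust-region machinery described above, and the names here are illustrative.

        import numpy as np
        from scipy.optimize import linprog

        def minimax_affine(A, b):
            # Minimize max_i (a_i^T x + b_i) via the epigraph form:
            # min t  s.t.  a_i^T x - t <= -b_i.
            m, n = A.shape
            c = np.r_[np.zeros(n), 1.0]        # objective: minimize t
            A_ub = np.c_[A, -np.ones(m)]       # a_i^T x - t <= -b_i
            res = linprog(c, A_ub=A_ub, b_ub=-b,
                          bounds=[(None, None)] * (n + 1))
            return res.x[:n], res.x[n]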

  12. Ab initio two-component Ehrenfest dynamics

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi; Goings, Joshua J.; Liu, Hongbin; Lingerfelt, David B.; Li, Xiaosong

    2015-09-01

    We present an ab initio two-component Ehrenfest-based mixed quantum/classical molecular dynamics method to describe the effect of nuclear motion on the electron spin dynamics (and vice versa) in molecular systems. The two-component time-dependent non-collinear density functional theory is used for the propagation of spin-polarized electrons while the nuclei are treated classically. We use a three-time-step algorithm for the numerical integration of the coupled equations of motion, namely, the velocity Verlet for nuclear motion, the nuclear-position-dependent midpoint Fock update, and the modified midpoint and unitary transformation method for electronic propagation. As a test case, the method is applied to the dissociation of H2 and O2. In contrast to conventional Ehrenfest dynamics, this two-component approach provides a first principles description of the dynamics of non-collinear (e.g., spin-frustrated) magnetic materials, as well as the proper description of spin-state crossover, spin-rotation, and spin-flip dynamics by relaxing the constraint on spin configuration. This method also holds potential for applications to spin transport in molecular or even nanoscale magnetic devices.

  13. Ab initio two-component Ehrenfest dynamics

    SciTech Connect

    Ding, Feizhi; Goings, Joshua J.; Liu, Hongbin; Lingerfelt, David B.; Li, Xiaosong

    2015-09-21

    We present an ab initio two-component Ehrenfest-based mixed quantum/classical molecular dynamics method to describe the effect of nuclear motion on the electron spin dynamics (and vice versa) in molecular systems. The two-component time-dependent non-collinear density functional theory is used for the propagation of spin-polarized electrons while the nuclei are treated classically. We use a three-time-step algorithm for the numerical integration of the coupled equations of motion, namely, the velocity Verlet for nuclear motion, the nuclear-position-dependent midpoint Fock update, and the modified midpoint and unitary transformation method for electronic propagation. As a test case, the method is applied to the dissociation of H{sub 2} and O{sub 2}. In contrast to conventional Ehrenfest dynamics, this two-component approach provides a first principles description of the dynamics of non-collinear (e.g., spin-frustrated) magnetic materials, as well as the proper description of spin-state crossover, spin-rotation, and spin-flip dynamics by relaxing the constraint on spin configuration. This method also holds potential for applications to spin transport in molecular or even nanoscale magnetic devices.

  14. Numerical studies of constraints and gravitational wave extraction in general relativity

    NASA Astrophysics Data System (ADS)

    Fiske, David Robert

    Within classical physics, general relativity is the theory of gravity. Its equations are non-linear partial differential equations for which relatively few closed form solutions are known. Because of the growing observational need for solutions representing gravitational waves from astrophysically plausible sources, a subfield of general relativity, numerical relativity, has emerged with the goal of generating numerical solutions to the Einstein equations. This dissertation focuses on two fundamental problems in modern numerical relativity: (1) creating a theoretical treatment of the constraints in the presence of constraint-violating numerical errors, and (2) designing and implementing an algorithm to compute the spherical harmonic decomposition of radiation quantities for comparison with observation. On the issue of the constraints, I present a novel and generic procedure for incorporating the constraints into the equations of motion of the theory in a way designed to make the constraint hypersurface an attractor of the evolution. In principle, the prescription generates non-linear corrections for the Einstein equations. The dissertation presents numerical evidence that the correction terms do work in the case of two formulations of the Maxwell equations and two formulations of the linearized Einstein equations. On the issue of radiation extraction, I provide the first in-depth analysis of a novel algorithm, due originally to Misner, for computing spherical harmonic components on a cubic grid. I compute explicitly how the truncation error in the algorithm depends on its various parameters, and I also provide a detailed analysis showing how to implement the method on grids in which explicit symmetries are enforced via boundary conditions. Finally, I verify these error estimates and symmetry arguments with a numerical study using a solution of the linearized Einstein equations known as a Teukolsky wave. The algorithm performs well and the estimates prove true both

  15. Spurious Numerical Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1995-01-01

    Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.

  16. Statistical algorithms for a comprehensive test ban treaty discrimination framework

    SciTech Connect

    Foote, N.D.; Anderson, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.; Hagedorn, D.N.

    1996-10-01

    Seismic discrimination is the process of identifying a candidate seismic event as an earthquake or explosion using information from seismic waveform features (seismic discriminants). In the CTBT setting, low energy seismic activity must be detected and identified. A defensible CTBT discrimination decision requires an understanding of false-negative (declaring an event to be an earthquake given it is an explosion) and false-positive (declaring an event to be an explosion given it is an earthquake) rates. These rates are derived from a statistical discrimination framework. A discrimination framework can be as simple as a single statistical algorithm or it can be a mathematical construct that integrates many different types of statistical algorithms and CTBT technologies. In either case, the result is the identification of an event and the numerical assessment of the accuracy of an identification, that is, false-negative and false-positive rates. In Anderson et al., eight statistical discrimination algorithms are evaluated relative to their ability to give results that effectively contribute to a decision process and to be interpretable with physical (seismic) theory. These algorithms can be discrimination frameworks individually or components of a larger framework. The eight algorithms are linear discrimination (LDA), quadratic discrimination (QDA), variably regularized discrimination (VRDA), flexible discrimination (FDA), logistic discrimination, K-th nearest neighbor (KNN), kernel discrimination, and classification and regression trees (CART). In this report, the performance of these eight algorithms, as applied to regional seismic data, is documented. Based on the findings in Anderson et al. and this analysis, CART is an appropriate algorithm for an automated CTBT setting.
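
    A hedged sketch of the decision-rate bookkeeping is shown below: a CART-style tree (scikit-learn's DecisionTreeClassifier standing in for CART) is trained on synthetic two-feature "discriminant" data, and the false-negative and false-positive rates are estimated on held-out events. The features, class means, and sample sizes are all invented for illustration.

```python
# Estimating false-negative/false-positive rates for a CART-style
# discriminator on synthetic "seismic discriminant" features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
earthquakes = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
explosions  = rng.normal(loc=[1.5, 1.0], scale=1.0, size=(n, 2))
X = np.vstack([earthquakes, explosions])
y = np.array([0] * n + [1] * n)          # 0 = earthquake, 1 = explosion

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
cart = DecisionTreeClassifier(max_depth=4).fit(Xtr, ytr)
pred = cart.predict(Xte)

# False negative: explosion declared an earthquake; false positive: the reverse.
fn_rate = np.mean(pred[yte == 1] == 0)
fp_rate = np.mean(pred[yte == 0] == 1)
print(f"false-negative rate: {fn_rate:.3f}, false-positive rate: {fp_rate:.3f}")
```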

  17. Accelerating Numerical Calculation on the Cray XMT

    SciTech Connect

    Scherrer, Chad; Shippert, Timothy R.; Marquez, Andres

    2009-05-25

    The Cray XMT provides hardware support for parallel algorithms that would be communication- or memory-bound on other machines. Unfortunately, even if an algorithm meets these criteria, performance suffers if the algorithm is too numerically intensive. We present a lookup-based approach that achieves a significant performance advantage over explicit calculation. We describe an approach to balancing memory bandwidth against on-chip floating point capabilities, leading to further speedup. Finally, we provide table lookup algorithms for a number of common functions.
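
    The trade-off can be illustrated with a toy lookup scheme: precompute an expensive function on a grid once, then replace each evaluation by a table lookup with linear interpolation. This is only a generic illustration of the idea, not the Cray XMT code.

```python
# Trading computation for memory: table lookup versus explicit calculation.
import numpy as np

xs = np.linspace(0.0, 10.0, 4096)          # table abscissae
table = np.exp(-xs) * np.sin(3.0 * xs)     # "expensive" function, precomputed

def f_lookup(x):
    # np.interp performs a binary-search lookup plus linear interpolation.
    return np.interp(x, xs, table)

x = np.random.default_rng(1).uniform(0.0, 10.0, size=1_000_000)
direct = np.exp(-x) * np.sin(3.0 * x)      # explicit calculation
approx = f_lookup(x)                       # table lookup
print("max abs error:", np.max(np.abs(direct - approx)))
```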

  18. Sequential and Parallel Algorithms for Spherical Interpolation

    NASA Astrophysics Data System (ADS)

    De Rossi, Alessandra

    2007-09-01

    Given a large set of scattered points on a sphere and their associated real values, we analyze sequential and parallel algorithms for the construction of a function defined on the sphere satisfying the interpolation conditions. The algorithms we implemented are based on a local interpolation method using spherical radial basis functions and the Inverse Distance Weighted method. Several numerical results show accuracy and efficiency of the algorithms.
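
    A minimal sequential sketch of the Inverse Distance Weighted method on the sphere, using great-circle distance as the metric, might look as follows; the test function and sample layout are assumptions, and the spherical radial basis function variant and the parallel versions are not shown.

```python
# Sequential Inverse Distance Weighted interpolation on the sphere.
import numpy as np

def great_circle(lon1, lat1, lon2, lat2):
    # central angle between points given in radians
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

def idw_sphere(lon, lat, data_lon, data_lat, values, power=2.0):
    d = great_circle(lon, lat, data_lon, data_lat)
    if np.any(d < 1e-12):                 # query coincides with a data point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

rng = np.random.default_rng(2)
data_lon = rng.uniform(-np.pi, np.pi, 200)
data_lat = np.arcsin(rng.uniform(-1, 1, 200))   # uniform on the sphere
values = np.cos(data_lat) * np.sin(data_lon)    # samples of a test function
print(idw_sphere(0.3, 0.5, data_lon, data_lat, values))
```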

  19. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands, and they have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
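
    The cross-checking practice the library encourages can be illustrated generically (without the Cuba interfaces themselves): estimate the same integral with two unrelated methods and compare the results within their error estimates.

```python
# Cross-checking one integral with two independent methods.
import numpy as np
from scipy import integrate

f = lambda x, y: np.exp(-(x**2 + y**2))            # integrand on [0,1]^2

quad_val, quad_err = integrate.nquad(f, [[0, 1], [0, 1]])

rng = np.random.default_rng(3)
pts = rng.uniform(size=(200_000, 2))               # plain Monte Carlo
samples = f(pts[:, 0], pts[:, 1])
mc_val = samples.mean()
mc_err = samples.std(ddof=1) / np.sqrt(len(samples))

print(f"nquad: {quad_val:.6f} +/- {quad_err:.1e}")
print(f"MC:    {mc_val:.6f} +/- {mc_err:.1e}")
```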

  20. MFIX documentation numerical technique

    SciTech Connect

    Syamlal, M.

    1998-01-01

    MFIX (Multiphase Flow with Interphase eXchanges) is a general-purpose hydrodynamic model for describing chemical reactions and heat transfer in dense or dilute fluid-solids flows, which typically occur in energy conversion and chemical processing reactors. The calculations give time-dependent information on pressure, temperature, composition, and velocity distributions in the reactors. The theoretical basis of the calculations is described in the MFIX Theory Guide. Installation of the code, setting up of a run, and post-processing of results are described in the MFIX User's Manual. Work was started in April 1996 to increase the execution speed and accuracy of the code, which has resulted in MFIX 2.0. To improve the speed of the code, the old algorithm was replaced by a more implicit algorithm. In the test cases conducted, the new version runs 3 to 30 times faster than the old version. To increase the accuracy of the computations, second order accurate discretization schemes were included in MFIX 2.0. Bubbling fluidized bed simulations conducted with a second order scheme show that the predicted bubble shape is rounded, unlike the (unphysical) pointed shape predicted by the first order upwind scheme. This report describes the numerical technique used in MFIX 2.0.

  1. Semi-blind signal extraction for communication signals by combining independent component analysis and spatial constraints.

    PubMed

    Wang, Xiang; Huang, Zhitao; Zhou, Yiyu

    2012-01-01

    Signal of interest (SOI) extraction is a vital issue in communication signal processing. In this paper, we propose two novel iterative algorithms for extracting SOIs from instantaneous mixtures, which incorporate the spatial constraints corresponding to the Directions of Arrival (DOAs) of the SOIs as a priori information into the constrained Independent Component Analysis (cICA) framework. The first algorithm utilizes the spatial constraint to form a new constrained optimization problem under the previous cICA framework, which requires various user parameters, i.e., a Lagrange parameter and a threshold measuring the accuracy of the spatial constraint, while the second algorithm incorporates the spatial constraints to select a specific initialization of the extracting vectors. The major difference between the two novel algorithms is that the former incorporates the prior information into the learning process of the iterative algorithm and the latter utilizes the prior information to select the specific initialization vector. Therefore, no extra parameters are necessary in the learning process, which makes the algorithm simpler and more reliable and helps to improve the speed of extraction. Meanwhile, the convergence condition for the spatial constraints is analyzed. Compared with conventional techniques such as MVDR, numerical simulation results demonstrate the effectiveness, robustness and higher performance of the proposed algorithms. PMID:23012531
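
    The second algorithm's idea, i.e., turning the spatial prior into the initialization of the extraction vector, can be sketched as follows for a real-valued toy mixture (actual communication signals would be complex baseband, and the array response vectors here are invented): the known SOI array response is whitened and used as the starting point of a one-unit fixed-point ICA iteration.

```python
# Spatial-prior initialization of a one-unit fixed-point ICA extraction.
import numpy as np

rng = np.random.default_rng(8)
n = 5000
sources = np.vstack([np.sign(rng.normal(size=n)),   # BPSK-like SOI
                     rng.laplace(size=n)])          # interferer
a_soi = np.array([1.0, 0.8, 0.5, 0.2])              # assumed SOI array response
a_int = np.array([0.2, -0.5, 0.9, 1.0])
X = np.outer(a_soi, sources[0]) + np.outer(a_int, sources[1])
X += 0.01 * rng.normal(size=X.shape)

# whiten the sensor data
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
V = E @ np.diag(d**-0.5) @ E.T
Z = V @ X

# initialize from the spatial prior, then run fixed-point iterations
w = V @ a_soi
w /= np.linalg.norm(w)
for _ in range(50):
    wz = w @ Z
    w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz)**2).mean() * w
    w = w_new / np.linalg.norm(w_new)
s_hat = w @ Z
print("correlation with SOI:", round(abs(np.corrcoef(s_hat, sources[0])[0, 1]), 3))
```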

  2. Semi-Blind Signal Extraction for Communication Signals by Combining Independent Component Analysis and Spatial Constraints

    PubMed Central

    Wang, Xiang; Huang, Zhitao; Zhou, Yiyu

    2012-01-01

    Signal of interest (SOI) extraction is a vital issue in communication signal processing. In this paper, we propose two novel iterative algorithms for extracting SOIs from instantaneous mixtures, which incorporate the spatial constraints corresponding to the Directions of Arrival (DOAs) of the SOIs as a priori information into the constrained Independent Component Analysis (cICA) framework. The first algorithm utilizes the spatial constraint to form a new constrained optimization problem under the previous cICA framework, which requires various user parameters, i.e., a Lagrange parameter and a threshold measuring the accuracy of the spatial constraint, while the second algorithm incorporates the spatial constraints to select a specific initialization of the extracting vectors. The major difference between the two novel algorithms is that the former incorporates the prior information into the learning process of the iterative algorithm and the latter utilizes the prior information to select the specific initialization vector. Therefore, no extra parameters are necessary in the learning process, which makes the algorithm simpler and more reliable and helps to improve the speed of extraction. Meanwhile, the convergence condition for the spatial constraints is analyzed. Compared with conventional techniques such as MVDR, numerical simulation results demonstrate the effectiveness, robustness and higher performance of the proposed algorithms. PMID:23012531

  3. Scientific Software Component Technology

    SciTech Connect

    Kohn, S.; Dykman, N.; Kumfert, G.; Smolinski, B.

    2000-02-16

    We are developing new software component technology for high-performance parallel scientific computing to address issues of complexity, re-use, and interoperability for laboratory software. Component technology enables cross-project code re-use, reduces software development costs, and provides additional simulation capabilities for massively parallel laboratory application codes. The success of our approach will be measured by its impact on DOE mathematical and scientific software efforts. Thus, we are collaborating closely with library developers and application scientists in the Common Component Architecture forum, the Equation Solver Interface forum, and other DOE mathematical software groups to gather requirements, write and adopt a variety of design specifications, and develop demonstration projects to validate our approach. Numerical simulation is essential to the science mission at the laboratory. However, it is becoming increasingly difficult to manage the complexity of modern simulation software. Computational scientists develop complex, three-dimensional, massively parallel, full-physics simulations that require the integration of diverse software packages written by outside development teams. Currently, the integration of a new software package, such as a new linear solver library, can require several months of effort. Current industry component technologies such as CORBA, JavaBeans, and COM have all been used successfully in the business domain to reduce software development costs and increase software quality. However, these existing industry component infrastructures will not scale to support massively parallel applications in science and engineering. In particular, they do not address issues related to high-performance parallel computing on ASCI-class machines, such as fast in-process connections between components, language interoperability for scientific languages such as Fortran, parallel data redistribution between components, and massively

  4. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi; Vanrosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.

  5. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi Henderson; Van Rosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two- and three-dimensional model problems are presented, together with a two level analysis explaining these results.

  6. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  7. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E.; Bohm, A.P.W.

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id, which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of the algorithms at the dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine-level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature, as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine with distinctive computational characteristics: the Fast Fourier Transform, whose characteristics are computational parallelism and data dependences between the butterfly shuffles.

  8. Improved piecewise orthogonal signal correction algorithm.

    PubMed

    Feudale, Robert N; Tan, Huwei; Brown, Steven D

    2003-10-01

    Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It is demonstrated how the mathematics of this updated OSC algorithm was derived from the previous version and why some OSC versions may not be as appropriate for use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746
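
    For orientation, a sketch of one round of classical OSC deflation is given below (this is the conventional algorithm the paper improves on, not the updated piecewise version): the leading score of X is orthogonalized against the response y so that the removed component carries no y-relevant variation.

```python
# One round of classical orthogonal signal correction (OSC) deflation.
import numpy as np

def osc_one_component(X, y):
    # first principal component score of X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    t = U[:, 0] * s[0]
    # orthogonalize the score against y so the removed part is y-irrelevant
    t_orth = t - (y @ t) / (y @ y) * y
    # loading for the orthogonal component, then deflate X
    p = X.T @ t_orth / (t_orth @ t_orth)
    return X - np.outer(t_orth, p), t_orth

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 200))        # e.g., 50 spectra, 200 channels
y = rng.normal(size=50)               # response values
X_corr, t_orth = osc_one_component(X, y)
print("y . t_orth =", float(y @ t_orth))   # ~0: removed part orthogonal to y
```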

  9. An improved algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.

    1996-02-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency, and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In a recent paper published by the author, a new algorithm was proposed for the transformation of geocentric to geodetic coordinates. The new algorithm was tested at the Visual Systems Laboratory at the Institute for Simulation and Training, the University of Central Florida, and found to be 30 percent faster than the best previously published algorithm. In this paper further improvements are made in terms of efficiency. For completeness and to make this paper more readable, it was decided to revise the previous paper and to publish it as a new report. The introduction describes the improvements in more detail.
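
    For context, a sketch of the well-known single-iteration Bowring-style conversion (the family of methods this line of work refines) is shown below, with WGS-84 constants; this is not the improved algorithm of the paper itself.

```python
# Single-iteration Bowring-style geocentric (ECEF) to geodetic conversion.
import numpy as np

A   = 6378137.0                 # WGS-84 semi-major axis [m]
F   = 1.0 / 298.257223563       # flattening
B   = A * (1.0 - F)             # semi-minor axis
E2  = 1.0 - (B / A)**2          # first eccentricity squared
EP2 = (A / B)**2 - 1.0          # second eccentricity squared

def ecef_to_geodetic(x, y, z):
    p = np.hypot(x, y)
    theta = np.arctan2(z * A, p * B)        # parametric latitude estimate
    lat = np.arctan2(z + EP2 * B * np.sin(theta)**3,
                     p - E2 * A * np.cos(theta)**3)
    lon = np.arctan2(y, x)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat)**2)   # prime vertical radius
    h = p / np.cos(lat) - n
    return np.degrees(lat), np.degrees(lon), h

print(ecef_to_geodetic(6378137.0, 0.0, 0.0))   # equator point: (0, 0, 0)
```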

  10. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  11. Component separations.

    PubMed

    Heller, Lior; McNichols, Colton H; Ramirez, Oscar M

    2012-02-01

    Component separation is a technique used to provide adequate coverage for midline abdominal wall defects such as a large ventral hernia. This surgical technique is based on subcutaneous lateral dissection, fasciotomy lateral to the rectus abdominis muscle, and dissection on the plane between the external and internal oblique muscles, with medial advancement of the block that includes the rectus muscle and its fascia. This release allows for medial advancement of the fascia and closure of up to 20-cm wide defects in the midline area. Since its original description, the component separation technique has undergone multiple modifications with the ultimate goal of decreasing the morbidity associated with the traditional procedure. The extensive subcutaneous lateral dissection has been associated with ischemia of the midline skin edges, wound dehiscence, infection, and seroma. Although the current trend is to proceed with minimally invasive component separation and to reinforce the fascia with mesh, the basic principles of the technique as described by Ramirez et al in 1990 have not changed over the years. Surgeons who deal with the management of abdominal wall defects are highly encouraged to include this technique in their collection of treatment options. PMID:23372455

  12. Hyperfrequency components

    NASA Astrophysics Data System (ADS)

    1994-09-01

    The document has a collection of 19 papers (11 on technologies, 8 on applications) by 26 authors and coauthors. Technological topics include: evolution from conventional HEMT's double heterojunction and planar types of pseudomorphic HEMT's; MMIC R&D and production aspects for very-low-noise, low-power, and very-low-noise, high-power applications; hyperfrequency CAD tools; parametric measurements of hyperfrequency components on plug-in cards for design and in-process testing uses; design of Class B power amplifiers and millimetric-wave, bigrid-transistor mixers, exemplifying combined use of three major types of physical simulation in electrical modeling of microwave components; FET's for power amplification at up to 110 GHz; production, characterization, and nonlinear applications of resonant tunnel diodes. Applications topics include: development of active modules for major European programs; tubes versus solid-state components in hyperfrequency applications; status and potentialities of national and international cooperative R&D on MMIC's and CAD of hyperfrequency circuitry; attainable performance levels in multifunction MMIC applications; the state of the art of MESFET power amplifiers (Bands S, C, X, Ku); creating a hyperfrequency functions library of parametrizable reference cells or macrocells; and design of a single-stage, low-noise, band-W amplifier toward development of a three-stage amplifier.

  13. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper, the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm and as easy to use as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658

  14. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  15. An accurate product SVD (singular value decomposition) algorithm

    SciTech Connect

    Bojanczyk, A.W.; Luk, F.T.; Ewerbring, M.; Van Dooren, P.

    1990-01-01

    In this paper, we propose a new algorithm for computing a singular value decomposition of a product of three matrices. We show that our algorithm is numerically desirable in that all relevant residual elements will be numerically small. 12 refs., 1 tab.

  16. Scientific Component Technology Initiative

    SciTech Connect

    Kohn, S; Bosl, B; Dahlgren, T; Kumfert, G; Smith, S

    2003-02-07

    The laboratory has invested a significant amount of resources towards the development of high-performance scientific simulation software, including numerical libraries, visualization, steering, software frameworks, and physics packages. Unfortunately, because this software was not designed for interoperability and re-use, it is often difficult to share these sophisticated software packages among applications due to differences in implementation language, programming style, or calling interfaces. This LDRD Strategic Initiative investigated and developed software component technology for high-performance parallel scientific computing to address problems of complexity, re-use, and interoperability for laboratory software. Component technology is an extension of scripting and object-oriented software development techniques that specifically focuses on the needs of software interoperability. Component approaches based on CORBA, COM, and Java technologies are widely used in industry; however, they do not support massively parallel applications in science and engineering. Our research focused on the unique requirements of scientific computing on ASCI-class machines, such as fast in-process connections among components, language interoperability for scientific languages, and data distribution support for massively parallel SPMD components.

  17. Stabilizing the Richardson eigenvector algorithm by controlling chaos

    SciTech Connect

    He, S.

    1997-03-01

    By viewing the operations of the Richardson purification algorithm as a discrete time dynamical process, we propose a method to overcome the instability of this eigenvector algorithm by controlling chaos. We present theoretical analysis and numerical results on the behavior and performance of the stabilized algorithm. © 1997 American Institute of Physics.

  18. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.

  19. A quadratic weight selection algorithm. [for optimal flight control

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.

    1981-01-01

    A new numerical algorithm is presented which determines a positive semi-definite state weighting matrix in the linear-quadratic optimal control design problem. The algorithm chooses the weighting matrix by placing closed-loop eigenvalues and eigenvectors near desired locations using optimal feedback gains. A simplified flight control design example is used to illustrate the algorithm's capabilities.

  20. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    This paper presents a reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. The algorithm has been included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  1. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  2. Application of variance components estimation to calibrate geoid error models.

    PubMed

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons is the incorrectness of the stochastic models in the adjustment, which in turn leaves room to improve the stochastic models of the measurement noises. Therefore, determining the stochastic modeling of the observables in a combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of the ellipsoidal, orthometric and gravimetric geoid. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in a combined adjustment for calibrating the geoid error model. PMID:26306296

  3. Numerical taxonomy on data: Experimental results

    SciTech Connect

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In an earlier work, the first positive result for numerical taxonomy was presented: its authors showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T [L∞(T - D)], then it is possible to construct a tree T such that L∞(T - D) ≤ 3e; that is, they gave a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  4. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  5. Propagation of numerical noise in particle-in-cell tracking

    NASA Astrophysics Data System (ADS)

    Kesting, Frederik; Franchetti, Giuliano

    2015-11-01

    Particle-in-cell (PIC) is the most widely used algorithm for self-consistent tracking of intense charged particle beams. It is based on depositing macroparticles on a grid and subsequently solving the Poisson equation on it. It is well known that PIC algorithms have intrinsic limitations, as they introduce numerical noise. Although not significant for short-term tracking, this becomes important in simulations for circular machines over millions of turns, as it may induce artificial diffusion of the beam. In this work, we present a model of the numerical noise induced by PIC algorithms and discuss its influence on particle dynamics. The combined effect of particle tracking and noise created by PIC algorithms leads to correlated or decorrelated numerical noise. For decorrelated numerical noise we derive a scaling law for the simulation parameters, allowing an estimate of artificial emittance growth. Lastly, the effect of correlated numerical noise is discussed, and a mitigation strategy is proposed.

  6. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  7. Spatial search algorithms on Hanoi networks

    NASA Astrophysics Data System (ADS)

    Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan

    2013-01-01

    We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role to speed up the searching algorithm. We can improve the algorithm's efficiency by choosing a non-Groverian coin if we do not implement Tulsi's method. Our conclusions are based on numerical implementations.

  8. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
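
    For reference, a baseline OMP implementation is sketched below; the paper's LPMP replaces the fixed unit-norm dictionary atoms used here with Lorentzian peaks matched in each iteration. The dictionary and sparse signal are synthetic.

```python
# Baseline orthogonal matching pursuit (OMP) on a synthetic sparse problem.
import numpy as np

def omp(A, b, n_nonzero):
    """Greedy OMP: A (m x n) dictionary with unit-norm columns, b signal."""
    residual, support = b.copy(), []
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares fit on the current support, then update residual
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -0.5, 2.0]
b = A @ x_true
x_hat = omp(A, b, 3)
print("recovered support:", np.nonzero(x_hat)[0])
```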

  9. Numerical vorticity creation based on impulse conservation.

    PubMed Central

    Summers, D M; Chorin, A J

    1996-01-01

    The problem of creating solenoidal vortex elements to satisfy no-slip boundary conditions in Lagrangian numerical vortex methods is solved through the use of impulse elements at walls and their subsequent conversion to vortex loops. The algorithm is not uniquely defined, due to the gauge freedom in the definition of impulse; the numerically optimal choice of gauge remains to be determined. Two different choices are discussed, and an application to flow past a sphere is sketched. PMID:11607636

  10. Manufacturing complex silica aerogel target components

    SciTech Connect

    Defriend Obrey, Kimberly Ann; Day, Robert D; Espinoza, Brent F; Hatch, Doug; Patterson, Brian M; Feng, Shihai

    2008-01-01

    Aerogel is a material used in numerous components in High Energy Density Physics targets. In the past these components were molded into the proper shapes. Artifacts left in the parts from the molding process, such as contour irregularities from shrinkage and density gradients caused by the skin, have caused LANL to pursue machining as a way to make the components.

  11. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  12. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied by using clustering validation indexes. In this paper, the recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, fcm, som networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  13. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms were designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.

  14. Principle component analysis for radiotracer signal separation.

    PubMed

    Kasban, H; Arafa, H; Elaraby, S M S

    2016-06-01

    Radiotracers can be used in several industrial applications by injecting the radiotracer into the industrial system and monitoring the radiation with detectors to obtain signals. These signals are analyzed to obtain indications about what is happening within the system or to determine the problems that may be present in the system. For multi-phase system analysis, more than one radiotracer is used, and the result is a mixture of radiotracer signals. The problem in such cases is how to separate these signals from each other. The paper presents a proposed method based on Principal Component Analysis (PCA) for separating two mixed radiotracer signals from each other. Two different radiotracers (Technetium-99m (Tc(99m)) and Barium-137m (Ba(137m))) were injected into a physical model for simulation of a chemical reactor (PMSCR-MK2), and the radiotracer signals were obtained using radiation detectors and a Data Acquisition System (DAS). The radiotracer signals are mixed, signal processing steps including background correction and signal de-noising are performed, and then the signal separation algorithms are applied. Three separation algorithms have been carried out: a time-domain-based separation algorithm, an Independent Component Analysis (ICA) based separation algorithm, and a Principal Component Analysis (PCA) based separation algorithm. The results proved the superiority of the PCA-based separation algorithm over the other separation algorithms, and the PCA-based separation algorithm together with the signal processing steps gives a considerable improvement of the separation process. PMID:26974488
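
    A hedged sketch of the PCA separation step is given below for two synthetic mixed signals (the tracer responses, mixing matrix, and noise level are invented, and the paper's background-correction and de-noising steps are omitted): the mixed records are centered and the principal components are taken as the separated signal estimates.

```python
# PCA-based separation of two synthetic mixed detector signals.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 2000)
s1 = np.exp(-t) * (t > 1)               # stand-ins for two tracer responses
s2 = np.exp(-0.3 * (t - 4)**2)
S = np.vstack([s1, s2])                 # true sources, shape (2, n)
M = np.array([[0.8, 0.4], [0.3, 0.9]])  # assumed detector mixing matrix
X = M @ S + 0.01 * rng.normal(size=S.shape)

Xc = X - X.mean(axis=1, keepdims=True)  # center each detector channel
U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
estimates = Vt[:2]                      # principal components ~ separated signals
print("explained variance ratios:", (sv**2 / np.sum(sv**2)).round(3))
```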

  15. Numerical propagator through PIAA optics

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent; Shaklan, Stuart; Give'On, Amir; Krist, John

    2009-08-01

    In this communication we address two outstanding issues pertaining to the modeling of PIAA coronagraphs: accurate numerical propagation of edge effects and fast propagation of mid-spatial frequencies for wavefront control. In order to solve them, we first derive a quadratic approximation of the Huygens wavelets that allows us to develop an angular spectrum propagator for pupil remapping. Using this result we introduce an independent method to verify the ultimate contrast floor, due to edge propagation effects, of PIAA units currently being tested in various testbeds. We then delve into the details of a novel fast algorithm, based on the recognition that angular spectrum computations with a pre-apodised system are computationally light. When used for the propagation of mid-spatial frequencies, such a fast propagator will ultimately allow us to develop robust wavefront control algorithms with DMs located before the pupil remapping mirrors.

  16. The value of care algorithms.

    PubMed

    Myers, Timothy

    2006-09-01

    The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065

  17. Low-complexity optical phase noise suppression in CO-OFDM system using recursive principal components elimination.

    PubMed

    Hong, Xiaojian; Hong, Xuezhi; He, Sailing

    2015-09-01

    A low-complexity optical phase noise suppression approach based on recursive principal components elimination, R-PCE, is proposed and theoretically derived for CO-OFDM systems. Through frequency-domain principal components estimation and elimination, signal distortion caused by optical phase noise is mitigated by R-PCE. Since matrix inversion and domain transformation are completely avoided, compared with the case of the orthogonal basis expansion algorithm (L = 3) that offers a similar laser linewidth tolerance, the computational complexities of multiple principal components estimation are drastically reduced in the R-PCE by factors of about 7 and 5 for q = 3 and 4, respectively. The feasibility of optical phase noise suppression with the R-PCE and its decision-aided version (DA-R-PCE) in the QPSK/16QAM CO-OFDM system is demonstrated by Monte-Carlo simulations, which verify that R-PCE with only a small number of principal components (q = 3) provides a significantly larger laser linewidth tolerance than conventional algorithms, including the common phase error compensation algorithm and the linear interpolation algorithm. Numerical results show that the optimal performance of R-PCE and DA-R-PCE can be achieved with a moderate q, which is beneficial for low-complexity hardware implementation. PMID:26368499

  18. Numerical study of a quasi-hydrodynamic system of equations for flow computation at small mach numbers

    NASA Astrophysics Data System (ADS)

    Balashov, V. A.; Savenkov, E. B.

    2015-10-01

    The applicability of numerical algorithms based on a quasi-hydrodynamic system of equations for computing viscous heat-conducting compressible gas flows at Mach numbers M = 10^-2 to 10^-1 is studied numerically. The numerical algorithm is briefly described, and the results obtained for a number of two- and three-dimensional test problems are presented and compared with earlier numerical data.

  19. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    NASA Astrophysics Data System (ADS)

    Lehe, Rémi; Kirchen, Manuel; Andriyash, Igor A.; Godfrey, Brendan B.; Vay, Jean-Luc

    2016-06-01

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm - including the zero-order numerical Cherenkov effect.

  20. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  1. A Hybrid Parallel Preconditioning Algorithm For CFD

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Tang, Wei-Pai; Kwak, Dochan (Technical Monitor)

    1995-01-01

    A new hybrid preconditioning algorithm will be presented which combines the favorable attributes of incomplete lower-upper (ILU) factorization with those of the approximate inverse method recently advocated by numerous researchers. The quality of the preconditioner is adjustable and can be increased at the cost of additional computation, while at the same time the storage required is roughly constant and approximately equal to the storage required for the original matrix. In addition, the preconditioning algorithm suggests an efficient and natural parallel implementation with reduced communication. Sample calculations will be presented for the numerical solution of multi-dimensional advection-diffusion equations. The matrix solver has also been embedded into a Newton algorithm for solving the nonlinear Euler and Navier-Stokes equations governing compressible flow. The full paper will show numerous examples in CFD to demonstrate the efficiency and robustness of the method.
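
    For illustration, the ILU half of such a hybrid can be sketched with standard sparse tools. The Python/SciPy fragment below is our construction, not the paper's parallel hybrid: the approximate-inverse component and its reduced-communication implementation are omitted, and the drop_tol and fill_factor knobs stand in for the adjustable preconditioner quality described above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 5-point discretization of a 2-D diffusion-type operator on an n-by-n
# grid (boundary coupling simplified for brevity; an advection term
# could be added without changing the solver calls).
n = 50
N = n * n
A = sp.diags([4.0 * np.ones(N), -np.ones(N - 1), -np.ones(N - 1),
              -np.ones(N - n), -np.ones(N - n)],
             [0, -1, 1, -n, n], format="csc")
b = np.ones(N)

# Incomplete LU factorization used as a GMRES preconditioner; raising
# fill_factor (or lowering drop_tol) buys a better preconditioner at
# the cost of more computation and storage.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10.0)
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
```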

  2. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions, on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence. PMID:20210600
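
    As a rough sketch of the split mechanism, the Python fragment below implements a one-shot SoD pass under simplifications we have made for brevity: the threshold is held fixed rather than decreased every iteration, and the function names are ours, not the paper's.

```python
import random

def split_on_demand(points, lo, hi, threshold):
    # Recursively split [lo, hi) at a random cut while it contains more
    # search points than `threshold`; return the resulting intervals.
    inside = [p for p in points if lo <= p < hi]
    if len(inside) <= threshold:
        return [(lo, hi)]
    cut = random.uniform(lo, hi)
    return (split_on_demand(points, lo, cut, threshold)
            + split_on_demand(points, cut, hi, threshold))

def discretize(points, lo, hi, threshold):
    # Keep only the nonempty intervals, assign them integer codes, and
    # code every search point by the interval that contains it.
    cells = [(a, b) for (a, b) in split_on_demand(points, lo, hi, threshold)
             if any(a <= p < b for p in points)]
    codes = [next(k for k, (a, b) in enumerate(cells) if a <= p < b)
             for p in points]
    return codes, cells

pts = [random.random() for _ in range(20)]
codes, cells = discretize(pts, 0.0, 1.0, threshold=4)
```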

  3. An efficient algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.M.

    1995-09-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In this paper a well known rapidly convergent iterative approach is modified to eliminate intervening trigonometric function evaluations. A total error metric is defined that accounts for both angular and altitude errors. The initial guess is optimized to minimize the error for one iteration. The resulting algorithm yields transformations correct to one centimeter for altitudes out to one million kilometers. Due to the rapid convergence only one iteration is used and no stopping test is needed. This algorithm is discussed in the context of machines that have FPUs and legacy machines that utilize mathematical subroutine packages.
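
    The flavor of such a fixed-single-iteration conversion can be conveyed with a Bowring-style sketch in Python, shown here for WGS-84. This is a generic textbook variant, not the paper's specific initial-guess optimization or its total error metric, and it assumes points away from the poles.

```python
from math import atan2, cos, hypot, sin, sqrt

# WGS-84 ellipsoid constants (an assumption; any ellipsoid works).
A = 6378137.0                  # semi-major axis [m]
F = 1.0 / 298.257223563        # flattening
B = A * (1.0 - F)              # semi-minor axis
E2 = 1.0 - (B / A) ** 2        # first eccentricity squared
EP2 = (A / B) ** 2 - 1.0       # second eccentricity squared

def geocentric_to_geodetic(x, y, z):
    # Single fixed iteration, no stopping test; valid away from the poles.
    lon = atan2(y, x)
    p = hypot(x, y)
    # Initial parametric latitude, formed without extra trig calls:
    # tan(u) = (z / p) * (A / B), so cos(u) and sin(u) follow algebraically.
    t = (z * A) / (p * B)
    cu = 1.0 / sqrt(1.0 + t * t)
    su = t * cu
    # One Bowring update for the geodetic latitude.
    lat = atan2(z + EP2 * B * su ** 3, p - E2 * A * cu ** 3)
    # Ellipsoidal height via the prime-vertical radius of curvature.
    n = A / sqrt(1.0 - E2 * sin(lat) ** 2)
    return lat, lon, p / cos(lat) - n
```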

  4. Numerical Simulation of Time-Dependent Wave Propagation Using Nonreflective Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Ionescu, D.; Muehlhaus, H.

    2003-12-01

    Solving numerically the wave equation for modelling wave propagation on an unbounded domain with complex geometry requires a truncation of the domain, to fit the infinite region on a finite computer. Minimizing the amount of spurious reflections requires in many cases the introduction of an artificial boundary and of associated nonreflecting boundary conditions. Here, a question arises, namely which boundary condition guarantees that the solution of the time dependent problem inside the artificial boundary coincides with the solution of the original problem in the infinite region. Recent investigations have shown that the accuracy and performance of numerical algorithms and the interpretation of the results critically depend on the proper treatment of external boundaries. Despite the computational speed of finite difference schemes and the robustness of finite elements in handling complex geometries the resulting numerical error consists of two independent contributions: the discretization error of the numerical method used and the spurious reflection generated at the artificial boundary. This spurious contribution travels back and substantially degrades the accuracy of the solution everywhere in the computational domain. Unless both error components are reduced systematically, the numerical solution does not converge to the solution of the original problem in the infinite region. In the present study we present and discuss absorbing boundary condition techniques for the time-dependent scalar wave equation in three spatial dimensions. In particular, exact conditions that annihilate wave harmonics on a spherical artificial boundary up to a given order are obtained and subsequently applied in numerical simulations by employing a finite differences implementation.

  5. Trial encoding algorithms ensemble.

    PubMed

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating un-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization; one simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding but can be conveniently implemented without the use of a tree structure and improved with byte concatenation. PMID:27057475
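
    As one tree-free realization of such a prefix coder, the sketch below implements Shannon's cumulative-probability construction in Python. The abstract does not specify the paper's exact implementation, so treat this as an assumption-laden illustration.

```python
from math import ceil, log2

def shannon_code(freqs):
    # Build a prefix code without an explicit tree: sort symbols by
    # decreasing probability, give each one the Shannon code length
    # ceil(-log2 p), and read the codeword off the first bits of the
    # binary expansion of the cumulative probability.
    total = sum(freqs.values())
    symbols = sorted(freqs, key=freqs.get, reverse=True)
    codes, cum = {}, 0.0
    for s in symbols:
        p = freqs[s] / total
        length = max(1, ceil(-log2(p)))
        frac, bits = cum, []
        for _ in range(length):
            frac *= 2.0
            bit, frac = divmod(frac, 1.0)
            bits.append(str(int(bit)))
        codes[s] = "".join(bits)
        cum += p
    return codes

codes = shannon_code({"a": 5, "b": 2, "c": 1})   # {'a': '0', 'b': '10', 'c': '111'}
```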

  6. Complete solutions of zoom curves of three-component zoom lenses with the second component fixed.

    PubMed

    Chen, Chaohsien

    2014-10-10

    Purely algebraic algorithms are presented for solving the zoom curves of a three-component zoom lens of which the second component is fixed on zooming. Two separated algorithms for infinite and finite conjugate imaging conditions are provided. For the infinite-conjugate condition, the transverse magnifications of the second and third components are solved to match the required system focal length, resulting in solving a quadratic equation. For the finite-conjugate condition, three nonlinear simultaneous equations regarding the system magnification, the object-to-image thickness, and the position of the second component are combined into a fourth-order polynomial equation. The roots can all be directly obtained by simple algebraic calculations. As a result, the proposed algebraic algorithms provide a more efficient and complete method than do earlier algorithms adopting scanning procedures. PMID:25322432

  7. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  8. On Numerical Methods For Hypersonic Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Yee, H. C.; Sjogreen, B.; Shu, C. W.; Wang, W.; Magin, T.; Hadjadj, A.

    2011-05-01

    Proper control of numerical dissipation in numerical methods beyond the standard shock-capturing dissipation at discontinuities is an essential element for accurate and stable simulation of hypersonic turbulent flows, including combustion, and thermal and chemical nonequilibrium flows. Unlike rapidly developing shock interaction flows, turbulence computations involve long time integrations. Improper control of numerical dissipation from one time step to another would be compounded over time, resulting in the smearing of turbulent fluctuations to an unrecognizable form. Hypersonic turbulent flows around re-entry space vehicles involve mixed steady strong shocks and turbulence with unsteady shocklets that pose added computational challenges. Stiffness of the source terms and material mixing in combustion pose yet other types of numerical challenges. A low dissipative high order well-balanced scheme, which can preserve certain non-trivial steady solutions of the governing equations exactly, may help minimize some of these difficulties. For stiff reactions it is well known that the wrong propagation speed of discontinuities occurs due to the under-resolved numerical solutions in both space and time. Schemes to improve the wrong propagation speed of discontinuities for systems of stiff reacting flows remain a challenge for algorithm development. Some of the recent algorithm developments for direct numerical simulations (DNS) and large eddy simulations (LES) for the subject physics, including the aforementioned numerical challenges, will be discussed.

  9. Ten years of Nature Physics: Numerical models come of age

    NASA Astrophysics Data System (ADS)

    Gull, E.; Millis, A. J.

    2015-10-01

    When Nature Physics celebrated 20 years of high-temperature superconductors, numerical approaches were on the periphery. Since then, new ideas implemented in new algorithms are leading to new insights.

  10. Extremal polynomials and methods of optimization of numerical algorithms

    SciTech Connect

    Lebedev, V I

    2004-10-31

    Chebyshev-Markov-Bernstein-Szegoe polynomials C_n(x) extremal on [-1,1] with weight functions w(x) = (1+x)^α(1-x)^β/√(S_l(x)), where α, β = 0, 1/2 and S_l(x) = ∏_{k=1}^{m}(1 - c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x) = w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshev iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.

  11. Extremal polynomials and methods of optimization of numerical algorithms

    NASA Astrophysics Data System (ADS)

    Lebedev, V. I.

    2004-10-01

    Chebyshëv-Markov-Bernstein-Szegö polynomials C_n(x) extremal on \\lbrack -1,1 \\rbrack with weight functions w(x)=(1+x)^\\alpha(1- x)^\\beta/\\sqrt{S_l(x)} where \\alpha,\\beta=0,\\frac12 and S_l(x)=\\prod_{k=1}^m(1-c_kT_{l_k}(x))>0 are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x)=w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshëv iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.

  12. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive overrelaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, while still enjoying the greater rate of convergence of the SOR method. The program solves a general second-order self-adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six-color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
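
    The two-color case is easy to sketch: for the 5-point Laplacian, each color depends only on points of the other color, so every half-sweep is a Jacobi-like update that could run in parallel across processors. The Python fragment below is a plain finite-difference Poisson example, not the paper's quadratic triangular elements (which require the six colors).

```python
import numpy as np

def redblack_sor(f, h, omega=1.8, sweeps=500):
    # Red/black SOR for -laplace(u) = f on a square grid with zero
    # Dirichlet boundary values. Within one color, every update uses
    # only values of the opposite color: a parallel Jacobi-type step.
    n = f.shape[0]
    u = np.zeros_like(f)
    for _ in range(sweeps):
        for color in (0, 1):                       # red sweep, black sweep
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != color:
                        continue
                    gauss = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                    + u[i, j+1] + h * h * f[i, j])
                    u[i, j] += omega * (gauss - u[i, j])
    return u

u = redblack_sor(np.ones((33, 33)), h=1.0 / 32)
```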

  13. NASARC - NUMERICAL ARC SEGMENTATION ALGORITHM FOR A RADIO CONFERENCE

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.

    1994-01-01

    NASARC was developed from the general planning principles and decisions of both sessions of the World Administrative Radio Conference on the Use of the Geostationary Satellite Orbit and on the Planning of Space Services Utilizing It (WARC-85, WARC-88). NASARC was written to help countries satisfy requirements for nation-wide Fixed Satellite services from at least one orbital position within a predetermined arc. The NASARC-generated predetermined arcs are each based on a common arc segment visible to a group of compatible service areas, and provide a means of generating a highly flexible allotment plan with a reduced need for coordination among administrations. The selection of particular groupings of service areas and their associated predetermined arcs is made according to a heuristic approach using several figures of merit designed to confront the most difficult allotment problems. NASARC attempts to select groupings and predetermined arc sizes so that the requirements of all administrations are met before the available orbital arc is exhausted. The predetermined arcs allow considerable freedom of choice in the positioning of space stations for all members of any grouping. The approach to allotment planning for which NASARC was designed consists of two phases. The first is the use of NASARC to identify predetermined arc segments common to groups of administrations. Those administrations within a group and sharing a common predetermined arc segment would be able to position their individual space stations at any one of a number of orbital positions within the predetermined arc. The second phase involves the use of a plan synthesis program (such as the ORBIT program resident at the International Frequency Registration Board in Geneva, Switzerland) to identify example scenarios of specific space station placements. NASARC software is modular, and consists of several programs to be run in sequence. The grouping module, NASARC1, identifies compatible groups of several service areas that are sufficiently separated geographically so that co-location or near co-location of their space stations will permit a user-specified downlink performance criterion to be satisfied. Pairwise compatibility between systems is assessed on the basis of the satellite separation required to meet this criterion. NASARC2 examines all groups of compatible administrations with their corresponding arc segments and computes a common predetermined arc. After an orbital slot of sufficient size has been found, NASARC2 calculates the required orbital separation between the critical group and its potential east and west neighbors and determines predetermined arc placement accordingly. NASARC3 updates and extends the feasible orbital locations for predetermined arcs associated with compatible groups of service areas to provide flexibility for rearrangement if necessary. NASARC4 performs rearrangement of predetermined arc segments where rearrangement will provide increased total arc available for subsequent placement of additional predetermined arcs and produces the final output report of the NASARC package. In addition to planning assumed homogeneous systems, NASARC can take into account such factors as rain attenuation, individual antenna parameters, power calculation options, minimum power values, different required carrier-to-interference ratios, variable grouping criteria, and affiliated sets of service areas. The modules allow the baseline assumptions to be modified, some on an individual service area basis.
NASARC array dimensions have been structured to fit within the currently available 12MB memory capacity of the International Frequency Registration Board computer facility. NASARC was written in ANSI standard FORTRAN 77 and developed on an AMDAHL 5860 running under the IBM VM operating system. The package requires 8.1MB of central memory. NASARC (version 4.0) was written in 1988. IBM and VM are registered trademarks of International Business Machines. AMDAHL 5860 is a trademark of Amdahl Corporation.

  14. Numerical Arc-Segmentation Algorithm For A Radio Conference

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Ponchak, Denise S.; Heyward, A. O.; Zuzek, John E.; Spence, R. L.

    1992-01-01

    NASARC computer program developed from general planning principles and decisions of both sessions of World Administrative Radio Conference on Use of Geostationary Satellite Orbit and on Planning of Space Services Utilizing It (WARC-85 and WARC-88). Written to help countries satisfy requirements for nationwide fixed-satellite services from at least one orbital position within predetermined arc. Written in ANSI standard FORTRAN 77.

  15. Evaluating numerical ODE/DAE methods, algorithms and software

    NASA Astrophysics Data System (ADS)

    Soderlind, Gustaf; Wang, Lina

    2006-01-01

    Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without it conclusions are questionable, invalid or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.

  16. Incompressible viscous flow computations for the pump components and the artificial heart

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin

    1992-01-01

    A finite difference, three dimensional incompressible Navier-Stokes formulation to calculate the flow through turbopump components is utilized. The solution method is based on the pseudo compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Both steady and unsteady flow calculations can be performed using the current algorithm. Here, equations are solved in steadily rotating reference frames by using the steady state formulation in order to simulate the flow through a turbopump inducer. Eddy viscosity is computed by using an algebraic mixing-length turbulence model. Numerical results are compared with experimental measurements and a good agreement is found between the two.

  17. Probabilistic structural analysis methodology and applications to advanced space propulsion system components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Rajagopal, K. R.; Dias, J. B.

    1990-01-01

    The goal of the reported work is to develop and apply new technology that will enable the designer to efficiently and accurately account for each of the design sources of uncertainty as it might affect structural reliability and risk assessment. The paper discusses the development of the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) finite element code and its supporting reliability algorithms. The NESSUS code and the elements of the solution strategy are outlined and applications are made to several propulsion system components.

  18. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
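
    To make property (1) concrete: the classical PRP parameter is β_k = g_k^T(g_k − g_{k−1}) / ‖g_{k−1}‖², and clipping it at zero yields a nonnegative variant. The Python sketch below is a generic PRP+ method with a backtracking Armijo line search, an illustration of the family rather than the two modified formulas actually proposed in the paper.

```python
import numpy as np

def prp_plus(fun, grad, x, iters=500, tol=1e-8):
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:            # safeguard: restart with steepest descent
            d = -g
        # Backtracking Armijo line search.
        t, fx, slope = 1.0, fun(x), g @ d
        while fun(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ (beta_k >= 0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function.
f  = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
df = lambda v: np.array([-2*(1 - v[0]) - 400*v[0]*(v[1] - v[0]**2),
                         200 * (v[1] - v[0]**2)])
x_star = prp_plus(f, df, np.array([-1.2, 1.0]))   # approaches (1, 1)
```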

  19. Numerical recipes for mold filling simulation

    SciTech Connect

    Kothe, D.; Juric, D.; Lam, K.; Lally, B.

    1998-07-01

    Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.

  20. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    SciTech Connect

    LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulation presented here is SIMBAD, which can be run standalone or as part of the UAL (Unified Accelerator Libraries) package.
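
    A standard wall-aware field solve of the kind such codes employ is sketched below in Python/SciPy: grounded conducting walls give homogeneous Dirichlet data, which a fast sine transform diagonalizes on a uniform square grid. This is an illustration of the technique class only; SIMBAD's actual solver and parallel decomposition are not reproduced here.

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_dirichlet(rho, h):
    # Solve -laplace(phi) = rho on a uniform square grid of interior
    # points with phi = 0 on the walls, using the type-I discrete sine
    # transform, which diagonalizes the 5-point Laplacian.
    n = rho.shape[0]
    rho_hat = dstn(rho, type=1)
    k = np.arange(1, n + 1)
    lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2
    return idstn(rho_hat / (lam[:, None] + lam[None, :]), type=1)

phi = poisson_dirichlet(np.ones((63, 63)), h=1.0 / 64)
```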

  1. Comparison of two numerical techniques for aerodynamic model identification

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    An algorithm, called the Minimal Residual QR algorithm, is presented to solve subset regression problems. It is shown that this scheme can be used as a numerically reliable implementation of the stepwise regression technique, which is widely used to identify an aerodynamic model from flight test data. This capability as well as the numerical superiority of this scheme over the stepwise regression technique is demonstrated in an experimental simulation study.

  2. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences with other techniques used in systems health management and philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, the implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning, will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.

  3. Clustering of Hadronic Showers with a Structural Algorithm

    SciTech Connect

    Charles, M.J.; /SLAC

    2005-12-13

    The internal structure of hadronic showers can be resolved in a high-granularity calorimeter. This structure is described in terms of simple components and an algorithm for reconstruction of hadronic clusters using these components is presented. Results from applying this algorithm to simulated hadronic Z-pole events in the SiD concept are discussed.

  4. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms developed for use in numerical integration of systems of nonhomogenous, nonlinear, first-order, ordinary differential equations. In comparison with prior integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  5. Detection of Component Failures for Smart Structure Control Systems

    NASA Astrophysics Data System (ADS)

    Okubo, Hiroshi

    Uncertainties in the dynamics model of a smart structure are often significant due to model errors caused by parameter identification errors and reduced-order modeling of the system. Design of a model-based Failure Detection and Isolation (FDI) system for smart structures therefore needs careful consideration regarding robustness with respect to such model uncertainties. In this paper, we propose a new method of robust fault detection that is insensitive to the disturbances caused by unknown modeling errors while remaining highly sensitive to component failures. The capability of the robust detection algorithm is examined for a sensor failure in a flexible smart beam control system. It is shown by numerical simulations that the proposed method suppresses the disturbances due to model errors and markedly improves the detection performance.

  6. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics, and designs a food chain algorithm for it. Some illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  7. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
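
    The core pattern, natural-parameter continuation with a Newton corrector, fits in a few lines. The scalar Python sketch below conveys only the idea; LOCA itself adds pseudo-arclength stepping, bifurcation tracking, and large-scale linear algebra on top, none of which is reproduced here.

```python
import numpy as np

def continuation(F, dFdx, x0, p0, p1, steps=50):
    # March the parameter p and correct x by Newton's method at each
    # step, so that F(x, p) = 0 is tracked along the solution branch.
    x, branch = x0, []
    for p in np.linspace(p0, p1, steps):
        for _ in range(20):
            dx = -F(x, p) / dFdx(x, p)
            x += dx
            if abs(dx) < 1e-12:
                break
        branch.append((p, x))
    return branch

# Follow x = sqrt(p) on the branch F(x, p) = x**2 - p = 0 from p=1 to p=4.
branch = continuation(lambda x, p: x * x - p, lambda x, p: 2 * x,
                      x0=1.0, p0=1.0, p1=4.0)
```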

  8. Linac Alignment Algorithm: Analysis on 1-to-1 Steering

    SciTech Connect

    Sun, Yipeng; Adolphsen, Chris; /SLAC

    2011-08-19

    In a linear accelerator, it is important to achieve a good alignment between all of its components (such as quadrupoles, RF cavities, beam position monitors, etc.), in order to better preserve the beam quality during acceleration. After the survey of the main linac components, there are several beam-based alignment (BBA) techniques to be applied, to further optimize the beam trajectory and calculate the corresponding steering magnet strengths. Among these techniques the most simple and straightforward one is the one-to-one (1-to-1) steering technique, which steers the beam from quad center to center and removes the betatron oscillation from quad focusing. For a future linear collider such as the International Linear Collider (ILC), the initial beam emittance is very small in the vertical plane (a flat beam with γε_y = 20-40 nm), which means the alignment requirement is very tight. In this note, we evaluate the emittance growth with the one-to-one correction algorithm employed, both analytically and numerically. Then the ILC main linac accelerator is taken as an example to compare the vertical emittance growth after 1-to-1 steering, both from analytical formulae and from multi-particle tracking simulation. It is demonstrated that the estimated emittance growth from the derived formulae agrees well with the results from numerical simulation, with and without acceleration, respectively.

  9. Translation and integration of numerical atomic orbitals in linear molecules.

    PubMed

    Heinäsmäki, Sami

    2014-02-14

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively. PMID:24527905
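
    A one-dimensional flavor of such pseudospectral machinery can be shown compactly. The Python sketch below builds the classical Chebyshev differentiation matrix (Trefethen's construction); it is only an illustration of the 1-D building block, since the paper's two-dimensional matrices, radial meshes, and quadratures are more involved.

```python
import numpy as np

def cheb(n):
    # Chebyshev pseudospectral differentiation matrix D and nodes x.
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums
    return D, x

# Sanity check: differentiate sin(x) on [-1, 1];
# the error is spectrally small (near machine precision).
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```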

  10. Translation and integration of numerical atomic orbitals in linear molecules

    NASA Astrophysics Data System (ADS)

    Heinäsmäki, Sami

    2014-02-01

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.

  11. An algorithm based on carrier squeezing interferometry for multi-beam phase extraction in Fizeau interferometer

    NASA Astrophysics Data System (ADS)

    Cheng, Jinlong; Gao, Zhishan; Wang, Kailiang; Yang, Zhongming; Wang, Shuai; Yuan, Qun

    2015-10-01

    Multi-beam interference exists in the cavity of a Fizeau interferometer due to the high reflectivity of the test optics, and random phase shift errors are generated by factors such as environmental vibration and air turbulence; both cause phase retrieval errors. We propose a non-iterative approach called the Carrier Squeezing Multi-beam Interferometry (CSMI) algorithm, which is based on the carrier squeezing interferometry (CSI) technique, to retrieve the phase distribution from multiple-beam interferograms with random phase shift errors. The intensity of the multiple-beam interference is decomposed into a fundamental wave and high-order harmonics by using the Fourier series expansion. Multi-beam phase shifting interferograms with linear carrier are rearranged by row or column to fuse one frame of spatial-temporal fringes. The lobe of the fundamental component related to the phase and the lobes of the high-order harmonics and phase shift errors are separated in the frequency domain, so the correct phase is extracted by filtering the fundamental component. Suppression of the influence from high-order harmonic components, as well as random phase shift errors, is validated by numerical simulations. Experiments were also executed using the proposed CSMI algorithm for a mirror with a high reflection coefficient, showing its advantage compared with normal phase retrieving algorithms.

  12. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key to the method is to seek an invertible transformation that reduces the Birkhoffian equations to Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  13. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.

  14. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To finish this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and the complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC appears to be a good compromise.

  15. A spectral canonical electrostatic algorithm

    NASA Astrophysics Data System (ADS)

    Webb, Stephen D.

    2016-03-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton’s principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results for a two-body problem as an example of the algorithm’s energy- and momentum-conserving properties.

  16. Content-weighted video quality assessment using a three-component image model

    NASA Astrophysics Data System (ADS)

    Li, Chaofeng; Bovik, Alan Conrad

    2010-01-01

    Objective image and video quality measures play important roles in numerous image and video processing applications. In this work, we propose a new content-weighted method for full-reference (FR) video quality assessment using a three-component image model. Using the idea that different image regions have different perceptual significance relative to quality, we deploy a model that classifies image local regions according to their image gradient properties, then apply variable weights to structural similarity image index (SSIM) [and peak signal-to-noise ratio (PSNR)] scores according to region. A frame-based video quality assessment algorithm is thereby derived. Experimental results on the Video Quality Experts Group (VQEG) FR-TV Phase 1 test dataset show that the proposed algorithm outperforms existing video quality assessment methods.

  17. Wavelet Algorithms for Illumination Computations

    NASA Astrophysics Data System (ADS)

    Schroder, Peter

    One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.

  18. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    The recommendation algorithm based on bipartite networks is superior to traditional methods in accuracy and diversity, which proves that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topology structure, while local characteristics can also play an important role in collaborative recommendation processing. Therefore, on account of the data characteristics and application requirements of collaborative recommender systems, we propose a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then design numerical experiments to verify the algorithms' validity on benchmark and real-world databases. PMID:24955393

  19. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  20. Numerical simulation of steady supersonic flow. [spatial marching

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Steger, J. L.

    1981-01-01

    A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

  1. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  2. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.
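
    The receding-horizon skeleton of the feed-forward part can be sketched for a linear nominal model. The Python fragment below is a simplified stand-in, assuming an unconstrained linear-quadratic problem; the robust feedback policy and the constraint handling of the paper are omitted.

```python
import numpy as np

def mpc_step(A, B, Q, R, x0, N):
    # Feed-forward part only: stack the predicted dynamics
    # x_k = A^k x0 + sum_j A^(k-1-j) B u_j into one least-squares
    # problem, solve it, and (receding horizon) return the first input.
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for k in range(1, N + 1):
        for j in range(k):
            G[(k-1)*n:k*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k-1-j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar
    u = np.linalg.solve(H, -G.T @ Qbar @ (F @ x0))
    return u[:m]

# Closed-loop simulation: drive a double integrator to the origin.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
x = np.array([1.0, 0.0])
for _ in range(100):
    x = A @ x + B @ mpc_step(A, B, np.eye(2), 0.1 * np.eye(1), x, N=20)
```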

  3. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  4. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Astrophysics Data System (ADS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  5. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1989-01-01

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  6. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  7. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
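
    A toy version conveys the flavor: replace the i.i.d. uniforms driving a random-walk Metropolis sampler with randomly shifted low-discrepancy points. The Python sketch below is illustrative only; the paper's consistency results rely on completely uniformly distributed sequences, for which the shifted van der Corput streams here are merely a stand-in.

```python
import numpy as np
from scipy.stats import norm

def van_der_corput(n, base=2):
    # Radical-inverse sequence in the given base.
    seq = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq[i] = x
    return seq

def qmc_metropolis(logpi, steps=4096, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Randomly shifted quasi-random streams in place of i.i.d. U(0,1).
    u_prop = (van_der_corput(steps, 2) + rng.random()) % 1.0
    u_acc = (van_der_corput(steps, 3) + rng.random()) % 1.0
    x, chain = 0.0, []
    for up, ua in zip(u_prop, u_acc):
        y = x + scale * norm.ppf(up)     # inverse-CDF Gaussian proposal
        if np.log(ua) < logpi(y) - logpi(x):
            x = y
        chain.append(x)
    return np.array(chain)

chain = qmc_metropolis(lambda t: -0.5 * t * t)   # target: standard normal
```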

  8. A Low-Stress Algorithm for Fractions

    ERIC Educational Resources Information Center

    Ruais, Ronald W.

    1978-01-01

    An algorithm is given for the addition and subtraction of fractions based on dividing the sum of diagonal numerator and denominator products by the product of the denominators. As an explanation of the teaching method, activities used in teaching are demonstrated. (MN)
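
    In symbols, the rule is a/b ± c/d = (a·d ± c·b)/(b·d); for example, 1/2 + 1/3 = (1·3 + 1·2)/(2·3) = 5/6. A short Python rendering (the reduction to lowest terms is our addition):

```python
from math import gcd

def add_fractions(a, b, c, d):
    # a/b + c/d = (a*d + c*b) / (b*d), then reduce;
    # use a*d - c*b for subtraction.
    num, den = a * d + c * b, b * d
    g = gcd(num, den)
    return num // g, den // g

add_fractions(1, 2, 1, 3)   # -> (5, 6)
```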

  9. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm. PMID:23935409
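
    For reference, the mean-best update at the heart of QPSO is compact. The Python sketch below minimizes a plain test function with a standard QPSO formulation; it is our generic implementation, not the paper's Matlab/FPGA realization or its coupling to DNA computing.

```python
import numpy as np

def qpso(fun, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0)):
    # Quantum-behaved PSO: particles are drawn around local attractors
    # with spread beta * |mbest - x| * ln(1/u), where mbest is the mean
    # of the personal bests and beta contracts from 1.0 to 0.5.
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pval = np.array([fun(p) for p in pbest])
    for it in range(iters):
        gbest = pbest[np.argmin(pval)]
        mbest = pbest.mean(axis=0)
        beta = 1.0 - 0.5 * it / iters
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1.0 - phi) * gbest
        u = 1.0 - rng.random((n_particles, dim))      # u in (0, 1]
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        fx = np.array([fun(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[np.argmin(pval)]

best = qpso(lambda v: float(np.sum(v * v)), dim=5)   # sphere test function
```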

  10. Mathematical and computer modeling of component surface shaping

    NASA Astrophysics Data System (ADS)

    Lyashkov, A.

    2016-04-01

    The shaping of technical surfaces is an interaction between a tool (the shaping element) and a component (the formed element, or workpiece) in their relative movements. The main objects of formation are: 1) the discriminant of the family of surfaces formed by the movement of the shaping element relative to the workpiece; 2) the enveloping model of the real component surface obtained after machining, including transition curves and undercut lines; and 3) the model of the layers cut off in the process of shaping. Many issues in modeling these objects remain insufficiently solved or unsolved, and together they make up a single scientific problem: the qualitative shaping of the tool surface and then of the component surface produced by this tool. The improvement of known metal-cutting tools and the intensive development of systems for their computer-aided design require further improvement of the methods for shaping the mating surfaces. In this regard, an important role is played by the study of the shaping of technical surfaces using the strengths of analytical and numerical mathematical methods together with mathematical and computer modeling. The author poses and solves the problem of developing the mathematical, geometric and algorithmic support for the computer-aided design of cutting tools, based on computer simulation of the surface-shaping process.

  11. Numerical Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of historical and current numerical aerodynamic simulation (NAS) is given. The capabilities and goals of the Numerical Aerodynamic Simulation Facility are outlined. Emphasis is given to numerical flow visualization and its applications to structural analysis of aircraft and spacecraft bodies. The uses of NAS in computational chemistry, engine design, and galactic evolution are mentioned.

  12. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  13. Data driven components in a model of inner shelf sorted bedforms: a new hybrid model

    NASA Astrophysics Data System (ADS)

    Goldstein, E. B.; Coco, G.; Murray, A. B.; Green, M. O.

    2013-10-01

    Numerical models rely on the parameterization of processes that often lack a deterministic description. In this contribution we demonstrate the applicability of machine learning, a family of optimization tools from computer science, to developing parameterizations when extensive data sets exist. We develop a new predictor for near-bed suspended sediment reference concentration under unbroken waves using genetic programming, a machine learning technique. This newly developed parameterization performs better than existing empirical predictors. We add this new predictor to an established model for inner shelf sorted bedforms. Additionally, we incorporate a previously reported machine-learning-derived predictor for oscillatory flow ripples into the sorted bedform model. This new "hybrid" sorted bedform model, in which machine learning components are integrated into a numerical model, demonstrates a method of incorporating observational data (filtered through a machine learning algorithm) directly into a numerical model. Results suggest that the new hybrid model is able to capture dynamics previously absent from the model, specifically the two observed pattern modes of sorted bedforms. However, caveats exist when data-driven components do not have parity with the traditional theoretical components of morphodynamic models, and we discuss the challenges of integrating these disparate pieces and the future of this type of modeling.

  14. An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Moncet, J. L.

    2015-12-01

    We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration, and it has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with the capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on a sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from

  15. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out in support of the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  16. Driven one-component plasmas

    SciTech Connect

    Rizzato, Felipe B.; Pakter, Renato; Levin, Yan

    2009-08-15

    A statistical theory is presented that allows the calculation of the stationary state achieved by a driven one-component plasma after a process of collisionless relaxation. The stationary Vlasov equation with appropriate boundary conditions is reduced to an ordinary differential equation, which is then solved numerically. The solution is compared with molecular-dynamics simulations, and perfect agreement is found between the theory and the simulations. The full current-voltage phase diagram is constructed.

  17. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-08-30

    We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  18. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-02-28

    We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  19. Algorithms for computing the multivariable stability margin

    NASA Technical Reports Server (NTRS)

    Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.

    1989-01-01

    Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms based on non-differentiable optimization theory. These algorithms have been developed for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.

  20. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With the overlay measurement error budget reduced to the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no adequate criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology, such as through-focus and through-color measurements; still, quantifying the uncertainty of an overlay measurement remains the most difficult task in overlay metrology. According to the ITRS roadmap, the total overlay budget tightens at each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: scanner overlay performance, wafer process, metrology, and mask registration. Each contribution has so far been supported by sufficiently capable tools at each device node, with new scanners, new metrology tools, and new mask e-beam writers; in particular, scanner overlay performance improved drastically, from 9 nm at the 8x node to 2.5 nm at the 3x node, but appears to be reaching its limit beyond the 3x node. The wafer process overlay has therefore become a more important contribution to the total overlay; indeed, it decreased by 3 nm between the DRAM 8x and DRAM 3x nodes. In this paper, the authors develop an analytical algorithm for overlay accuracy and propose a concept for a non-destructive method. For on-product layers, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error. Furthermore

  1. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for protecting intellectual property rights, digital watermarking techniques have been widely studied and used. However, owing to the problems of data volume and color shift, watermarking of color images has been studied less widely, even though color images are the principal medium in multimedia applications. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and during embedding the number of embedded bits adapts to the complexity of the host image. The watermark image is first preprocessed by decomposing it with a two-level wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and others deleted to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks while maintaining good perceptual quality.

  2. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  3. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  4. Numerical stability of pseudo-spectral PIC code generalizations

    NASA Astrophysics Data System (ADS)

    Godfrey, Brendan B.; Vay, Jean-Luc

    2014-10-01

    Laser Plasma Accelerator (LPA) particle-in-cell (PIC) simulations are computationally demanding, because they require beam transport over times and distances long compared with the natural scales of the acceleration mechanism and because they are prone to numerical instabilities. To provide greater flexibility in LPA PIC simulations, we have generalized the Pseudo-Spectral Time Domain (PSTD) algorithm to accommodate arbitrary order spatial derivative approximations and substantially longer time steps. Here, we show that, by extending approaches developed by us for other PIC algorithms, numerical Cherenkov instabilities can be suppressed for the generalized PSTD algorithm. We also illustrate the relationships between the generalized PSTD and other PIC algorithms, such as Finite Difference Time Domain (FDTD) and Pseudo-Spectral Analytical Time Domain (PSATD) algorithms. Background information can be found at http://hifweb.lbl.gov/public/BLAST/Godfrey/. Work supported in part by DOE under Contract DE-AC02-05CH11231.

  5. Numerical integration of ordinary differential equations on manifolds

    NASA Astrophysics Data System (ADS)

    Crouch, P. E.; Grossman, R.

    1993-12-01

    This paper is concerned with the problem of developing numerical integration algorithms for differential equations that, when viewed as equations in some Euclidean space, naturally evolve on some embedded submanifold. It is desired to construct algorithms whose iterates also evolve on the same manifold. These algorithms can therefore be viewed as integrating ordinary differential equations on manifolds. The basic method “decouples” the computation of flows on the submanifold from the numerical integration process. It is shown that two classes of single-step and multistep algorithms can be posed and analyzed theoretically, using the concept of “freezing” the coefficients of differential operators obtained from the defining vector field. Explicit third-order algorithms are derived, with additional equations augmenting those of their classical counterparts, obtained from “obstructions” defined by nonvanishing Lie brackets.

  6. Comparison of algorithms for incoming atmospheric long-wave radiation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    While numerous algorithms exist for predicting incident atmospheric long-wave radiation under clear (Lclr) and cloudy skies, only a handful of comparisons have been published to assess the accuracy of the different algorithms. Virtually no comparisons have been made for both clear and cloudy skies ...

  7. Parallel LU-factorization algorithms for dense matrices

    SciTech Connect

    Oppe, T.C.; Kincaid, D.R.

    1987-05-01

    Several serial and parallel algorithms for computing the LU-factorization of a dense matrix are investigated. Numerical experiments and programming considerations to reduce bank conflicts on the Cray X-MP4 parallel computer are presented. Speedup factors are given for the parallel algorithms. 15 refs., 6 tabs.
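
    For reference, the serial elimination loop that such parallel algorithms reorganize can be sketched as below; this is a generic Doolittle factorization with partial pivoting, not the Cray-specific variants timed in the report.

```python
import numpy as np

def lu_partial_pivot(A):
    """Serial LU factorization with partial pivoting: A[piv] == L @ U.
    Parallel variants distribute the rank-1 trailing update across
    processors (and, on the Cray X-MP, arrange it to avoid bank conflicts)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))                 # pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        A[k+1:, k] /= A[k, k]                               # column of L
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])   # trailing update
    return piv, np.tril(A, -1) + np.eye(n), np.triu(A)

A = np.random.default_rng(1).random((5, 5))
piv, L, U = lu_partial_pivot(A)
print(np.allclose(A[piv], L @ U))                           # True
```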

  8. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
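
    As a generic illustration of an explicit high-order stencil (not the specific spectral-like family developed in the paper), the following computes a first derivative with the standard fourth-order central difference and verifies the expected accuracy:

```python
import numpy as np

def d1_fourth_order(f, h):
    """Fourth-order central difference for f'(x) on a uniform grid:
    (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)."""
    return (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)

x = np.linspace(0.0, 2 * np.pi, 201)
h = x[1] - x[0]
err = np.max(np.abs(d1_fourth_order(np.sin(x), h) - np.cos(x[2:-2])))
print(f"max error: {err:.2e}")   # O(h^4): halving h cuts the error ~16x
```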

  9. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  10. Numerical Based Linear Model for Dipole Magnets

    SciTech Connect

    Li,Y.; Krinsky, S.; Rehak, M.

    2009-05-04

    In this paper, we discuss an algorithm for constructing a numerical linear optics model for dipole magnets from a 3D field map. The difference between the numerical model and K. Brown's analytic approach is investigated and clarified. It was found that the optics distortion due to the dipoles' fringe focusing must be properly taken into account to accurately determine the chromaticities. In NSLS-II, there are normal dipoles with 35-mm gap and dipoles for infrared sources with 90-mm gap. This linear model of the dipole magnets is applied to the NSLS-II lattice design to match optics parameters between the DBA cells having dipoles with different gaps.

  11. Numerical Studies of Collisionless Current Layers

    NASA Technical Reports Server (NTRS)

    Quest, Kevin B.

    1993-01-01

    The purpose of this proposal was to investigate collisionless current layers using a variety of analytic and numerical tools. The first year of the contract was dedicated to analytical studies, to the porting and adaptation of codes being used in this study, and to the numerical simulation of collisionless current layers. The second year entailed the development of multi-dimensional hybrid algorithms as well as the re-examination of the problem of integro-differential equations that occur in the linear stage of plasma instabilities.

  12. User's guide for the frequency domain algorithms in the LIFE2 fatigue analysis code

    SciTech Connect

    Sutherland, H.J.; Linker, R.L.

    1993-10-01

    The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. The numerical formulation of the code uses a series of cycle count matrices to describe the cyclic stress states imposed upon the turbine. However, many structural analysis techniques yield frequency-domain stress spectra and a large body of experimental loads (stress) data is reported in the frequency domain. To permit the analysis of this class of data, a Fourier analysis is used to transform a frequency-domain spectrum to an equivalent time series suitable for rainflow counting by other modules in the code. This paper describes the algorithms incorporated into the code and their numerical implementation. Example problems are used to illustrate typical inputs and outputs.
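
    A common recipe for this kind of transformation, sketched below under our own assumptions (the details of LIFE2's implementation may differ), synthesizes a time series from a one-sided PSD by giving each frequency bin a sinusoid of matching variance and a random phase:

```python
import numpy as np

def psd_to_timeseries(freqs, psd, duration, rng):
    """Synthesize a zero-mean series whose one-sided PSD approximates `psd`
    (random-phase sum of sinusoids); output is suitable for cycle counting."""
    df = freqs[1] - freqs[0]
    amps = np.sqrt(2.0 * psd * df)             # per-bin sinusoid amplitudes
    phases = rng.uniform(0.0, 2 * np.pi, len(freqs))
    t = np.arange(0.0, duration, 1.0 / (4 * freqs[-1]))  # sample above Nyquist
    s = sum(a * np.cos(2 * np.pi * f * t + p)
            for a, f, p in zip(amps, freqs, phases))
    return t, s

rng = np.random.default_rng(2)
freqs = np.linspace(0.1, 5.0, 50)              # Hz
psd = 1.0 / (1.0 + freqs**2)                   # toy stress PSD
t, s = psd_to_timeseries(freqs, psd, 60.0, rng)
print(s.var(), np.trapz(psd, freqs))           # variances roughly agree
```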

  13. Generalization of the FDTD algorithm for simulations of hydrodynamic nonlinear Drude model

    SciTech Connect

    Liu Jinjie; Brio, Moysey; Zeng Yong; Zakharian, Armis R.; Hoyer, Walter; Koch, Stephan W.; Moloney, Jerome V.

    2010-08-20

    In this paper we present a numerical method for solving a three-dimensional cold-plasma system that describes electron gas dynamics driven by an external electromagnetic wave excitation. The nonlinear Drude dispersion model is derived from the cold-plasma fluid equations and is coupled to Maxwell's field equations. The Finite-Difference Time-Domain (FDTD) method is applied to solve Maxwell's equations in conjunction with a time-split semi-implicit numerical method for the nonlinear dispersion and a physics-based treatment of the discontinuity of the electric field component normal to the dielectric-metal interface. The application of the proposed algorithm is illustrated by modeling light pulse propagation and second-harmonic generation (SHG) in metallic metamaterials (MMs), showing good agreement between computed and published experimental results.
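
    For orientation, the leapfrog field updates that the paper couples to its nonlinear Drude model reduce, in one dimension and in vacuum, to the minimal Yee scheme below (normalized units, grid sizes, and the soft source are our own choices; the paper's three-dimensional semi-implicit dispersion step is far more involved):

```python
import numpy as np

# Minimal 1-D Yee FDTD loop in vacuum, normalized units.
nz, nt = 400, 600
ez = np.zeros(nz)                 # E at integer grid points
hy = np.zeros(nz - 1)             # H staggered between them
S = 0.5                           # Courant number c*dt/dz

for n in range(nt):
    hy += S * np.diff(ez)                             # update H from curl E
    ez[1:-1] += S * np.diff(hy)                       # update E from curl H
    ez[nz // 4] += np.exp(-((n - 60) / 15.0) ** 2)    # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```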

  14. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  15. Geometric direct search algorithms for image registration.

    PubMed

    Lee, Seok; Choi, Minseok; Kim, Hyungmin; Park, Frank Chongwoo

    2007-09-01

    A widely used approach to image registration involves finding the general linear transformation that maximizes the mutual information between two images, with the transformation being rigid-body [i.e., belonging to SE(3)] or volume-preserving [i.e., belonging to SL(3)]. In this paper, we present coordinate-invariant, geometric versions of the Nelder-Mead optimization algorithm on the groups SL(3), SE(3), and their various subgroups, that are applicable to a wide class of image registration problems. Because the algorithms respect the geometric structure of the underlying groups, they are numerically more stable, and exhibit better convergence properties than existing local coordinate-based algorithms. Experimental results demonstrate the improved convergence properties of our geometric algorithms. PMID:17784595

  16. Implementing Shor's algorithm on Josephson charge qubits

    SciTech Connect

    Vartiainen, Juha J.; Salomaa, Martti M.; Niskanen, Antti O.; Nakahara, Mikio

    2004-07-01

    We investigate the physical implementation of Shor's factorization algorithm on a Josephson charge qubit register. While we pursue a universal method to factor a composite integer of any size, the scheme is demonstrated for the number 21. We consider both the physical and algorithmic requirements for an optimal implementation when only a small number of qubits are available. These aspects of quantum computation are usually the topics of separate research communities; we present a unifying discussion of both of these fundamental features bridging Shor's algorithm to its physical realization using Josephson junction qubits. In order to meet the stringent requirements set by a short decoherence time, we accelerate the algorithm by decomposing the quantum circuit into tailored two- and three-qubit gates and we find their physical realizations through numerical optimization.

  17. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
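
    As a minimal sketch of the machinery involved, the toy genetic algorithm below handles the single-vehicle special case (a closed tour, i.e. a TSP) with order crossover and swap mutation; the coordinates, population size, and rates are illustrative choices of ours, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((15, 2))                  # toy customer coordinates

def route_len(perm):
    tour = np.r_[perm, perm[0]]            # close the tour
    return np.sum(np.linalg.norm(pts[tour[1:]] - pts[tour[:-1]], axis=1))

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = -np.ones(n, dtype=int)
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child[i:j]]
    child[np.r_[0:i, j:n]] = rest
    return child

pop = [rng.permutation(len(pts)) for _ in range(60)]
for gen in range(200):
    pop.sort(key=route_len)
    nxt = pop[:10]                                # elitism
    while len(nxt) < 60:
        a, b = rng.choice(30, 2, replace=False)   # parents from the best 30
        child = order_crossover(pop[a], pop[b])
        if rng.random() < 0.3:                    # swap mutation
            i, j = rng.choice(len(child), 2, replace=False)
            child[i], child[j] = child[j], child[i]
        nxt.append(child)
    pop = nxt
print("best route length:", route_len(min(pop, key=route_len)))
```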

  18. Global convergence analysis of a discrete time nonnegative ICA algorithm.

    PubMed

    Ye, Mao

    2006-01-01

    When the independent sources are known to be nonnegative and well-grounded, meaning that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. Generally, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by using the skew-symmetry property of this discrete-time "nonnegative PCA" algorithm, the global convergence of the discrete-time algorithm can be proven, provided that the learning rate satisfies a suitable condition. Simulation results are employed to further illustrate the advantages of this theory. PMID:16526495

  19. Machine learning algorithms for damage detection: Kernel-based approaches

    NASA Astrophysics Data System (ADS)

    Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.

    2016-02-01

    This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis, and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the demonstrated applicability of the proposed algorithms for damage detection, as well as the comparison of their classification performance with four other algorithms already considered reliable approaches in the literature. All of the proposed algorithms were found to have better classification performance than the previous ones.
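
    A rough sketch of the one-class support vector machine variant, using scikit-learn, is shown below; the synthetic feature vectors merely stand in for damage-sensitive features extracted from the accelerometer time-series, and the kernel parameters are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
# Stand-ins for feature vectors (e.g., AR coefficients) extracted from
# acceleration time-series; real data would come from the sensor array.
baseline = rng.normal(0.0, 1.0, (200, 8))        # undamaged condition
baseline_test = rng.normal(0.0, 1.0, (50, 8))    # unseen undamaged data
damaged = rng.normal(1.5, 1.0, (50, 8))          # shifted feature cloud

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(baseline)
pred = model.predict(np.vstack([baseline_test, damaged]))  # +1 / -1
print("detection rate:", (pred[50:] == -1).mean())
print("false alarms:  ", (pred[:50] == -1).mean())
```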

  20. Numerical experiments of fracture-induced velocity and attenuation anisotropy

    NASA Astrophysics Data System (ADS)

    Carcione, J. M.; Picotti, S.; Santos, J. E.

    2012-12-01

    Fractures are common in the Earth's crust due to different factors, for instance, tectonic stresses and natural or artificial hydraulic fracturing caused by a pressurized fluid. A dense set of fractures behaves as an effective long-wavelength anisotropic medium, leading to azimuthally varying velocity and attenuation of seismic waves. Effective in this case means that the predominant wavelength is much longer than the fracture spacing. Here, fractures are represented by surface discontinuities in the displacement u and particle velocity v, with the traction acting on the fracture surface given by κ[u] + η[v], where the brackets denote the discontinuity across the surface, κ is a fracture stiffness and η is a fracture viscosity. We consider an isotropic background medium in which a set of fractures is embedded. There exists an analytical solution (with five stiffness components) for equispaced plane fractures and a homogeneous background medium. The theory predicts that the equivalent medium is transversely isotropic and viscoelastic. We then perform harmonic numerical experiments to compute the stiffness components as a function of frequency, by using a Galerkin finite-element procedure, and obtain the complex velocities of the medium as a function of frequency and propagation direction, which provide the phase velocities, energy velocities (wavefronts) and quality factors. The algorithm is tested with the analytical solution and then used to obtain the stiffness components for general heterogeneous cases, where fractal variations of the fracture compliances and background stiffnesses are considered.
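
    Given the stiffness components, phase velocities follow from the Christoffel equation. The sketch below solves it for a purely elastic transversely isotropic medium in the x1-x3 plane, a real-valued simplification of the paper's viscoelastic (complex-stiffness) case; the stiffness and density values are illustrative only.

```python
import numpy as np

def ti_phase_velocities(C, rho, theta):
    """Christoffel equation for a TI medium (symmetry axis x3) and a
    propagation direction at angle theta from x3, in the x1-x3 plane.
    C = (C11, C33, C13, C44, C66); returns (qP, qSV, SH) velocities."""
    C11, C33, C13, C44, C66 = C
    n1, n3 = np.sin(theta), np.cos(theta)
    G11 = C11 * n1**2 + C44 * n3**2
    G33 = C44 * n1**2 + C33 * n3**2
    G13 = (C13 + C44) * n1 * n3
    G22 = C66 * n1**2 + C44 * n3**2          # SH decouples in this plane
    lam = np.linalg.eigvalsh([[G11, G13], [G13, G33]])   # ascending
    v_qsv, v_qp = np.sqrt(lam / rho)
    return v_qp, v_qsv, np.sqrt(G22 / rho)

C = (60e9, 40e9, 10e9, 15e9, 20e9)           # stiffnesses in Pa (toy values)
for deg in (0, 30, 60, 90):                  # density in kg/m^3
    print(deg, ti_phase_velocities(C, 2500.0, np.radians(deg)))
```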

  1. Numerical Simulation of a Convective Turbulence Encounter

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.; Bowles, Roland L.

    2002-01-01

    A numerical simulation of a convective turbulence event is investigated and compared with observational data. The numerical results show severe turbulence of similar scale and intensity to that encountered during the test flight. This turbulence is associated with buoyant plumes that penetrate the upper-level thunderstorm outflow. The simulated radar reflectivity compares well with that obtained from the aircraft's onboard radar. Resolved scales of motion as small as 50 m are needed in order to accurately diagnose aircraft normal load accelerations. Given this requirement, realistic turbulence fields may be created by merging subgrid-scales of turbulence to a convective-cloud simulation. A hazard algorithm for use with model data sets is demonstrated. The algorithm diagnoses the RMS normal loads from second moments of the vertical velocity field and is independent of aircraft motion.

  2. Numerical Modeling of Ocean Acoustic Wavefields

    NASA Astrophysics Data System (ADS)

    Tappert, Frederick

    1997-08-01

    The U.S. Navy requires real-time "acoustic performance prediction" models in order to optimize sonar tactics in naval combat situations. The need for numerical models that solve the acoustic wave equation in realistic ocean environments is being met by a collaborative effort between university researchers, industrial contractors, and navy laboratory workers. This paper discusses one particularly successful numerical model, called the PE/SSF model, that was originally developed by the author. Here PE stands for Parabolic Equation, a good approximation to the elliptic Helmholtz equation; and SSF stands for the Split-Step Fourier algorithm, a highly efficient marching algorithm for solving parabolic-type equations. These techniques are analyzed, and examples are displayed of ocean acoustic wavefields generated by the PE/SSF model.
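
    The split-step idea alternates a diffraction step in the wavenumber domain with a refraction phase screen in the spatial domain. A minimal narrow-angle sketch follows; boundary treatment, absorbing layers, and the wide-angle corrections of a production PE/SSF model are omitted, and all parameters are our own.

```python
import numpy as np

def ssf_step(psi, dz, dr, k0, n_profile):
    """One range step of the narrow-angle parabolic equation:
    diffraction applied in the kz domain, then a refraction phase screen."""
    kz = 2 * np.pi * np.fft.fftfreq(len(psi), d=dz)
    psi = np.fft.ifft(np.exp(-1j * kz**2 * dr / (2 * k0)) * np.fft.fft(psi))
    return psi * np.exp(1j * k0 * (n_profile - 1.0) * dr)

c0, f = 1500.0, 2000.0                       # sound speed (m/s), frequency (Hz)
k0 = 2 * np.pi * f / c0
z = np.linspace(0.0, 200.0, 1024)            # depth grid (m)
n = 1.0 + 1e-4 * (z - 100.0) / 100.0         # toy index-of-refraction profile
psi = np.exp(-((z - 100.0) / 5.0) ** 2).astype(complex)  # starting field
for _ in range(100):                         # march out 100 x 5 m in range
    psi = ssf_step(psi, z[1] - z[0], 5.0, k0, n)
print("field energy:", np.sum(np.abs(psi)**2))   # conserved (unitary steps)
```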

  3. Numerical Modeling of Shear Bands and Dynamic Fracture in Metals

    NASA Astrophysics Data System (ADS)

    McAuliffe, Colin James

    Understanding the failure of metals at high strain rate is of utmost importance in the design of a broad range of engineering systems. Numerical methods offer the ability to analyze such complex physics and aid the design of structural systems. The objective of this research is to develop reliable finite element models for high strain rate failure modelling, incorporating shear bands and fracture. Shear band modelling is explored first, and the subsequent developments are extended to incorporate fracture. Mesh sensitivity, the spurious dependence of failure on the discretization, is a well-known hurdle in achieving reliable numerical results for shear bands, fracture, or any other strain-softening model. Mesh sensitivity is overcome by regularization, and while the details of regularization techniques may differ, all are similar in that a length scale is introduced which serves as a localization limiter. This dissertation contains two main contributions, the first of which presents several developments in shear band modeling. The importance of using a monolithic nonlinear solver in combination with a PDE model accounting for thermal diffusion is demonstrated. In contrast, excluding one or both of these components leads to unreliable numerical results. The Pian-Sumihara stress interpolants are also employed in small and finite deformation and shown to significantly improve the computational cost of shear band modelling. This is partly because fewer unknowns result from the same mesh than with an irreducible discretization, and more significantly because the convergence of numerical results upon mesh refinement is improved drastically. This means coarser meshes are adequate to resolve shear bands, alleviating some of the notoriously significant computational cost of numerical modelling. Since extremely large deformations are present during shear banding, a mesh-to-mesh transfer algorithm is presented for the Pian-Sumihara element and used as

  4. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is given for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated

  5. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical, efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  6. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models

    PubMed Central

    Wise, S.M.; Lowengrub, J.S.; Cristini, V.

    2010-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  7. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models.

    PubMed

    Wise, S M; Lowengrub, J S; Cristini, V

    2011-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  8. Numerical simulation of droplet impact on interfaces

    NASA Astrophysics Data System (ADS)

    Kahouadji, Lyes; Che, Zhizhao; Matar, Omar; Shin, Seungwon; Chergui, Jalel; Juric, Damir

    2015-11-01

    Simulations of three-dimensional droplet impact on interfaces are carried out using BLUE, a massively-parallel code based on a hybrid Front-Tracking/Level-Set algorithm for Lagrangian tracking of arbitrarily deformable phase interfaces. High resolution numerical results show fine details and features of droplet ejection, crown formation and rim instability observed under similar experimental conditions. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  9. Numerical simulation of in situ bioremediation

    SciTech Connect

    Travis, B.J.

    1998-12-31

    Models that couple subsurface flow and transport with microbial processes are an important tool for assessing the effectiveness of bioremediation in field applications. A numerical algorithm is described that differs from previous in situ bioremediation models in that it includes: both vadose and groundwater zones, unsteady air and water flow, limited nutrients and airborne nutrients, toxicity, cometabolic kinetics, kinetic sorption, subgridscale averaging, pore clogging and protozoan grazing.

  10. Numerical Simulations of Ion Cloud Dynamics

    NASA Astrophysics Data System (ADS)

    Sillitoe, Nicolas; Hilico, Laurent

    We explain how to perform accurate numerical simulations of ion cloud dynamics by discussing the relevant orders of magnitude of the characteristic times and frequencies involved in the problem and the computer requirements with respect to the ion cloud size. We then discuss integration algorithms and the parallelization of the Coulomb force computation. We finally explain how to take into account collisions, cooling-laser interaction and chemical reactions in a Monte Carlo approach, and discuss how to use random number generators to that end.
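
    A minimal sketch of such an integration loop, assuming a harmonic pseudo-potential and normalized units of our own choosing, pairs a velocity-Verlet (leapfrog) stepper with a direct O(N^2) Coulomb sum, which is the part one would parallelize:

```python
import numpy as np

def coulomb_acc(pos, eps=1e-9):
    """Direct pairwise Coulomb accelerations (normalized charge and mass);
    this O(N^2) double loop is the natural target for parallelization."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - np.delete(pos, i, axis=0)
        r3 = (np.sum(d**2, axis=1) + eps) ** 1.5
        acc[i] = np.sum(d / r3[:, None], axis=0)
    return acc

rng = np.random.default_rng(5)
pos = rng.normal(0.0, 1.0, (20, 3))          # small toy ion cloud
vel = np.zeros_like(pos)
dt, omega2 = 1e-3, 1.0                       # step and trap stiffness (ours)
acc = coulomb_acc(pos) - omega2 * pos
for _ in range(2000):                        # velocity-Verlet (leapfrog) loop
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = coulomb_acc(pos) - omega2 * pos
    vel += 0.5 * dt * acc
print("rms cloud radius:", np.sqrt(np.mean(np.sum(pos**2, axis=1))))
```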

  11. Numerical linear algebra for reconstruction inverse problems

    NASA Astrophysics Data System (ADS)

    Nachaoui, Abdeljalil

    2004-01-01

    Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra which are relevant to the solution of this class of inverse problems. We motivate the use of our construction algorithm, discuss its implementation, and mention the use of preconditioned Krylov methods.

  12. Numerical Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Oyafuso, Fabiano; Bowen, R. Chris; Boykin, Timothy

    2003-01-01

    Nanoelectronic Modeling 3-D (NEMO 3-D) is a computer program for numerical modeling of the electronic structure properties of a semiconductor device that is embodied in a crystal containing as many as 16 million atoms in an arbitrary configuration and that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. The system of electrons in the device is represented by a sparse Hamiltonian matrix that contains hundreds of millions of terms. NEMO 3-D solves the matrix equation on a Beowulf-class cluster computer, by use of a parallel-processing matrix-vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. In a recent update of NEMO 3-D, a new strain treatment, parameterized for bulk material properties of GaAs and InAs, was developed for two tight-binding submodels. The utility of NEMO 3-D was demonstrated in an atomistic analysis of the effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In0.6Ga0.4As quantum dots.
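
    The eigensolver stage can be illustrated with a toy sparse Hamiltonian and SciPy's implicitly restarted Lanczos routine; the one-dimensional tight-binding chain below merely stands in for NEMO 3-D's multimillion-atom matrix, and the numerical values are not real material parameters.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 10000                                   # "atoms" in a 1-D chain
onsite = np.full(n, 2.0)                    # toy on-site energies
hopping = np.full(n - 1, -1.0)              # toy nearest-neighbor coupling
H = sp.diags([hopping, onsite, hopping], offsets=[-1, 0, 1], format="csr")

# Lanczos iteration for a few extreme eigenvalues of a sparse operator;
# only matrix-vector products with H are required, as in NEMO 3-D.
vals, vecs = eigsh(H, k=4, which="SA")
print("lowest eigenvalues:", vals)          # analytic: 2 - 2*cos(pi*k/(n+1))
```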

  13. New Algorithms for Large-scale 3D Radiation Transport

    NASA Astrophysics Data System (ADS)

    Lentz, Eric J.

    2009-05-01

    Radiation transport is critical not only for the analysis of astrophysical objects but also for the dynamical transport of energy within them. Increased fidelity and dimensionality in the other components of such models require a similar improvement in the radiation transport. Modern astrophysical simulations can be large enough that the values of a single variable for the entire computational domain cannot be stored on a single compute node. The natural solution is to decompose the physical domain into pieces, with each node responsible for a single sub-domain. Using localized plus "ghost" zone data works well for problems like explicit hydrodynamics or nuclear reaction networks, with modest impact from inter-process communication. Unfortunately, radiation transport is an inherently non-local process that couples the entire model domain together, and efficient algorithms are needed to conquer this problem. In this poster, I present the early development of a new parallel, 3-D transport code using ray tracing to formally solve the transport equation across numerically decomposed domains. The algorithm takes advantage of one-sided communication to develop a scalable, parallel formal solver. Other aspects and future directions of the parallel code development, such as scalability and the inclusion of scattering, will also be discussed.

  14. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any application based on exact histogram specification. Our algorithm relies on the ordering procedure based on the specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed-point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than the alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms its main competitors by far. PMID:25347881

  15. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
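
    The optimization step can be imitated in one dimension with plain subgradient descent on the same kind of composite objective (a simplification of the paper's subgradient projection method; the box-blur kernel, constants, and synthetic signal are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0     # sparse test signal
h = np.ones(7) / 7.0                              # symmetric box blur
y = np.convolve(x_true, h, mode="same") + 0.01 * rng.normal(size=n)

# Minimize ||h*x - y||^2 + lam*||x||_1 by subgradient descent with a
# diminishing step size (sign(x) is a subgradient of the l1 term).
x, lam, step = np.zeros(n), 0.02, 0.5
for k in range(1, 3001):
    resid = np.convolve(x, h, mode="same") - y
    grad = 2 * np.convolve(resid, h[::-1], mode="same") + lam * np.sign(x)
    x -= (step / np.sqrt(k)) * grad
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```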

  16. ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. II. The Magnetohydrodynamic Algorithms and Tests

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
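
    The divergence-free property of constrained transport is easy to demonstrate numerically: on a staggered grid, face-centered field components updated from a corner-centered electromotive force leave the cell-centered divergence unchanged to machine precision, whatever the EMF. A small two-dimensional sketch (grid sizes and the random EMF are illustrative):

```python
import numpy as np

nx, ny, dx, dy, dt = 32, 32, 1.0, 1.0, 0.1
rng = np.random.default_rng(6)
bx = rng.normal(size=(nx + 1, ny))       # Bx on x-faces
by = rng.normal(size=(nx, ny + 1))       # By on y-faces

def div_b():
    """Cell-centered divergence from the face-centered components."""
    return np.diff(bx, axis=0) / dx + np.diff(by, axis=1) / dy

d0 = div_b().copy()
for _ in range(100):
    ez = rng.normal(size=(nx + 1, ny + 1))      # EMF at cell corners
    bx -= dt * np.diff(ez, axis=1) / dy         # dBx/dt = -dEz/dy
    by += dt * np.diff(ez, axis=0) / dx         # dBy/dt = +dEz/dx
print("max change in div B:", np.abs(div_b() - d0).max())   # ~1e-14
```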

  17. Three dimensional measurement of micro-optical components using digital holography and pattern recognition

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Jeon, Sungbin; Cho, Janghyun; Lim, Geon; Park, No-Cheol; Park, Young-Pil

    2015-09-01

    This paper proposes a method for inspecting transparent micro-optical components that combines digital holography and pattern recognition. As many micro-optical components have array structures with numerous elements, the uniformity of each element is important. Consequently, an effective inspection requires simultaneous measurement of these elements. Pattern recognition is used to solve this issue and can be adopted effectively using the unique characteristic of digital holography that both amplitude and phase information on the object are obtained. To verify this approach, an experimental demonstration was performed with a micro-lens array using a circle-detection algorithm based on the Hough Transform. In the experiments, 30 micro-lenses were detected and measured simultaneously using the proposed inspection method.
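
    A sketch of the circle-detection stage using OpenCV's Hough transform follows; a synthetic image of rings stands in for the reconstructed holographic amplitude map, and the detector parameters are our own choices.

```python
import cv2
import numpy as np

# Synthetic stand-in for a reconstructed image of a micro-lens array.
img = np.zeros((480, 640), np.uint8)
for cx in range(80, 640, 120):
    cv2.circle(img, (cx, 240), 40, 255, 2)       # five lens outlines

blur = cv2.GaussianBlur(img, (9, 9), 2)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
                           param1=100, param2=30, minRadius=20, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"lens at ({x}, {y}), radius {r} px")
```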

  18. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  19. Numerical studies of the nonlinear properties of composites

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Stroud, D.

    1994-01-01

    Using both numerical and analytical techniques, we investigate various ways to enhance the cubic nonlinear susceptibility χe of a composite material. We start from the exact relation χe = Σ_i p_i χ_i <(E·E)^2>_{i,lin} / E_0^4, where χ_i and p_i are the cubic nonlinear susceptibility and volume fraction of the ith component, E_0 is the applied electric field, and <·>_{i,lin} denotes an average over the electric field in the ith component, calculated in the linear limit where χ_i = 0. In our numerical work, we represent the composite by a random resistor or impedance network, calculating the electric-field distributions by a generalized transfer-matrix algorithm. Under certain conditions, we find that χe is greatly enhanced near the percolation threshold. We also find a large enhancement for a linear fractal in a nonlinear host. In a random Drude metal-insulator composite, χe is hugely enhanced, especially near frequencies which correspond to the surface-plasmon resonance spectrum of the composite. At zero frequency, the random composite results are reasonably well described by a nonlinear effective-medium approximation. The finite-frequency enhancement shows very strong reproducible structure which is nearly undetectable in the linear response of the composite, and which may possibly be described by a generalized nonlinear effective-medium approximation. The fractal results agree qualitatively with a nonlinear differential effective-medium approximation. Finally, we consider a suspension of coated spheres embedded in a host. If the coating is nonlinear, we show that χe/χ_coat >> 1 near the surface-plasmon resonance frequency of the core particle.

  20. An algorithm for the empirical optimization of antenna arrays

    NASA Technical Reports Server (NTRS)

    Blank, S.

    1983-01-01

    A numerical technique is presented to optimize the performance of arbitrary antenna arrays under realistic conditions. An experimental-computational algorithm is formulated in which n-dimensional minimization methods are applied to measured data obtained from the antenna array. A numerical update formula is used to induce partial derivative information without requiring special perturbations of the array parameters. The algorithm provides a new design for the antenna array, and the method proceeds in an iterative fashion. Test case results are presented showing the effectiveness of the algorithm.