Science.gov

Sample records for algorithm components numerical

  1. A rescaling algorithm for the numerical solution to the porous medium equation in a two-component domain

    NASA Astrophysics Data System (ADS)

    Filo, Ján; Hundertmark-Zaušková, Anna

    2016-10-01

    The aim of this paper is to design a rescaling algorithm for the numerical solution to a system of two porous medium equations defined on two different components of the real line and connected by a nonlinear contact condition. The algorithm is based on the self-similarity of solutions on different scales, and it yields a space-time adaptive method that produces a more accurate numerical solution near the interface between the components while the number of grid points stays fixed.

  2. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  3. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  4. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  5. Algorithm Reveals Sinusoidal Component Of Noisy Signal

    NASA Technical Reports Server (NTRS)

    Kwok, Lloyd C.

    1991-01-01

    Algorithm performs simple statistical analysis of noisy signal to yield preliminary indication of whether or not signal contains sinusoidal component. Suitable for preprocessing or preliminary analysis of vibrations, fluctuations in pressure, and other signals that include large random components. Implemented on personal computer by easy-to-use program.
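
    The brief does not spell out the statistic it uses, so the following is only a rough stand-in under stated assumptions: a periodogram peak-to-median test that gives the same kind of preliminary yes/no indication for a sinusoid buried in noise. The threshold factor of 20 and the test signal are invented for the illustration.

    ```python
    import numpy as np

    def sinusoid_indicator(x, factor=20.0):
        # Compare the largest periodogram peak with the median noise floor;
        # one dominant peak suggests a sinusoidal component is present.
        p = np.abs(np.fft.rfft(x - x.mean()))**2
        return p.max() > factor * np.median(p)

    rng = np.random.default_rng(0)
    t = np.arange(2048) / 2048.0
    noise = rng.normal(size=t.size)
    print(sinusoid_indicator(noise))                         # expected: False
    print(sinusoid_indicator(np.sin(2*np.pi*50*t) + noise))  # expected: True
    ```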

  6. Multiresolution representation and numerical algorithms: A brief review

    NASA Technical Reports Server (NTRS)

    Harten, Amiram

    1994-01-01

    In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
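
    As a concrete illustration of eliminating small scale coefficients, here is a minimal one-level Haar decomposition; the transform and threshold are assumptions for the sketch, not Harten's specific multiresolution scheme. Zeroing details below the threshold bounds the reconstruction error by that same threshold.

    ```python
    import numpy as np

    def haar_compress(u, eps):
        # One-level Haar split into averages and details (scale coefficients),
        # zero the details below eps, then reconstruct the signal.
        avg = 0.5 * (u[0::2] + u[1::2])
        det = 0.5 * (u[0::2] - u[1::2])
        det[np.abs(det) < eps] = 0.0              # the compression step
        out = np.empty_like(u)
        out[0::2], out[1::2] = avg + det, avg - det
        return out, np.count_nonzero(det)

    u = np.sin(np.linspace(0.0, np.pi, 256))
    v, kept = haar_compress(u, eps=1e-2)
    print(kept, np.max(np.abs(u - v)))  # all details dropped; error stays below eps
    ```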

  7. Fast deterministic algorithm for EEE components classification

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L. A.; Antamoshkin, A. N.; Masich, I. S.

    2015-10-01

    Authors consider the problem of automatic classification of the electronic, electrical and electromechanical (EEE) components based on results of the test control. Electronic components of the same type used in a high-quality unit must be produced as a single production batch from a single batch of the raw materials. Data of the test control are used for splitting a shipped lot of the components into several classes representing the production batches. Methods such as k-means++ clustering or evolutionary algorithms combine local search and random search heuristics. The proposed fast algorithm returns a unique result for each data set. The result is comparatively precise. If the data processing is performed by the customer of the EEE components, this feature of the algorithm allows easy checking of the results by a producer or supplier.

  8. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented, experience-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  9. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework for bounding the probability that the accumulated errors of numerical algorithms ever exceed a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force an explicit statement of all hypotheses and prevent incorrect uses of theorems.
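
    To make the flavor of such bounds concrete, here is a small numeric sketch: Chebyshev's inequality (Markov's inequality applied to the squared error) for a sum of independent, uniformly distributed rounding errors. The operation count and error budget are invented illustration values, not figures from the report.

    ```python
    import math

    n = 10**6            # number of accumulated roundings (assumed)
    u = 2.0**-53         # double-precision unit roundoff
    var = n * u * u / 12.0        # variance of a sum of Uniform(-u/2, u/2) errors
    threshold = 2.0**-35          # error budget (assumed), below worst case n*u/2
    p_bound = var / threshold**2  # Chebyshev: P(|S| >= threshold) <= Var/threshold^2
    print(p_bound)  # ~1e-6, although worst-case analysis says the bound can be hit
    ```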

  10. Parallel processing of numerical transport algorithms

    SciTech Connect

    Wienke, B.R.; Hiromoto, R.E.

    1984-01-01

    The multigroup, discrete ordinates representation for the linear transport equation enjoys widespread computational use and popularity. Serial solution schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, we investigate the parallel structure and extension of a number of standard S_n approaches. Concurrent inner sweeps, coupled acceleration techniques, synchronized inner-outer loops, and chaotic iteration are described, and results of computations are contrasted. The multigroup representation and serial iteration methods are also detailed. The basic iterative S_n method lends itself to parallel tasking, portably affording an effective medium for performing transport calculations on future architectures. This analysis represents a first attempt to extend serial S_n algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. We find basic inner-outer and chaotic iteration strategies both easily support comparably high degrees of parallelism. Both accommodate parallel rebalance and diffusion acceleration and appear as robust and viable parallel techniques for S_n production work.

  11. The numerical simulation of accelerator components

    SciTech Connect

    Herrmannsfeldt, W.B.; Hanerfeld, H.

    1987-05-01

    The techniques of the numerical simulation of plasmas can be readily applied to problems in accelerator physics. Because the problems usually involve a single-component "plasma," and times that are, at most, a few plasma oscillation periods, it is frequently possible to make very good simulations with relatively modest computation resources. We will discuss the methods and illustrate them with several examples. One of the more powerful techniques for understanding the motion of charged particles is to view computer-generated motion pictures. We will show several short movie strips to illustrate the discussions. The examples will be drawn from the application areas of Heavy Ion Fusion, electron-positron linear colliders, and injectors for free-electron lasers. 13 refs., 10 figs., 2 tabs.

  12. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  13. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1982-01-01

    Numerical algorithms for large space structures were investigated with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented, and the theory is applied to second-order lambda matrices.

  14. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
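
    For readers who want to experiment, the mpmath library ships an implementation of PSLQ; the snippet below recovers the relation satisfied by powers of the golden ratio (phi^2 = phi + 1). It demonstrates only the algorithm's behavior, not the paper's own code.

    ```python
    from mpmath import mp, mpf, phi, pslq

    mp.dps = 30  # working precision in decimal digits
    # phi satisfies 1*1 + 1*phi - 1*phi^2 = 0, so PSLQ should return
    # the integer relation [1, 1, -1] (possibly up to an overall sign).
    print(pslq([mpf(1), phi, phi**2]))
    ```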

  15. Numerical comparison of Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1977-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
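
    A minimal sketch of the factorization at the heart of the U-D filter: writing a symmetric positive-definite covariance P as U D U^T with U unit upper triangular and D diagonal. This shows only the factorization, not Bierman's full measurement-update recursion; the example matrix is arbitrary.

    ```python
    import numpy as np

    def udu_factor(P):
        # Backward recursion producing P = U @ diag(d) @ U.T,
        # with U unit upper triangular and d the diagonal factors.
        n = P.shape[0]
        U, d = np.eye(n), np.zeros(n)
        for j in range(n - 1, -1, -1):
            d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:]**2)
            for i in range(j):
                U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
        return U, d

    P = np.array([[4.0, 2.0, 0.6],
                  [2.0, 2.0, 0.5],
                  [0.6, 0.5, 1.0]])
    U, d = udu_factor(P)
    print(np.allclose(U @ np.diag(d) @ U.T, P))  # True
    ```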

  16. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    SciTech Connect

    E.A. Belli; G.W. Hammett

    2004-12-17

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.

  17. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.

  18. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its search process has made it competitive with other optimization algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of ABC's local search process and of its bee-movement (solution improvement) equation still has some weaknesses. ABC is good at avoiding entrapment at local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).

  19. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions to numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced, and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  20. Efficient algorithm to compute mutually connected components in interdependent networks.

    PubMed

    Hwang, S; Choi, S; Lee, Deokjae; Kahng, B

    2015-02-01

    Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
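
    A brute-force version of this computation, roughly the O(N^2) baseline the authors compare against, can be sketched with networkx: repeatedly split node groups by the connected components of each layer until the partition stabilizes. The duplex-network setup and names below are illustrative assumptions, not the paper's implementation.

    ```python
    import networkx as nx

    def mutually_connected_components(g1, g2):
        # Refine the node partition by each layer's connected components
        # until it stops changing; the surviving groups are the MCCs.
        groups = [set(g1.nodes()) & set(g2.nodes())]
        while True:
            refined = []
            for s in groups:
                for c1 in nx.connected_components(g1.subgraph(s)):
                    refined.extend(set(c) for c in
                                   nx.connected_components(g2.subgraph(c1)))
            if {frozenset(c) for c in refined} == {frozenset(c) for c in groups}:
                return refined
            groups = refined

    g1 = nx.erdos_renyi_graph(200, 0.02, seed=1)
    g2 = nx.erdos_renyi_graph(200, 0.02, seed=2)
    print(max(len(c) for c in mutually_connected_components(g1, g2)))
    ```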

  1. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  2. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms that are inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this study.

  3. Mathematical model and numerical algorithm for aerodynamical flow

    NASA Astrophysics Data System (ADS)

    Shaydurov, V.; Shchepanovskaya, G.; Yakubovich, M.

    2016-10-01

    In the paper, a mathematical model and a numerical algorithm are proposed for modeling an air flow. The proposed model is based on the time-dependent Navier-Stokes equations for viscous heat-conducting gas. The energy equation and the state equations are modified to account for two kinds of "internal" energy. The first one is the usual translational and rotational energy of molecules which defines the thermodynamical temperature and the pressure. The second one is the subgrid energy of small turbulent eddies. A numerical algorithm is proposed for solving the formulated initial-boundary value problem as a combination of the semi-Lagrangian approximation for Lagrange transport derivatives and the conforming finite element method for other terms. A numerical example illustrates these approaches.

  4. Numerical algorithms for the atomistic dopant profiling of semiconductor materials

    NASA Astrophysics Data System (ADS)

    Aghaei Anvigh, Samira

    In this dissertation, we investigate the possibility of using scanning microscopy techniques such as scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM) for the "atomistic" dopant profiling of semiconductor materials. For this purpose, we first analyze the discrete effects of random dopant fluctuations (RDF) on SCM and SSRM measurements with nanoscale probes and show that RDF significantly affects the differential capacitance and spreading resistance of the SCM and SSRM measurements if the dimension of the probe is below 50 nm. Then, we develop a mathematical algorithm to compute the spatial coordinates of the ionized impurities in the depletion region using a set of scanning microscopy measurements. The proposed numerical algorithm is then applied to extract the (x, y, z) coordinates of ionized impurities in the depletion region in the case of a few semiconductor materials with different doping configurations. The numerical algorithm developed to solve the above inverse problem is based on the evaluation of doping sensitivity functions of the differential capacitance, which show how sensitive the differential capacitance is to doping variations at different locations. To develop the numerical algorithm we first express the doping sensitivity functions in terms of the Gâteaux derivative of the differential capacitance, use the Riesz representation theorem, and then apply a gradient optimization approach to compute the locations of the dopants. The algorithm is verified numerically using 2-D simulations, in which the C-V curves are measured at 3 different locations on the surface of the semiconductor. Although the cases studied in this dissertation are highly idealized and, in reality, the C-V measurements are subject to noise and other experimental errors, it is shown that if the differential capacitance is measured precisely, SCM measurements can be potentially used for the "atomistic" profiling of ionized impurities in doped semiconductors.

  5. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set by the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, and estimates of the errors resulting from the approximation of the game sets by polyhedra are given.

  6. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

    NASA Technical Reports Server (NTRS)

    Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

    2003-01-01

    Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.

  7. Two Strategies to Speed up Connected Component Labeling Algorithms

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji

    2005-11-13

    This paper presents two new strategies to speed up connected component labeling algorithms. The first strategy employs a decision tree to minimize the work performed in the scanning phase of connected component labeling algorithms. The second strategy uses a simplified union-find data structure to represent the equivalence information among the labels. For 8-connected components in a two-dimensional (2D) image, the first strategy reduces the number of neighboring pixels visited from 4 to 7/3 on average. In various tests, using a decision tree decreases the scanning time by a factor of about 2. The second strategy uses a compact representation of the union-find data structure. This strategy significantly speeds up the labeling algorithms. We prove analytically that a labeling algorithm with our simplified union-find structure has the same optimal theoretical time complexity as do the best labeling algorithms. By extensive experimental measurements, we confirm the expected performance characteristics of the new labeling algorithms and demonstrate that they are faster than other optimal labeling algorithms.
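
    A compact illustration of the union-find strategy (using simple 4-connectivity rather than the paper's 8-connectivity decision tree): two raster scans over the image, with an equivalence table resolved between the passes. The code is a sketch, not the authors' optimized implementation.

    ```python
    import numpy as np

    def label_components(img):
        parent = {}
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path halving
                a = parent[a]
            return a
        labels = np.zeros(img.shape, dtype=int)
        nxt = 1
        for i, j in np.argwhere(img):
            neighbors = [l for l in
                         (labels[i-1, j] if i > 0 else 0,
                          labels[i, j-1] if j > 0 else 0) if l]
            if not neighbors:
                parent[nxt] = nxt              # new provisional label
                labels[i, j] = nxt
                nxt += 1
            else:
                roots = [find(l) for l in neighbors]
                labels[i, j] = min(roots)
                for r in roots:                # record label equivalences
                    parent[r] = labels[i, j]
        for i, j in np.argwhere(labels):       # second pass: resolve labels
            labels[i, j] = find(labels[i, j])
        return labels

    img = np.array([[1, 1, 0, 1],
                    [0, 1, 0, 1],
                    [1, 1, 0, 0]], dtype=bool)
    print(label_components(img))
    ```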

  8. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development [1,2,3] has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  9. Independent component analysis based two-step phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Shou, Junwei; Lu, Xiaoxu; Yin, Zhenxing; Tian, Jindong; Li, Dong; Zhong, Liyun

    2016-10-01

    Based on independent component analysis (ICA), we achieve phase retrieval from two-frame phase-shifting interferograms with unknown phase shifts. First, we remove the background of each interferogram with a Gaussian high-pass filter. Second, the background-removed interferograms are decomposed into a group of mutually independent components by recombining the pixel positions of the interferograms. Third, the phase shifts and the measured phase can be retrieved with high accuracy from the ratio of the independent components. Compared with existing two-step phase retrieval algorithms, both simulation and experiment show that the proposed ICA-based two-step algorithm improves the accuracy of phase retrieval.

  10. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Turmon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
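
    The checksum idea is easy to show for matrix multiplication, following the classic Huang-Abraham ABFT construction; this sketch uses a plain absolute tolerance and omits the library's normalization methods, so it is illustrative only.

    ```python
    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        # Append a column-sum row to A and a row-sum column to B; the checksums
        # survive multiplication, so the product can verify itself.
        Ac = np.vstack([A, A.sum(axis=0)])
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
        C = Ac @ Br
        body = C[:-1, :-1]
        ok = (np.allclose(C[-1, :-1], body.sum(axis=0), atol=tol) and
              np.allclose(C[:-1, -1], body.sum(axis=1), atol=tol))
        return body, ok

    rng = np.random.default_rng(0)
    C, ok = abft_matmul(rng.normal(size=(50, 40)), rng.normal(size=(40, 30)))
    print(ok)  # True unless a fault corrupted the computation
    ```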

  11. CxCxC: compressed connected components labeling algorithm

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Dwivedi, Shekhar

    2007-03-01

    We propose Compressed Connected Components (CxCxC), a new fast algorithm for labeling connected components in binary images making use of compression. We break the given 3D image into non-overlapping 2x2x2 cubes of voxels (2x2 squares of pixels for 2D) and encode these binary values as the bits of a single decimal integer. We perform the connected component labeling on the resulting compressed data set. A recursive labeling approach using smart masks on the encoded decimal values is performed. The output is finally decompressed back to the original size by decimal-to-binary conversion of the cubes to retrieve the connected components in a lossless fashion. We demonstrate the efficacy of such encoding and labeling for large data sets (up to 1392 x 1040 for 2D and 512 x 512 x 336 for 3D). CxCxC reports a speed gain of 4x for 2D and 12x for 3D with memory savings of 75% for 2D and 88% for 3D over the conventional (recursive growing of component labels) connected components algorithm. We also compare our method with those of VTK and ITK and find that we outperform both with speed gains of 3x and 6x for 3D. These features make CxCxC highly suitable for medical imaging and multi-media applications where the size of data sets and the number of connected components can be very large.

  12. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  13. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

    SciTech Connect

    Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

    2003-12-15

    We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

  14. Rayleigh Wave Numerical Dispersion in a 3D Finite-Difference Algorithm

    NASA Astrophysics Data System (ADS)

    Preston, L. A.; Aldridge, D. F.

    2010-12-01

    A Rayleigh wave propagates laterally without dispersion in the vicinity of the plane stress-free surface of a homogeneous and isotropic elastic halfspace. The phase speed is independent of frequency and depends only on the Poisson ratio of the medium. However, after temporal and spatial discretization, a Rayleigh wave simulated by a 3D staggered-grid finite-difference (FD) seismic wave propagation algorithm suffers from frequency- and direction-dependent numerical dispersion. The magnitude of this dispersion depends critically on FD algorithm implementation details. Nevertheless, proper gridding can control numerical dispersion to within an acceptable level, leading to accurate Rayleigh wave simulations. Many investigators have derived dispersion relations appropriate for body wave propagation by various FD algorithms. However, the situation for surface waves is less well-studied. We have devised a numerical search procedure to estimate Rayleigh phase speed and group speed curves for 3D O(2,2) and O(2,4) staggered-grid FD algorithms. In contrast with the continuous time-space situation (where phase speed is obtained by extracting the appropriate root of the Rayleigh cubic), we cannot develop a closed-form mathematical formula governing the phase speed. Rather, we numerically seek the particular phase speed that leads to a solution of the discrete wave propagation equations, while holding medium properties, frequency, horizontal propagation direction, and gridding intervals fixed. Group speed is then obtained by numerically differentiating the phase speed with respect to frequency. The problem is formulated for an explicit stress-free surface positioned at two different levels within the staggered spatial grid. Additionally, an interesting variant involving zero-valued medium properties above the surface is addressed. We refer to the latter as an implicit free surface. Our preliminary conclusion is that an explicit free surface, implemented with O(4) spatial FD

  15. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association with numerical magnitude processing was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by additionally including a relevant arithmetical subskill, namely arithmetic fact knowledge, which is needed for complex calculation and is also known to depend on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586

  16. An Image Reconstruction Algorithm for Electrical Capacitance Tomography Based on Robust Principle Component Analysis

    PubMed Central

    Lei, Jing; Liu, Shi; Wang, Xueyao; Liu, Qibin

    2013-01-01

    Electrical capacitance tomography (ECT) attempts to reconstruct the permittivity distribution of the cross-section of measurement objects from the capacitance measurement data, in which reconstruction algorithms play a crucial role in real applications. Based on the robust principal component analysis (RPCA) method, a dynamic reconstruction model that utilizes the multiple measurement vectors is presented in this paper, in which the evolution process of a dynamic object is considered as a sequence of images with different temporal sparse deviations from a common background. An objective functional that simultaneously considers the temporal constraint and the spatial constraint is proposed, where the images are reconstructed by a batching pattern. An iteration scheme that integrates the advantages of the alternating direction iteration optimization (ADIO) method and the forward-backward splitting (FBS) technique is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm. PMID:23385418

  17. Numerical algorithm for solving mathematical programming problems with a smooth surface as a constraint

    NASA Astrophysics Data System (ADS)

    Chernyaev, Yu. A.

    2016-03-01

    A numerical algorithm for minimizing a convex function on a smooth surface is proposed. The algorithm is based on reducing the original problem to a sequence of convex programming problems. Necessary extremum conditions are examined, and the convergence of the algorithm is analyzed.

  18. Topics in Randomized Algorithms for Numerical Linear Algebra

    NASA Astrophysics Data System (ADS)

    Holodnak, John T.

    In this dissertation, we present results for three topics in randomized algorithms. Each topic is related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product. In addition, we show probabilistic bounds on the two-norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix. They are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
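
    A sketch of the outer-product sampling scheme from the first part, using norm-proportional sampling probabilities and the unbiased 1/(c p_k) scaling; the deterministic exactness conditions proved in the dissertation are not reproduced here.

    ```python
    import numpy as np

    def sampled_matmul(A, B, c, seed=0):
        # Approximate A @ B as a sum of c sampled outer products A[:, k] B[k, :],
        # drawn with probability p_k proportional to |A[:, k]| * |B[k, :]|.
        rng = np.random.default_rng(seed)
        w = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = w / w.sum()
        idx = rng.choice(A.shape[1], size=c, p=p)
        return sum(np.outer(A[:, k], B[k, :]) / (c * p[k]) for k in idx)

    rng = np.random.default_rng(1)
    A, B = rng.normal(size=(60, 500)), rng.normal(size=(500, 60))
    err = np.linalg.norm(sampled_matmul(A, B, 200) - A @ B) / np.linalg.norm(A @ B)
    print(err)  # relative (Frobenius) error of the sampled product
    ```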

  19. Analysis of the numerical effects of parallelism on a parallel genetic algorithm

    SciTech Connect

    Hart, W.E.; Belew, R.K.; Kohn, S.; Baden, S.

    1995-09-18

    This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments show that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Furthermore, we demonstrate that this improvement in performance is partly due to the fact that the numerical efficiency of the asynchronous genetic algorithm is better than that of the synchronous genetic algorithm. Our analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs, and we evaluate the claims made by several researchers that parallel GAs can have superlinear speedup.

  20. A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results

    NASA Astrophysics Data System (ADS)

    Carrano, Charles S.; Rino, Charles L.

    2016-06-01

    We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.

  1. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  2. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
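
    As a toy illustration of the kind of optimization involved, a flux-balance problem over a stoichiometric matrix can be posed as a linear program; the matrix, bounds, and objective below are hypothetical, not taken from the project.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Maximize the last flux subject to steady-state mass balance S v = 0
    # and simple flux bounds (a made-up two-metabolite, four-reaction network).
    S = np.array([[1.0, -1.0,  0.0,  0.0],
                  [0.0,  1.0, -1.0, -1.0]])
    c = np.array([0.0, 0.0, 0.0, -1.0])   # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0.0, 10.0)] * 4)
    print(res.x, -res.fun)                # optimal fluxes and objective value
    ```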

  3. Numerical Simulation of Cast Distortion in Gas Turbine Engine Components

    NASA Astrophysics Data System (ADS)

    Inozemtsev, A. A.; Dubrovskaya, A. S.; Dongauser, K. A.; Trufanov, N. A.

    2015-06-01

    In this paper the process of manufacturing multiple airfoil vanes through investment casting is considered. A mathematical model of the full contact problem is built to determine the stress-strain state in a casting during solidification. The analysis is carried out in a viscoelastoplastic formulation. Numerical simulation of the process is implemented with the ProCAST software package. The results of the simulation are compared with the real production process. Computer analysis is used to optimize the process parameters in order to eliminate wall-thickness variation defects in the casting.

  4. Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1976-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  5. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  6. Computational Fluid Dynamics. [numerical methods and algorithm development]

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  7. Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.

    ERIC Educational Resources Information Center

    Jacquot, Raymond G.; And Others

    1985-01-01

    Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
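
    The Gaver-Stehfest algorithm is short enough to state in full; below is a standard implementation checked against a known transform pair. The choice N = 12 is a common double-precision default, not something prescribed by the article.

    ```python
    import math

    def stehfest_inverse(F, t, N=12):
        # Gaver-Stehfest: f(t) ~ (ln 2 / t) * sum_{i=1}^{N} V_i * F(i ln 2 / t),
        # with the standard combinatorial weights V_i (N must be even).
        ln2 = math.log(2.0)
        total = 0.0
        for i in range(1, N + 1):
            V = 0.0
            for k in range((i + 1) // 2, min(i, N // 2) + 1):
                V += (k ** (N // 2) * math.factorial(2 * k) /
                      (math.factorial(N // 2 - k) * math.factorial(k) *
                       math.factorial(k - 1) * math.factorial(i - k) *
                       math.factorial(2 * k - i)))
            total += (-1) ** (N // 2 + i) * V * F(i * ln2 / t)
        return total * ln2 / t

    # Known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t); exp(-1) = 0.36788...
    print(stehfest_inverse(lambda s: 1.0 / (s + 1.0), 1.0))
    ```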

  8. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1987-01-01

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.

  9. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

    1988-01-01

    This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

  10. A Numerical Algorithm for Finding Solution of Cross-Coupled Algebraic Riccati Equations

    NASA Astrophysics Data System (ADS)

    Mukaidani, Hiroaki; Yamamoto, Seiji; Yamamoto, Toru

    In this letter, a computational approach for solving cross-coupled algebraic Riccati equations (CAREs) is investigated. The main purpose of this letter is to propose a new algorithm that combines Newton's method with a gradient-based iterative (GI) algorithm for solving CAREs. In particular, it is noteworthy that both quadratic convergence under an appropriate initial condition and a reduction in the dimensions of the matrix computations are achieved. A numerical example is provided to demonstrate the efficiency of this proposed algorithm.

  11. Fourier analysis of numerical algorithms for the Maxwell equations

    NASA Technical Reports Server (NTRS)

    Liu, Yen

    1993-01-01

    The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, grid spacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce less errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
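
    The type of calculation involved is easy to reproduce for a model problem. For the 1D advection equation u_t + c u_x = 0, substituting a Fourier mode into a centered difference stencil yields the numerical phase speed as a function of the nondimensional wavenumber kh; the sketch below compares second- and fourth-order centered schemes, a far simpler setting than the paper's multi-dimensional Maxwell analysis.

    ```python
    import numpy as np

    # Fourier symbols of centered differences for d/dx acting on exp(i*k*x):
    #   2nd order: i*sin(kh)/h       4th order: i*(8*sin(kh) - sin(2*kh))/(6h)
    # The ratio numerical/exact phase speed measures the dispersive error.
    kh = np.linspace(1e-3, np.pi, 500)
    ratio2 = np.sin(kh) / kh
    ratio4 = (8*np.sin(kh) - np.sin(2*kh)) / (6*kh)
    i = np.searchsorted(kh, np.pi/4)  # a mode resolved by 8 points per wavelength
    print(f"kh=pi/4: O(2) ratio {ratio2[i]:.4f}, O(4) ratio {ratio4[i]:.4f}")
    ```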

  12. An Algorithm for the Hierarchical Organization of Path Diagrams and Calculation of Components of Expected Covariance.

    ERIC Educational Resources Information Center

    Boker, Steven M.; McArdle, J. J.; Neale, Michael

    2002-01-01

    Presents an algorithm for the production of a graphical diagram from a matrix formula in such a way that its components are logically and hierarchically arranged. The algorithm, which relies on the matrix equations of J. McArdle and R. McDonald (1984), calculates the individual path components of expected covariance between variables and…

  13. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2001-12-14

    Recent progress in simulation methodologies and new, high-performance parallel architectures have made it possible to perform detailed simulations of multidimensional combustion phenomena using comprehensive kinetics mechanisms. However, as simulation complexity increases, it becomes increasingly difficult to extract detailed quantitative information about the flame from the numerical solution, particularly regarding the details of chemical processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of combustion phenomena. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint in which we follow the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one species to another with the subsequent transport given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion as a suitable random-walk process. Within this probabilistic framework, reactions can be viewed as a Markov process transforming molecule to molecule with given probabilities. In this paper, we discuss the numerical issues in more detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. We also illustrate how the method can be applied to studying the role of cyanochemistry on NOx production in a diffusion flame.
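
    A toy version of the diagnostic's probabilistic core, under simplified assumptions: tracked "atoms" advect deterministically, diffuse by a Gaussian random walk, and hop between two host species as a Markov chain. The velocities, diffusivities, and transition matrix below are made-up values, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      n_particles, n_steps, dt = 5000, 200, 1.0e-3
      vel = np.array([1.0, 0.2])            # advection velocity per species (assumed)
      dif = np.array([1.0e-2, 5.0e-3])      # diffusivity per species (assumed)
      P = np.array([[0.99, 0.01],           # per-step species transition matrix (assumed)
                    [0.02, 0.98]])

      x = np.zeros(n_particles)             # positions of the tracked atoms
      s = np.zeros(n_particles, dtype=int)  # index of each atom's host species

      for _ in range(n_steps):
          # advection (deterministic) plus diffusion (random walk)
          x += vel[s] * dt + np.sqrt(2.0 * dif[s] * dt) * rng.standard_normal(n_particles)
          # Markov switch of host species: next state is 1 with probability P[s, 1]
          s = (rng.random(n_particles) < P[s, 1]).astype(int)

      print("mean displacement of the atom ensemble:", x.mean())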

  14. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NOx production for a steady diffusion flame.

  15. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

    NASA Astrophysics Data System (ADS)

    Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

    2008-07-01

    The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of the shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Under this assumption, it is possible to account qualitatively well for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium sized aneurysms.

  16. Thermal contact algorithms in SIERRA Mechanics: mathematical background, numerical verification, and evaluation of performance.

    SciTech Connect

    Copps, Kevin D.; Carnes, Brian R.

    2008-04-01

    We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.

  17. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
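
    The BDF/Newton-Krylov machinery described above can be sketched at small scale with SciPy's Jacobian-free Newton-Krylov solver applied to a single backward-Euler (BDF1) step of a scalar reaction-diffusion equation; this is a simplified stand-in (no quaternion constraint, no preconditioner), not the phase-field system itself.

      import numpy as np
      from scipy.optimize import newton_krylov

      # one implicit step of u_t = u_xx + u - u^3 on (0, 1), u = 0 at both ends
      nx = 64
      dx = 1.0 / (nx - 1)
      dt = 1.0e-3
      u_old = np.sin(np.pi * np.linspace(0.0, 1.0, nx))

      def residual(u):
          lap = np.zeros_like(u)
          lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
          r = u - u_old - dt * (lap + u - u**3)
          r[0], r[-1] = u[0], u[-1]       # homogeneous Dirichlet boundaries
          return r

      # GMRES inside each Newton step, with the Jacobian applied matrix-free
      u_new = newton_krylov(residual, u_old, method="gmres", f_tol=1e-8)
      print("max change over one implicit step:", np.abs(u_new - u_old).max())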

  18. A New Efficient Algorithm for the All Sorting Reversals Problem with No Bad Components.

    PubMed

    Wang, Biing-Feng

    2016-01-01

    The problem of finding all reversals that take a permutation one step closer to a target permutation is called the all sorting reversals problem (the ASR problem). For this problem, Siepel had an O(n^3)-time algorithm. Most complications of his algorithm stem from some peculiar structures called bad components. Since bad components are very rare in both real and simulated data, it is practical to study the ASR problem with no bad components. For the ASR problem with no bad components, Swenson et al. gave an O(n^2)-time algorithm. Very recently, Swenson found that their algorithm does not always work. In this paper, a new algorithm is presented for the ASR problem with no bad components. The time complexity is O(n^2) in the worst case and is linear in the size of input and output in practice.

  19. A very fast algorithm for simultaneously performing connected-component labeling and euler number computing.

    PubMed

    He, Lifeng; Chao, Yuyan

    2015-09-01

    Labeling connected components and calculating the Euler number in a binary image are two fundamental processes for computer vision and pattern recognition. This paper presents an ingenious method for identifying a hole in a binary image in the first scan of connected-component labeling. Our algorithm can perform connected component labeling and Euler number computing simultaneously, and it can also calculate the connected component (object) number and the hole number efficiently. The additional cost for calculating the hole number is only O(H), where H is the hole number in the image. Our algorithm can be implemented almost in the same way as a conventional equivalent-label-set-based connected-component labeling algorithm. We prove the correctness of our algorithm and use experimental results for various kinds of images to demonstrate the power of our algorithm.
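
    For reference, a plain two-pass, equivalent-label-set connected-component labeling with union-find, of the conventional kind the paper builds on (the simultaneous Euler-number and hole counting is not reproduced here):

      import numpy as np

      def label_components(img):
          # two-pass 4-connected labeling; equivalences held in a union-find forest
          parent = [0]
          def find(a):
              while parent[a] != a:
                  parent[a] = parent[parent[a]]   # path halving
                  a = parent[a]
              return a
          labels = np.zeros(img.shape, dtype=int)
          h, w = img.shape
          nxt = 1
          for i in range(h):
              for j in range(w):
                  if not img[i, j]:
                      continue
                  up = labels[i - 1, j] if i else 0
                  left = labels[i, j - 1] if j else 0
                  if not up and not left:
                      parent.append(nxt)          # new provisional label
                      labels[i, j] = nxt
                      nxt += 1
                  elif up and left:
                      ru, rl = find(up), find(left)
                      parent[max(ru, rl)] = min(ru, rl)   # record equivalence
                      labels[i, j] = min(ru, rl)
                  else:
                      labels[i, j] = up or left
          final = {}                              # second pass: resolve labels
          for i in range(h):
              for j in range(w):
                  if labels[i, j]:
                      r = find(labels[i, j])
                      labels[i, j] = final.setdefault(r, len(final) + 1)
          return labels, len(final)

      img = np.array([[1, 1, 0, 1],
                      [0, 1, 0, 1],
                      [0, 0, 0, 1]])
      _, count = label_components(img)
      print("connected components:", count)       # 2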

  20. On the complexity of classical and quantum algorithms for numerical problems in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Bessen, Arvid J.

    Our understanding of complex quantum mechanical processes is limited by our inability to solve the equations that govern them except for simple cases. Numerical simulation of quantum systems appears to be our best option to understand, design and improve quantum systems. It turns out, however, that computational problems in quantum mechanics are notoriously difficult to treat numerically. The computational time that is required often scales exponentially with the size of the problem. One of the most radical approaches for treating quantum problems was proposed by Feynman in 1982 [46]: he suggested that quantum mechanics itself showed a promising way to simulate quantum physics. This idea, the so-called quantum computer, showed its potential convincingly in one important regime with the development of Shor's integer factorization algorithm, which improves exponentially on the best known classical algorithm. In this thesis we explore six different computational problems from quantum mechanics, study their computational complexity and try to find ways to remedy them. In the first problem we investigate the reasons behind the improved performance of Shor's and similar algorithms. We show that the key quantum part in Shor's algorithm, the quantum phase estimation algorithm, achieves its good performance through the use of power queries, and we give lower bounds for all phase estimation algorithms that use power queries that match the known upper bounds. Our research indicates that problems that allow the use of power queries will achieve similar exponential improvements over classical algorithms. We then apply our lower bound technique for power queries to the Sturm-Liouville eigenvalue problem and show matching lower bounds to the upper bounds of Papageorgiou and Wozniakowski [85]. It seems to be very difficult, though, to find nontrivial instances of the Sturm-Liouville problem for which power queries can be simulated efficiently. A quantum computer differs from a

  1. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to that of existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and
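
    The BLC-IRK integrator itself is beyond a short sketch, but the setting it targets can be illustrated with a fixed-step RK4 two-body propagator; the gravitational parameter and circular-orbit initial state below are standard illustrative values, and real propagators add perturbing forces and the frame transformations discussed above.

      import numpy as np

      MU = 398600.4418                    # Earth GM [km^3/s^2]

      def two_body(y):
          r = y[:3]
          return np.concatenate((y[3:], -MU * r / np.linalg.norm(r) ** 3))

      def rk4_step(y, h):
          k1 = two_body(y)
          k2 = two_body(y + 0.5 * h * k1)
          k3 = two_body(y + 0.5 * h * k2)
          k4 = two_body(y + h * k3)
          return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      # circular orbit at r = 7000 km (period ~5829 s), 10 s steps
      y = np.array([7000.0, 0.0, 0.0, 0.0, np.sqrt(MU / 7000.0), 0.0])
      for _ in range(583):                # ~one revolution
          y = rk4_step(y, 10.0)
      print("radius after ~one period [km]:", np.linalg.norm(y[:3]))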

  2. A stable and efficient numerical algorithm for unconfined aquifer analysis

    SciTech Connect

    Keating, Elizabeth; Zyvoloski, George

    2008-01-01

    The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well.

  3. A unified self-stabilizing neural network algorithm for principal and minor components extraction.

    PubMed

    Kong, Xiangyu; Hu, Changhua; Ma, Hongguang; Han, Chongzhao

    2012-02-01

    Recently, many unified learning algorithms have been developed for principal component analysis and minor component analysis. These unified algorithms can be used to extract principal components and, if altered simply by the sign, can also serve as minor component extractors. This is of practical significance in the implementation of algorithms. This paper proposes a unified self-stabilizing neural network learning algorithm for principal and minor components extraction, and studies the stability of the proposed unified algorithm via the fixed-point analysis method. The proposed unified self-stabilizing algorithm for principal and minor components extraction is extended for tracking the principal subspace (PS) and minor subspace (MS). The averaging differential equation and the energy function associated with the unified algorithm for tracking PS and MS are given. It is shown that the averaging differential equation will globally asymptotically converge to an invariance set, and the corresponding energy function exhibits a unique global minimum, attained if and only if its state matrices span the PS or MS of the autocorrelation matrix of a vector data stream. It is concluded that the proposed unified algorithm for tracking PS and MS can efficiently track an orthonormal basis of the PS or MS. Simulations are carried out to further illustrate the theoretical results achieved.
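
    As a minimal point of reference (not the paper's unified self-stabilizing algorithm), Oja's classical learning rule extracts the first principal component from a streaming data source; per the abstract's remark, closely related rules with an altered sign act as minor-component extractors. The covariance and learning rate below are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      C = np.array([[3.0, 1.0],
                    [1.0, 1.0]])                 # covariance of the data stream
      L = np.linalg.cholesky(C)
      w = rng.standard_normal(2)
      w /= np.linalg.norm(w)

      eta = 0.005                                # learning rate (assumed)
      for _ in range(50000):
          x = L @ rng.standard_normal(2)         # one streaming sample
          y = w @ x
          w += eta * y * (x - y * w)             # Oja's rule: Hebbian term + decay

      _, eigvec = np.linalg.eigh(C)
      print("|cosine with leading eigenvector|:", abs(w @ eigvec[:, -1]))  # ~1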

  4. Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms

    NASA Astrophysics Data System (ADS)

    Brunner, Christopher W.; Lu, Ping

    2012-09-01

    The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
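
    A toy illustration of the predictor-corrector idea under assumed dynamics: the "predictor" numerically integrates a trajectory with quadratic drag, and the "corrector" applies a secant update to the control parameter (here a launch angle) based on the predicted miss distance. The drag constant and target range are made up, and real entry guidance corrects bank angle against far richer dynamics.

      import numpy as np

      g, k = 9.81, 1.0e-3                 # gravity, quadratic drag constant (assumed)

      def predict_range(theta, v0=100.0, dt=1.0e-3):
          # predictor: integrate the trajectory to impact
          vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
          x = y = 0.0
          while y >= 0.0:
              v = np.hypot(vx, vy)
              vx -= k * v * vx * dt
              vy -= (g + k * v * vy) * dt
              x += vx * dt
              y += vy * dt
          return x

      target = 400.0                      # desired downrange [m] (assumed)
      th0, th1 = 0.15, 0.30               # two initial guesses
      m0 = predict_range(th0) - target
      for _ in range(20):                 # corrector: secant update on the miss
          m1 = predict_range(th1) - target
          if abs(m1) < 1e-3:
              break
          th0, th1, m0 = th1, th1 - m1 * (th1 - th0) / (m1 - m0), m1

      print("launch angle %.4f rad, miss %.2e m" % (th1, m1))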

  5. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
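
    The primal-dual active set strategy mentioned above can be made concrete on a one-dimensional obstacle problem, a simplified linear scalar analogue of the contact setting; the load, obstacle height, and parameter c below are assumed values.

      import numpy as np

      n = 99
      h = 1.0 / (n + 1)
      A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) / h**2
      f = np.full(n, -8.0)                # downward load (assumed)
      g = np.full(n, -0.1)                # flat obstacle below (assumed)
      u, lam, c = np.zeros(n), np.zeros(n), 1.0

      for _ in range(50):
          active = lam + c * (g - u) > 0          # predicted contact set
          K, rhs = A.copy(), f.copy()
          K[active] = 0.0
          K[active, active] = 1.0                 # enforce u = g on the active set
          rhs[active] = g[active]
          u_new = np.linalg.solve(K, rhs)
          lam_new = A @ u_new - f                 # multiplier from the PDE residual
          lam_new[~active] = 0.0
          if np.array_equal(active, lam_new + c * (g - u_new) > 0):
              u, lam = u_new, lam_new             # active set settled: converged
              break
          u, lam = u_new, lam_new

      print("nodes in contact:", int(active.sum()))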

  6. Numerical Algorithms for Acoustic Integrals - The Devil is in the Details

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1996-01-01

    The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.

  7. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding the statistical independence of signal mixtures, and it has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used to identify vibratory source signals in complex structures. In this study, a simple iterative variant of conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals with a valid ordering by iteratively reordering the extracted mixing matrix and reconstructing the finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to a real problem on a complex structure, an experiment has been carried out with a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
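
    A compact sketch of the reordering idea, using scikit-learn's FastICA in place of the authors' iterative ICA: estimated components are matched to reference signals (here the true sources stand in for signals measured on or near the sources) by the magnitude of their correlation coefficients, then sign-corrected. The two synthetic sources and the mixing matrix are illustrative.

      import numpy as np
      from sklearn.decomposition import FastICA

      t = np.linspace(0.0, 1.0, 2000)
      S = np.c_[np.sign(np.sin(14 * np.pi * t)),    # source 1: square wave
                np.sin(6 * np.pi * t)]              # source 2: sine
      X = S @ np.array([[1.0, 0.6],
                        [0.4, 1.0]]).T              # observed mixtures

      est = FastICA(n_components=2, random_state=0).fit_transform(X)

      # reorder and sign-flip components by correlation with the references
      corr = np.array([[np.corrcoef(est[:, i], S[:, j])[0, 1] for j in range(2)]
                       for i in range(2)])
      match = np.abs(corr).argmax(axis=0)           # best component per reference
      est = est[:, match] * np.sign(corr[match, np.arange(2)])
      print([round(np.corrcoef(est[:, j], S[:, j])[0, 1], 3) for j in range(2)])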

  8. A numerical solution algorithm and its application to studies of pulsed light fields propagation

    NASA Astrophysics Data System (ADS)

    Banakh, V. A.; Gerasimova, L. O.; Smalikho, I. N.; Falits, A. V.

    2016-08-01

    A new method for studying the propagation of pulsed laser beams in a turbulent atmosphere is proposed. The numerical simulation algorithm is based on solving the parabolic wave equation for the complex spectral amplitude of the wave field using the method of splitting into physical factors. Examples are shown of the algorithm applied to the propagation of pulsed Laguerre-Gaussian beams of femtosecond duration in a turbulent atmosphere.
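
    A minimal split-step (splitting into physical factors) solver for the parabolic wave equation in vacuum follows; atmospheric turbulence would enter as random phase screens applied between the diffraction steps shown, and the beam and grid parameters are illustrative.

      import numpy as np

      n, Lx = 256, 0.1                  # grid points per side, domain width [m]
      lam = 1.0e-6                      # wavelength [m]
      k = 2.0 * np.pi / lam
      w0 = 5.0e-3                       # initial beam radius [m]
      x = np.linspace(-Lx / 2, Lx / 2, n, endpoint=False)
      X, Y = np.meshgrid(x, x)
      E = np.exp(-(X**2 + Y**2) / w0**2)              # Gaussian beam at z = 0

      kx = 2.0 * np.pi * np.fft.fftfreq(n, Lx / n)
      KX, KY = np.meshgrid(kx, kx)
      dz, nz = 1.0, 50                                # 50 m in 1 m steps
      prop = np.exp(-1j * (KX**2 + KY**2) * dz / (2.0 * k))

      for _ in range(nz):
          E = np.fft.ifft2(np.fft.fft2(E) * prop)     # free-space diffraction step
          # turbulence would go here, e.g. E *= np.exp(1j * phase_screen)

      print("peak intensity ratio after 50 m:", np.abs(E).max() ** 2)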

  9. Efficient algorithms for numerical simulation of the motion of earth satellites

    NASA Astrophysics Data System (ADS)

    Bordovitsyna, T. V.; Bykova, L. E.; Kardash, A. V.; Fedyaev, Yu. A.; Sharkovskii, N. A.

    1992-08-01

    We briefly present results obtained during the development and evaluation of algorithms for the numerical prediction of the motion of Earth satellites (ESs) on computers of different power. High accuracy and efficiency in predicting ES motion are achieved by using higher-order numerical methods, transformations that regularize and stabilize the equations of motion, and a high-precision model of the forces acting on an ES. This approach enables us to construct efficient algorithms of the required accuracy, both for universal computers with a large RAM and for personal computers with very limited capacity.

  10. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  11. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  12. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.

  13. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle during the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them either for non-local, data-independent computations, solving lines in the next spatial direction, or for local, data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computation by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up obtained with the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
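
    For reference, here is the serial Thomas algorithm whose forward and backward recurrences create the pipeline idle time discussed above; the Poisson test problem is illustrative.

      import numpy as np

      def thomas(a, b, c, d):
          # a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)
          n = len(b)
          cp, dp = np.zeros(n), np.zeros(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                  # forward recurrence
              m = b[i] - a[i] * cp[i - 1]
              if i < n - 1:
                  cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.zeros(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):         # backward recurrence
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # test on -u'' = 1, u(0) = u(1) = 0, discretized at 9 interior points
      n, h = 9, 0.1
      x = thomas(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0),
                 np.full(n, h * h))
      print(x[4])   # 0.125, the exact midpoint value of x(1-x)/2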

  14. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    PubMed Central

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent Component Analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592

  15. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data.

    PubMed

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent Component Analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592

  16. Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms have been successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the result decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  17. An adaptive numeric predictor-corrector guidance algorithm for atmospheric entry vehicles

    NASA Astrophysics Data System (ADS)

    Spratlin, Kenneth Milton

    1987-05-01

    An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The motivation and design of the guidance algorithm are presented, and its performance is assessed for application to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle, and the achievable operational footprint for expected worst-case dispersions, are presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

  18. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.

  19. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  20. Dynamics analysis of electrodynamic satellite tethers. Equations of motion and numerical solution algorithms for the tether

    NASA Technical Reports Server (NTRS)

    Nacozy, P. E.

    1984-01-01

    The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit. The tether is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs. The resulting differential equations allow the introduction of approximations that can lead to analytical, approximate general solutions. The differential equations allow more dynamical insight of the motion.

  1. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    DOE PAGES

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  2. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    SciTech Connect

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.

  3. Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature

    SciTech Connect

    Bramble, J. H.; Goldstein, C. I.; Pasciak, J. E.

    1994-05-01

    The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing step per grid. These results hold even for applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
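
    A minimal V-cycle for the 1D Poisson problem with one weighted-Jacobi smoothing sweep before and after each coarse-grid correction, in the spirit of the "as few as one smoothing per grid" regime analyzed above (standard forms, not the quadrature-based forms of the paper):

      import numpy as np

      def smooth(u, f, h, w=2.0 / 3.0):
          # one weighted-Jacobi sweep for -u'' = f (RHS uses only old values)
          u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
          return r

      def restrict(r):
          # full weighting onto the coarse grid
          rc = np.zeros((len(r) - 1) // 2 + 1)
          rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
          return rc

      def prolong(e):
          # linear interpolation onto the fine grid
          u = np.zeros(2 * (len(e) - 1) + 1)
          u[::2] = e
          u[1::2] = 0.5 * (e[:-1] + e[1:])
          return u

      def vcycle(u, f, h):
          if len(u) == 3:                    # coarsest grid: solve the one unknown
              u[1] = 0.5 * h * h * f[1]
              return u
          u = smooth(u, f, h)                # pre-smoothing
          e = vcycle(np.zeros((len(u) - 1) // 2 + 1),
                     restrict(residual(u, f, h)), 2 * h)
          u += prolong(e)                    # coarse-grid correction
          return smooth(u, f, h)             # post-smoothing

      n = 64
      x = np.linspace(0.0, 1.0, n + 1)
      f = np.pi ** 2 * np.sin(np.pi * x)
      u = np.zeros(n + 1)
      for _ in range(10):                    # convergence rate independent of n
          u = vcycle(u, f, 1.0 / n)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())  # ~O(h^2)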

  4. Thickness determination in textile material design: dynamic modeling and numerical algorithms

    NASA Astrophysics Data System (ADS)

    Xu, Dinghua; Ge, Meibao

    2012-03-01

    Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics of the human body-clothing-environment system, which directly determine the heat-moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD as a function minimization problem. By means of the finite-difference method, the quasi-solution method, and direct search methods for one-dimensional minimization problems, we construct iterative algorithms for the approximate solution of the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms.

  5. Numerical advection algorithms and their role in atmospheric transport and chemistry models

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    1987-01-01

    During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation is highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive definite results.

  6. Numerical advection algorithms and their role in atmospheric transport and chemistry models

    NASA Astrophysics Data System (ADS)

    Rood, Richard B.

    1987-02-01

    During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation is highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive definite results.
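
    To make the review's positivity point concrete, a first-order upwind finite-volume step keeps a square pulse within its initial bounds (positive definite), at the price of strong numerical diffusion; the grid and CFL number are illustrative.

      import numpy as np

      n = 200
      dx, c = 1.0 / n, 1.0
      dt = 0.5 * dx / c                               # CFL number 0.5
      x = (np.arange(n) + 0.5) * dx
      q = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse

      for _ in range(200):                            # periodic advection
          flux = c * q                                # upwind flux for c > 0
          q = q - dt / dx * (flux - np.roll(flux, 1))

      print("min %.4f  max %.4f  (no new extrema)" % (q.min(), q.max()))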

  7. A universal framework for non-deteriorating time-domain numerical algorithms in Maxwell's electrodynamics

    NASA Astrophysics Data System (ADS)

    Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.

    2016-10-01

    We present the implementation of the Lacuna method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing the Huygens' principle and lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition. It eliminates the long-time deterioration regardless of its origin and how it manifests itself. In retrospect, the lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed development of an innovative numerical methodology for high-fidelity, error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetics codes.

  8. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To speed up the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show a significant speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
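
    A direct Gillespie-type sampler of the finite-system coalescence process for a constant kernel, whose trajectories are realizations of the master equation discussed above; the kernel, volume, and monodisperse initial condition are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      K, V = 1.0, 1.0                    # constant kernel, system volume (assumed)
      masses = [1] * 100                 # monodisperse initial condition
      t, t_end = 0.0, 0.1

      while len(masses) > 1:
          n = len(masses)
          total_rate = K / V * n * (n - 1) / 2.0   # each pair coalesces at rate K/V
          t += rng.exponential(1.0 / total_rate)
          if t > t_end:
              break
          i, j = rng.choice(n, size=2, replace=False)
          masses[i] += masses[j]         # merge the chosen pair
          masses.pop(j)

      # mean-field check: N(t) = N0/(1 + K N0 t/(2V)) = 100/6 ~ 17 at t = 0.1
      print("particles remaining:", len(masses))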

  9. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  10. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  11. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    The genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) correction models. Applying a genetic algorithm to the selection of characteristic bands reaches the optimal solution more rapidly, effectively improves measurement accuracy, and reduces the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to the nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established to assess the feasibility of optimizing wave bands with the genetic algorithm, and model robustness was evaluated. The genetic algorithm selected 12 characteristic bands. Using the reflectance values of corn seedling component information at the characteristic wavelengths corresponding to these 12 bands as variables, a PLS model for the SPAD values of the corn leaves was established, yielding r = 0.7825. These results were better than those of the PLS models built on the full spectrum and on the experience-based bands. The results suggest that the genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.

  12. PEGAS: Hydrodynamical code for numerical simulation of the gas components of interacting galaxies

    NASA Astrophysics Data System (ADS)

    Kulikov, Igor

    A new hydrodynamical code for the numerical simulation of gravitational gas dynamics is described in the paper. The code is based on the fluid-in-cell method with a Godunov-type scheme at the Eulerian stage. The numerical method was adapted for GPU-based supercomputers. The performance of the code is demonstrated by simulating the collision of the gas components of two similar disc galaxies during a central collision of the galaxies in the polar direction.

  13. Simulation of ammonium and chromium transport in porous media using coupling scheme of a numerical algorithm and a stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Schüttrumpf, Holger; Köngeter, Jürgen; Becker, Torsten; Palani, Sundarambal

    2009-01-01

    The migration of the species of chromium and ammonium in groundwater and their effective remediation depend on the various hydro-geological characteristics of the system. Computational modeling of reactive transport problems is one of the preferred tools for field engineers in groundwater studies to make decisions on pollution abatement. Analytical models have low computational demand but are less modular, making them difficult to modify when formulating different reactive systems. Numerical models provide more detailed information at higher computational demand. Coupling linear partial differential equations (PDEs) for the transport step with a non-linear system of ordinary differential equations (ODEs) for the reactive step is the usual mode of solving a kinetically controlled reactive transport equation. This assumption is not appropriate for a system with low concentrations of species such as chromium. Such reaction systems can be simulated using a stochastic algorithm. In this paper, a finite difference scheme coupled with a stochastic algorithm for the simulation of the transport of ammonium and chromium in subsurface media is detailed.

  14. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests": on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper, it is briefly analyzed how (and why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into a PCA method.

  15. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests": on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper, it is briefly analyzed how (and why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into a PCA method. PMID:15593379

  16. Cimlib: A Fully Parallel Application For Numerical Simulations Based On Components Assembly

    NASA Astrophysics Data System (ADS)

    Digonnet, Hugues; Silva, Luisa; Coupez, Thierry

    2007-05-01

    This paper presents CIMLIB and its two main characteristics: an object-oriented design and a fully parallel code. CIMLIB aims at providing a set of components that can be assembled to build numerical simulations of a given process. We describe two components: one handles the complex task of parallel remeshing; the other focuses on finite element modeling. In a second part, we present parallel performance results and an example of a very large simulation (over a mesh of 25 million nodes) that begins with mesh generation and ends with writing the result files, all done using 88 processors.

  17. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current deposition algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  18. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for Blind Source Separation (BSS). BSS has become prevalent in several fields, including array processing, communications, medical signal processing, speech processing, wireless communication, audio and acoustics, and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals, both in software and in a hardware implementation on a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm, defined as the one requiring the least complexity and fewest resources while effectively separating the mixed sources, was the EASI algorithm. The EASI ICA was implemented on an FPGA to analyze its performance in real time.
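
    Since the EASI rule is central to the comparison, here is a minimal Python sketch of the serial EASI update in the Cardoso-Laheld form. The cubic nonlinearity, step size, and toy two-source mixture are illustrative assumptions, not the paper's FPGA design.

      import numpy as np

      def easi_step(W, x, lam=0.001):
          """One serial EASI update. W: (n, n) separating matrix; x: (n,) sample."""
          y = W @ x
          g = y ** 3    # cubic nonlinearity; suits sub-Gaussian sources
          H = np.outer(y, y) - np.eye(len(y)) + np.outer(g, y) - np.outer(y, g)
          return W - lam * H @ W

      # Toy demo: unmix two zero-mean, unit-variance sub-Gaussian sources.
      rng = np.random.default_rng(1)
      T = 50000
      S = np.vstack([np.sign(rng.normal(size=T)),                      # binary
                     rng.uniform(-np.sqrt(3), np.sqrt(3), size=T)])    # uniform
      A = np.array([[1.0, 0.6], [0.4, 1.0]])                           # mixing matrix
      X = A @ S
      W = np.eye(2)
      for t in range(T):
          W = easi_step(W, X[:, t])
      # W @ A should now be close to a scaled permutation matrix.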

  19. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    NASA Astrophysics Data System (ADS)

    Nikolić, Zoran; Nguyen, Ha Thai; Frantz, Gene

    2007-12-01

    Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented in C/C++ using floating-point arithmetic. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce advanced code optimization and an implementation based on DSP-specific, fixed-point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
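
    As a flavor of the float-to-fixed conversion the paper analyzes, a small Python sketch of Q15 arithmetic; the format choice and saturation policy are generic illustrations, not the paper's code generator.

      import numpy as np

      Q = 15                       # Q15: 1 sign bit, 15 fractional bits
      SCALE = 1 << Q

      def to_q15(x):
          """Quantize floats in [-1, 1) to Q15 integers with saturation."""
          return np.clip(np.round(np.asarray(x) * SCALE), -SCALE, SCALE - 1).astype(np.int32)

      def q15_dot(a, b):
          """Fixed-point dot product: 64-bit accumulation, then rescale to Q15."""
          acc = np.sum(a.astype(np.int64) * b.astype(np.int64))  # Q30 accumulator
          return int(acc >> Q)                                   # back to Q15

      x = np.array([0.25, -0.5, 0.125])
      y = np.array([0.5, 0.25, -0.75])
      print(q15_dot(to_q15(x), to_q15(y)) / SCALE)   # -0.09375 (matches float)
      print(float(x @ y))                            # reference result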

  20. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima and adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are used to compare QMBA with other bat algorithm variants for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  1. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima and adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are used to compare QMBA with other bat algorithm variants for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424
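
    A rough Python sketch of the kind of quantum-behaved position update the abstract describes, borrowed from the QPSO family; the attractor choice and the early/late switching schedule are guesses for illustration, not the authors' exact equations.

      import numpy as np

      rng = np.random.default_rng(2)

      def quantum_update(x, p, mbest, beta=0.75):
          """Quantum-behaved position update (QPSO-style; assumed form).
          x: current position, p: local attractor, mbest: mean best position."""
          u = rng.uniform(1e-12, 1.0, size=x.shape)
          sign = np.where(rng.random(size=x.shape) < 0.5, 1.0, -1.0)
          return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

      # Toy use on the sphere function f(x) = sum(x**2).
      f = lambda x: float(np.sum(x ** 2))
      pop = rng.normal(size=(20, 5))
      best = pop[np.argmin([f(x) for x in pop])]   # current optimal solution
      mbest = pop.mean(axis=0)                     # mean best position
      pop = np.array([quantum_update(x, best, mbest) for x in pop])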

  2. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  3. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  4. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    PubMed

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  5. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction.

    PubMed

    Krasnopolsky, Vladimir M; Fox-Rabinovitz, Michael S

    2006-03-01

    A new practical application of neural network (NN) techniques to environmental numerical modeling has been developed. Namely, a new type of numerical model, a complex hybrid environmental model based on a synergetic combination of deterministic and machine learning model components, has been introduced. Conceptual and practical possibilities of developing hybrid models are discussed in this paper for applications to climate modeling and weather prediction. The approach presented here uses NNs as a statistical or machine learning technique to develop highly accurate and fast emulations of time-consuming model physics components (model physics parameterizations). The NN emulations of the most time-consuming model physics components, short- and long-wave radiation parameterizations or full model radiation, presented in this paper are combined with the remaining deterministic components (like model dynamics) of the original complex environmental model, a general circulation model or global climate model (GCM), to constitute a hybrid GCM (HGCM). The parallel GCM and HGCM simulations produce very similar results, but the HGCM is significantly faster. The speed-up of model calculations opens the opportunity for model improvement. Examples of developed HGCMs illustrate the feasibility and efficiency of the new approach for modeling complex multidimensional interdisciplinary systems.
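
    The emulation workflow is simple to sketch: fit a cheap regressor offline to input-output pairs of an expensive physics component, then call the surrogate inside the model loop. In the Python sketch below the toy target merely stands in for a radiation parameterization; nothing here is the actual GCM code.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def expensive_physics(X):
          """Stand-in for a costly parameterization (toy nonlinear map)."""
          return np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2) + 0.5 * X[:, 2]

      rng = np.random.default_rng(3)
      X_train = rng.uniform(-2, 2, size=(20000, 3))
      y_train = expensive_physics(X_train)

      emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300,
                              random_state=0)
      emulator.fit(X_train, y_train)       # offline training

      X_new = rng.uniform(-2, 2, size=(5, 3))
      print(emulator.predict(X_new))       # fast surrogate call in the model loop
      print(expensive_physics(X_new))      # reference values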

  6. New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms

    PubMed Central

    Retsky, Michael

    2009-01-01

    A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287

  7. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  8. Two-dimensional atmospheric transport and chemistry model: numerical experiments with a new advection algorithm.

    PubMed

    Shia, R L; Ha, Y L; Wen, J S; Yung, Y L

    1990-05-20

    Extensive testing of the advective scheme, proposed by Prather (1986), has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. We generalize the original scheme to include higher-order moments. In addition, we show how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
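
    For context on the "essentially no numerical diffusion" claim, the short Python demo below shows the smearing a first-order upwind scheme produces on a square pulse; moment-conserving schemes such as Prather's are designed to avoid exactly this. (A textbook illustration, not an implementation of the Prather scheme.)

      import numpy as np

      # Advect a square pulse once around a periodic domain with first-order upwind.
      n, c = 200, 0.5                      # cells, Courant number (u*dt/dx)
      q = np.zeros(n); q[80:120] = 1.0     # initial square pulse
      steps = int(n / c)                   # one full revolution
      for _ in range(steps):
          q = q - c * (q - np.roll(q, 1))  # upwind update for u > 0
      # The exact solution after one revolution equals the initial pulse;
      # the upwind result is visibly smeared, so max(q) falls below 1.
      print(q.max())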

  9. Using the robust principal component analysis algorithm to remove RF spike artifacts from MR images

    PubMed Central

    Atkinson, David; Nagy, Zoltan; Chan, Rachel W.; Josephs, Oliver; Lythgoe, Mark F.; Ordidge, Roger J.; Thomas, David L.

    2015-01-01

    Purpose: Brief bursts of RF noise during MR data acquisition ("k-space spikes") cause disruptive image artifacts, manifesting as stripes overlaid on the image. RF noise is often related to hardware problems, including vibrations during gradient-heavy sequences, such as diffusion-weighted imaging. In this study, we present an application of the Robust Principal Component Analysis (RPCA) algorithm to remove spike noise from k-space. Methods: Corrupted k-space matrices were decomposed into their low-rank and sparse components using the RPCA algorithm, such that spikes were contained within the sparse component and artifact-free k-space data remained in the low-rank component. Automated center refilling was applied to keep the peaked central cluster of k-space from misclassification in the sparse component. Results: This algorithm was demonstrated to effectively remove k-space spikes from four data types under conditions generating spikes: (i) mouse heart T1 mapping, (ii) mouse heart cine imaging, (iii) human kidney diffusion tensor imaging (DTI) data, and (iv) human brain DTI data. Myocardial T1 values changed by 86.1 ± 171 ms following despiking, and fractional anisotropy values were recovered following despiking of DTI data. Conclusion: The RPCA despiking algorithm will be a valuable postprocessing method for retrospectively removing stripe artifacts without affecting the underlying signal of interest. Magn Reson Med 75:2517-2525, 2016. PMID:26193125
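
    A compact Python sketch of generic RPCA via the inexact augmented Lagrangian method (principal component pursuit): the matrix M is split into low-rank L plus sparse S, which is the despiking idea in miniature. This is a real-valued textbook version; complex k-space data and the paper's center-refilling step are not handled here.

      import numpy as np

      def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
          """Decompose M into low-rank L plus sparse S (generic sketch)."""
          m, n = M.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))        # standard PCP weight
          mu = mu or 0.25 * m * n / np.abs(M).sum()    # common step-size choice
          S = np.zeros_like(M); Y = np.zeros_like(M)
          shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
          for _ in range(max_iter):
              U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
              L = (U * shrink(s, 1.0 / mu)) @ Vt       # singular value thresholding
              S = shrink(M - L + Y / mu, lam / mu)     # soft-threshold the residual
              Y += mu * (M - L - S)                    # dual update
              if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
                  break
          return L, S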

  10. Determination of the stress components of an array of piezoelectric sensors: a numerical study

    NASA Astrophysics Data System (ADS)

    Chiu, W. K.; Heller, M.; Jones, R.

    1997-04-01

    The increasing emphasis on intelligent material systems and structures has resulted in a significant research effort in the areas of embedded and bonded sensors and actuators. Piezoelectric film is one of the many sensing materials available. The piezoelectric sensor output is proportional to changes in surface displacement and can be used to interpret variations in structural and material properties, e.g., the compliance of the structure. The concept of using an array of piezo-sensors to obtain information about the normal stress field in an adhesively bonded joint was described by Dillard and co-workers in 1988. This paper extends that idea to the use of an array of piezoelectric sensors for determining the in-plane stress field in a structure. To achieve this, a numerical scheme is required to decouple the piezoelectric sensor signals into their individual stress components. The aim of this paper is to discuss the development of a numerical procedure for the determination of stress components in a structure given a set of signals from an array of piezoelectric thin-film sensors. This set of piezoelectric signals is generated numerically by combining the piezoelectric response equation with a known stress field.
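
    The decoupling step can be pictured as inverting a linear sensor model: if each sensor output is a known linear combination of the local stress components, stacking several sensors yields an overdetermined system solvable by least squares. The sensitivity matrix and stress values in this Python sketch are entirely made up for illustration.

      import numpy as np

      # v = G @ sigma: each row of G maps the stress components
      # (sigma_xx, sigma_yy, tau_xy) to one sensor's output (made-up values).
      G = np.array([[1.0, 0.3, 0.0],
                    [0.3, 1.0, 0.0],
                    [0.2, 0.2, 0.8],
                    [0.9, 0.1, 0.4]])               # 4 sensors, 3 stress components
      sigma_true = np.array([120.0, -40.0, 15.0])   # toy stresses (MPa)
      v = G @ sigma_true + np.random.default_rng(4).normal(0, 0.5, size=4)

      sigma_est, *_ = np.linalg.lstsq(G, v, rcond=None)   # decoupled stresses
      print(sigma_est)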

  11. Thermodynamically Consistent Physical Formulation and an Efficient Numerical Algorithm for Incompressible N-Phase Flows

    NASA Astrophysics Data System (ADS)

    Dong, Suchuan

    2015-11-01

    This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservations of mass/momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm has overcome the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.

  12. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis.

    PubMed

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  13. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  14. A Novel Algorithm for Independent Component Analysis with Reference and Methods for Its Applications

    PubMed Central

    Mi, Jian-Xun

    2014-01-01

    This paper presents a stable and fast algorithm for independent component analysis with reference (ICA-R). This is a technique for incorporating available reference signals into the ICA contrast function so as to form an augmented Lagrangian function under the framework of constrained ICA (cICA). The previous ICA-R algorithm was constructed by solving the optimization problem via a Newton-like learning style. Unfortunately, slow convergence and potential misconvergence limit the capability of ICA-R. This paper first investigates the flaws of the previous algorithm and then introduces a new stable algorithm with a faster convergence speed. There are two other highlights in this paper: first, new approaches, including a reference deflation technique and a direct way of obtaining references, are introduced to facilitate the application of ICA-R; second, a method is proposed in which the new ICA-R is used to recover the complete set of underlying sources, with new advantages compared with other classical ICA methods. Finally, experiments on both synthetic and real-world data verify the better performance of the new algorithm over both the previous ICA-R and other well-known methods. PMID:24826986

  15. Power optimization of digital baseband WCDMA receiver components on algorithmic and architectural level

    NASA Astrophysics Data System (ADS)

    Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.

    2008-05-01

    High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver that has been optimized for power consumption at the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant performance increase for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models with respect to their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters such as filter size and oversampling ratio to minimize the power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40%.

  16. Numerical study of 1-D, 3-vector component, thermally-conductive MHD solar wind

    NASA Technical Reports Server (NTRS)

    Han, S.; Wu, S. T.; Dryer, M.

    1993-01-01

    In the present study, transient, 1-dimensional, 3-vector component MHD equations are used to simulate steady and unsteady, thermally conductive MHD solar wind expansions between the solar surface and 1 AU (astronomical unit). A variant of the SIMPLE numerical method was used to integrate the equations. The steady state solar wind properties exhibit qualitatively similar behavior to the known Weber-Davis solutions. Generation of an Alfven shock, in addition to the slow and fast MHD shocks, was attempted through boundary perturbations at the solar surface. Property changes through the disturbance were positively correlated with the fast and slow MHD shocks. An Alfven shock, however, did not appear in these simulations.

  17. Plaque components affect wall stress in stented human carotid artery: A numerical study

    NASA Astrophysics Data System (ADS)

    Fan, Zhen-Min; Liu, Xiao; Du, Cheng-Fei; Sun, An-Qiang; Zhang, Nan; Fan, Zhan-Ming; Fan, Yu-Bo; Deng, Xiao-Yan

    2016-09-01

    Carotid artery stenting presents challenges of in-stent restenosis and late thrombosis, which are caused primarily by alterations in the mechanical environment of the artery after stent implantation. The present study constructed patient-specific carotid arterial bifurcation models with lipid pools and calcified components based on magnetic resonance imaging. We numerically analyzed the effects of multicomponent plaques on the distributions of von Mises stresses (VMSs) in the patient-specific models after stenting. The results showed that when a stent was deployed, the large soft lipid pool in atherosclerotic plaques cushioned the host artery and reduced the stress within the arterial wall; however, this resulted in a sharp increase of VMS in the fibrous cap. When compared with the lipid pool, the presence of the calcified components led to slightly increased stresses on the luminal surface. However, when a calcification was located close to the luminal surface of the host artery and the stenosis, the local VMS was elevated. Overall, compared with calcified components, large lipid pools severely damaged the host artery after stenting. Furthermore, damage due to the calcified component may depend on location.

  18. Diabetic retinopathy: a quadtree based blood vessel detection algorithm using RGB components in fundus images.

    PubMed

    Reza, Ahmed Wasif; Eswaran, C; Hati, Subhas

    2008-04-01

    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post-filtration of detected edges, and morphological reconstruction on retinal images. The preprocessing step enhances the image to make it better suited for the subsequent analysis and is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post-filtration and morphological reconstruction assist in filling the edges of the blood vessels and removing false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of the three color components (RGB), is tested on various images of a publicly available database. The results are compared with those obtained by other known methods as well as with the results obtained by using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.
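
    A minimal Python sketch of intensity-based quadtree decomposition, the core structural step of the proposed method. The homogeneity test, thresholds, and the square power-of-two image are simplifying assumptions; the filtering and morphological stages are omitted.

      import numpy as np

      def quadtree(img, y=0, x=0, size=None, thresh=10.0, min_size=4, blocks=None):
          """Recursively split a block while its intensity range exceeds
          `thresh`; return homogeneous blocks as (y, x, size, mean) tuples."""
          if blocks is None:
              blocks = []
              size = img.shape[0]          # assume a square, power-of-two image
          block = img[y:y + size, x:x + size]
          if size <= min_size or block.max() - block.min() <= thresh:
              blocks.append((y, x, size, float(block.mean())))
          else:
              h = size // 2
              for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
                  quadtree(img, y + dy, x + dx, h, thresh, min_size, blocks)
          return blocks

      img = np.zeros((64, 64)); img[20:40, 30:34] = 200.0   # a bright "vessel"
      print(len(quadtree(img)))   # flat regions stay coarse; edges split finely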

  19. Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence

    NASA Astrophysics Data System (ADS)

    Belli, Emily Ann

    Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up implicit arrays, yet retains time step advantages similar to a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extensions of core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale ExB shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of chi ~ kappa^(-1.5) or kappa^(-2) (depending on the triangularity) is observed, which is consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations explain a significant fraction, but not all, of the empirical elongation scaling. The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the

  20. A numerical study of non-collinear wave mixing and generated resonant components.

    PubMed

    Sun, Zhenghao; Li, Fucai; Li, Hongguang

    2016-09-01

    Interaction of two non-collinear nonlinear ultrasonic waves in an elastic half-space with quadratic nonlinearity is investigated in this paper. A hyperbolic system of conservation laws is applied and a semi-discrete central scheme is used to solve the numerical problem. The numerical results validate that the model can be used as an effective method to generate and evaluate a resonant wave when two primary waves mix under certain resonant conditions. Features of the resonant wave are analyzed in both the time and frequency domains, and variation trends of the resonant waves together with the second harmonics along the propagation path are analyzed. With the pulse-inversion technique applied, components of the resonant waves and second harmonics can be independently extracted and observed without distinguishing times of flight. The results show that under non-collinear wave mixing, both sum and difference resonant components can be clearly obtained, especially in the tangential direction of their propagation. For the rays of observation points around the interaction zone, the further a point is from the excitation sources, the earlier the amplitude maximum generally arises. From the parametric analysis of the phased array, it is found that both the length of the array and the density of elements affect the maximum amplitude of the resonant waves. The spatial distribution of the resonant waves provides necessary information for related experiments. PMID:27403643

  1. Conservative numerical simulation of multi-component transport in two-dimensional unsteady shallow water flow

    NASA Astrophysics Data System (ADS)

    Murillo, J.; García-Navarro, P.; Burguete, J.

    2009-08-01

    An explicit finite volume model to simulate two-dimensional shallow water flow with multi-component transport is presented. The governing system of coupled conservation laws demands numerical techniques that avoid unrealistic values of the transported scalars, which cannot be prevented merely by decreasing the size of the time step. The presence of non-conservative products such as bed slope and friction terms, and other source terms like diffusion and reaction, can make it necessary to reduce the time step given by the Courant number. A suitable flux difference redistribution that prevents instability and ensures conservation at all times is used to deal with the non-conservative terms and becomes necessary in cases of transient boundaries over a dry bed. The resulting method belongs to the category of well-balanced Roe schemes and is able to handle steady cases with flow in motion. Test cases with exact solutions, including transient boundaries, bed slope, friction, and reaction terms, are used to validate the numerical scheme. Laboratory experiments are used to validate the techniques when dealing with complex systems such as the κ-ɛ model. The results of the proposed numerical schemes are compared with those obtained using uncoupled formulations.

  2. Chaotic algorithms: A numerical exploration of the dynamics of a stiff photoconductor model

    SciTech Connect

    Markus, A.S. de

    1997-04-01

    The photoconducting property of semiconductors leads, in general, to very complex kinetics for the charge carriers due to the non-equilibrium processes involved. In a semiconductor with one type of trap, the dynamics of the photoconducting process are described by a set of coupled non-linear ordinary differential equations for n and p, the free electron and hole densities, and m, the trapped electron density at time t. So far, no closed-form solution is known for this set of non-linear differential equations, and therefore numerical integration techniques have to be employed, for example the standard Runge-Kutta (RK) procedure. Each of the mechanisms of generation, recombination, and trapping has its own lifetime, which means that different time constants are to be expected in the time-dependent behavior of the photocurrent. Thus, depending on the parameters of the model, the system may become stiff if the time scales of n, m, and p separate considerably. This situation may impose considerable stress on a fixed-step numerical algorithm such as RK, which may then produce unreliable results, and other methods have to be considered. Therefore, the purpose of this note is to examine, for a critical range of parameters, the results of the numerical integration of the stiff system obtained by standard numerical schemes, such as the single-step fourth-order Runge-Kutta method and the multistep Gear method, the latter being appropriate for a stiff system of equations. 7 refs., 2 figs.
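
    The stiffness point generalizes beyond photoconductors. A quick scipy comparison on a classic stiff test problem (fast relaxation onto a slowly varying solution; a generic stand-in, not the photoconductor model) shows how an explicit RK method labors where a Gear-type BDF method does not.

      import numpy as np
      from scipy.integrate import solve_ivp

      lam = 1e4
      def f(t, y):
          # y' = -lam*(y - cos t) - sin t; exact solution y = cos t for y(0) = 1,
          # but the fast eigenvalue -lam forces tiny explicit steps.
          return [-lam * (y[0] - np.cos(t)) - np.sin(t)]

      y0, t_span = [1.0], (0.0, 10.0)
      explicit = solve_ivp(f, t_span, y0, method='RK45')  # stability-limited steps
      implicit = solve_ivp(f, t_span, y0, method='BDF')   # Gear-type multistep
      print(explicit.nfev, implicit.nfev)  # RK45 needs vastly more evaluations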

  3. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is larger than the viscous relaxation time. The small time step required for stability (< 2 Kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate.

  4. International Symposium on Computational Electronics—Physical Modeling, Mathematical Theory, and Numerical Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yiming

    2007-12-01

    This symposium is an open forum for discussion of current trends and future directions in physical modeling, mathematical theory, and numerical algorithms in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers in academia and industry. All papers presented in this symposium have been carefully reviewed and selected. Topics include semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation systems, and wireless communication. Welcome to this interdisciplinary symposium at the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!

  5. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute the feedback gains directly rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
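
    A textbook Python sketch of the Newton-Kleinman iteration at the heart of such hybrid methods: each step solves a Lyapunov equation instead of the Riccati equation directly. The Chandrasekhar initialization and Smith-scheme acceleration of the paper are not reproduced, and the system matrices are toy values.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      def newton_kleinman(A, B, Q, R, K0, iters=20):
          """LQR gain by Newton-Kleinman; K0 must stabilize A - B @ K0."""
          K = K0
          for _ in range(iters):
              Acl = A - B @ K
              # Solve Acl' P + P Acl = -(Q + K' R K) for the closed-loop cost P.
              P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
              K = np.linalg.solve(R, B.T @ P)     # updated feedback gain
          return K, P

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # toy system, already stable,
      B = np.array([[0.0], [1.0]])                # so K0 = 0 is stabilizing
      Q, R = np.eye(2), np.eye(1)
      K, P = newton_kleinman(A, B, Q, R, K0=np.zeros((1, 2)))
      print(K)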

  6. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    NASA Astrophysics Data System (ADS)

    Dong, S.

    2015-02-01

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N ⩾ 2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N - 1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N - 1) strongly-coupled phase field equations for general order parameters into 2 (N - 1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir-de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  7. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  8. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  9. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  10. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  11. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  12. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    SciTech Connect

    Becker, R; Stolken, J; Jannetti, C; Bassani, J

    2003-10-16

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single-crystal simulations.
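
    The explicit-versus-implicit trade-off in miniature: backward Euler evaluates the rate at the unknown end-of-step state and solves for it with Newton's method, which is what buys stability at large steps. The scalar evolution law in this Python sketch is a generic stiff toy, not the SMA constitutive model.

      def rate(z, driving=1.0, k=500.0):
          """Toy stiff evolution law for an internal variable z (not the SMA model)."""
          return k * (driving - z)

      def backward_euler_step(z_old, dt, tol=1e-10, max_newton=20):
          """Solve r(z) = z - z_old - dt * rate(z) = 0 by Newton's method."""
          z = z_old
          for _ in range(max_newton):
              r = z - z_old - dt * rate(z)
              dr = 1.0 - dt * (-500.0)      # d rate/dz = -k (matches rate's default)
              z_new = z - r / dr
              if abs(z_new - z) < tol:
                  return z_new
              z = z_new
          return z

      # Explicit Euler would need dt < 2/k = 0.004 for stability; implicit does not.
      z, dt = 0.0, 0.05
      for _ in range(10):
          z = backward_euler_step(z, dt)
      print(z)    # relaxes smoothly toward the driving value 1.0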

  13. Deconvolution of complex spectra into components by the bee swarm algorithm

    NASA Astrophysics Data System (ADS)

    Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh

    2016-05-01

    The bee swarm algorithm is adapted to the problem of deconvolving complex spectral contours into components. A correspondence is drawn between biological concepts describing the behaviour of bees in a colony and mathematical concepts describing the quality of the obtained solutions (mean square error, random solutions in each iteration). Model experiments, carried out on a signal representing a sum of three Lorentz contours of various intensities and half-widths, confirm the efficiency of the proposed approach.
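
    A minimal sketch of the swarm idea, assuming a simple elite/scout scheme and a plain mean-square-error cost; the authors' exact bee swarm operators are not reproduced. The toy signal is a sum of three Lorentzian contours, as in the paper's model experiments.

```python
# Toy swarm-style random search for decomposing a spectrum into three
# Lorentzian components by minimizing mean-square error. A simplified
# stand-in, not the authors' exact bee swarm algorithm.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 400)

def lorentzian(x, a, x0, w):
    return a * w**2 / ((x - x0)**2 + w**2)

def model(x, p):
    # p holds (amplitude, centre, half-width) for each of three components
    return sum(lorentzian(x, *p[3*i:3*i+3]) for i in range(3))

true_p = np.array([1.0, -3.0, 0.8, 0.6, 0.5, 1.2, 0.9, 4.0, 0.5])
y = model(x, true_p) + 0.01 * rng.standard_normal(x.size)

def mse(p):
    return np.mean((model(x, p) - y)**2)

# Swarm of candidate solutions ("bees"): the best sites are refined by
# local random perturbation, the rest are re-seeded as random scouts.
lo = np.array([0.1, -8.0, 0.1] * 3)
hi = np.array([2.0, 8.0, 3.0] * 3)
swarm = rng.uniform(lo, hi, size=(50, 9))
for it in range(300):
    costs = np.array([mse(p) for p in swarm])
    elite = swarm[np.argsort(costs)[:10]]
    # local search around elite sites
    local = elite[rng.integers(0, 10, 30)] \
        + 0.05 * (hi - lo) * rng.standard_normal((30, 9))
    scouts = rng.uniform(lo, hi, size=(10, 9))   # random exploration
    swarm = np.clip(np.vstack([elite, local, scouts]), lo, hi)

best = swarm[np.argmin([mse(p) for p in swarm])]
print("recovered parameters:", np.round(best, 2))
```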

  14. Numerical simulation of alteration of sodium bentonite by diffusion of ionic groundwater components

    SciTech Connect

    Jacobsen, J.S.; Carnahan, C.L.

    1987-12-01

    Experiments measuring the movement of trace amounts of radionuclides through compacted bentonite have typically used unaltered bentonite. Models based on such experiments may not accurately predict the migration of ion-exchanging radionuclides through altered or partially altered bentonite. To address this problem, we have modified an existing transport code to include ion exchange and aqueous complexation reactions. The code is thus able to simulate the diffusion of major ionic groundwater components through bentonite and the reactions between the bentonite and the groundwater. Numerical simulations have been made to investigate the conversion of sodium bentonite to calcium bentonite for a reference groundwater characteristic of deep granitic formations. 20 refs., 2 figs., 2 tabs.

  15. [Removal Algorithm of Power Line Interference in Electrocardiogram Based on Morphological Component Analysis and Ensemble Empirical Mode Decomposition].

    PubMed

    Zhao, Wei; Xiao, Shixiao; Zhang, Baocan; Huang, Xiaojing; You, Rongyi

    2015-12-01

    Electrocardiogram (ECG) signals are susceptible to 50 Hz power line interference (PLI) during acquisition and conversion. This paper therefore proposes a novel PLI removal algorithm based on morphological component analysis (MCA) and ensemble empirical mode decomposition (EEMD). Firstly, according to the morphological differences in ECG waveform characteristics, the noisy ECG signal was decomposed by MCA into a mutated component, a smooth component and a residual component. Secondly, the intrinsic mode functions (IMFs) containing the PLI were filtered out. The noise suppression ratio (NSR) and the signal distortion ratio (SDR) were used to evaluate the de-noising performance. Finally, the ECG signals were reconstructed. Experimental comparison showed that the proposed algorithm outperforms the improved Levkov algorithm: it not only filters the PLI effectively but also yields a smaller SDR value. PMID:27079083
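
    The paper's MCA + EEMD pipeline is involved; as a rough stand-in for intuition, the sketch below suppresses a 50 Hz PLI component by least-squares sinusoid fitting and then evaluates assumed NSR and SDR figures of merit (the paper's exact definitions may differ).

```python
# Simplified 50 Hz PLI suppression by least-squares sinusoid fitting, with
# noise-suppression-ratio (NSR) and signal-distortion-ratio (SDR) figures of
# merit. For intuition only; the paper's method combines MCA with EEMD.
import numpy as np

fs, f0 = 500.0, 50.0                      # sampling rate, mains frequency (Hz)
t = np.arange(0, 2, 1/fs)
# crude ECG-like pulse train (not real ECG data)
ecg = np.sin(2*np.pi*1.2*t) * np.exp(-((t % 0.8) - 0.2)**2 / 0.002)
pli = 0.3 * np.sin(2*np.pi*f0*t + 0.7)
noisy = ecg + pli

# Least-squares fit of sin/cos at 50 Hz, then subtract the fitted interference.
A = np.column_stack([np.sin(2*np.pi*f0*t), np.cos(2*np.pi*f0*t)])
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
denoised = noisy - A @ coef

nsr = np.sum((noisy - denoised)**2) / np.sum(noisy**2)   # assumed definition
sdr = np.sum((ecg - denoised)**2) / np.sum(ecg**2)       # assumed definition
print(f"NSR = {nsr:.3f}, SDR = {sdr:.4f}")
```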

  16. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.

  17. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize the constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and the constraints on the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions of the multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can easily be implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm in the reaction transport simulator

  18. A new blind fault component separation algorithm for a single-channel mechanical signal mixture

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2012-10-01

    A vibration signal collected from a complex machine consists of multiple vibration components, which are system responses excited by several sources. This paper reports a new blind component separation (BCS) method for extracting different mechanical fault features. By applying the proposed method, a single-channel mixed signal can be decomposed into two parts: the periodic and transient subsets. The periodic subset is related to the imbalance, misalignment and eccentricity of a machine. The transient subset refers to abnormal impulsive phenomena, such as those caused by localized bearing faults. The proposed method includes two individual strategies to deal with these different characteristics. The first extracts the sub-Gaussian periodic signal by minimizing the kurtosis of the equalized signals. The second detects the super-Gaussian transient signal by minimizing the smoothness index of the equalized signals. Here, the equalized signals are derived by an eigenvector algorithm that is a successful solution to the blind equalization problem. To reduce the computing time needed to select the equalizer length, a simple optimization method is introduced to minimize the kurtosis and smoothness index, respectively. Finally, simulated multiple-fault signals and a real multiple-fault signal collected from an industrial machine are used to validate the proposed method. The results show that the proposed method is able to effectively decompose the multiple-fault vibration mixture into periodic components and random non-stationary transient components. In addition, the equalizer length can be intelligently determined using the proposed method.
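
    The two objective statistics named above are easy to state concretely. The sketch below computes the kurtosis (minimized to extract the sub-Gaussian periodic part) and a smoothness index (minimized to detect the super-Gaussian transient part), taking the smoothness index as the ratio of the geometric to the arithmetic mean of the Hilbert envelope, a common definition that may differ in detail from the authors'.

```python
# Kurtosis and smoothness index of two signal types: a sinusoid standing in
# for imbalance/misalignment responses and an impulse train standing in for
# localized bearing faults. Definitions are common ones, assumed here.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

t = np.arange(0, 1, 1/5000)
periodic = np.sin(2*np.pi*30*t)            # imbalance-like component
transient = np.zeros_like(t)
transient[::500] = 5.0                     # bearing-fault-like impulses

def smoothness_index(x):
    # geometric mean over arithmetic mean of the envelope; near 1 for a
    # smooth envelope, small for a spiky (impulsive) envelope
    env = np.abs(hilbert(x))
    return np.exp(np.mean(np.log(env + 1e-12))) / np.mean(env)

for name, sig in [("periodic", periodic), ("transient", transient)]:
    print(f"{name}: kurtosis={kurtosis(sig):.2f}, SI={smoothness_index(sig):.3f}")
```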

  19. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 × 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the needs of many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
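
    For reference, here is a plain software version of the classic two-pass CCL algorithm with a union-find equivalence table (8-connectivity) that the paper's streaming FPGA design builds on; the hardware-specific modifications (line-based streaming, low-memory merging) are not shown.

```python
# Reference two-pass connected component labeling with union-find
# (8-connectivity). First pass assigns provisional labels and records
# equivalences; second pass resolves them into final labels.
import numpy as np

def two_pass_label(img):
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]                              # union-find table; 0 = background

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    next_label = 1
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            # 8-connected, already-visited neighbours
            neigh = [labels[y-1, x-1] if y and x else 0,
                     labels[y-1, x] if y else 0,
                     labels[y-1, x+1] if y and x < w - 1 else 0,
                     labels[y, x-1] if x else 0]
            neigh = [n for n in neigh if n]
            if not neigh:
                parent.append(next_label)     # new provisional label
                labels[y, x] = next_label
                next_label += 1
            else:
                m = min(find(n) for n in neigh)
                labels[y, x] = m
                for n in neigh:               # record equivalences
                    parent[find(n)] = m
    for y in range(h):                        # second pass: resolve labels
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]])
print(two_pass_label(img))
```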

  20. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease the overall mass, which leads to the optimization of every constitutive part of the plane. This task is even more delicate when the material used is a composite. In that case, a compromise must be found between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, engineers need a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  1. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease the overall mass, which leads to the optimization of every constitutive part of the plane. This task is even more delicate when the material used is a composite. In that case, a compromise must be found between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, engineers need a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  2. Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.

    PubMed

    Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A

    1989-01-01

    Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be HA = 0.90 MPa, νs = 0.39 and k = 0.44 × 10⁻¹⁵ m⁴/(N·s), respectively, and those for patellar groove cartilage to be HA = 0.47 MPa, νs = 0.24 and k = 1.42 × 10⁻¹⁵ m⁴/(N·s). One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much smaller than those determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage by indentation experiments.

  3. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  4. CCARES: A computer algorithm for the reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1993-01-01

    Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with a silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test-bed software programs are needed that incorporate stochastic design concepts, are user friendly and computationally efficient, and have flexible architectures that readily accommodate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.

  5. Nonlinear fitness space structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    NASA Astrophysics Data System (ADS)

    Tiilikainen, J.; Tilli, J.-M.; Bosund, V.; Mattila, M.; Hakkarainen, T.; Airaksinen, V.-M.; Lipsanen, H.

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.
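
    A minimal sketch of the covariance-based coordinate rotation idea, assuming a simple elite-selection GA: the mutation step is performed along the principal axes of the elite population's covariance matrix, so tightly coupled parameters are perturbed jointly. The operators are generic illustrations, not the authors' exact implementation.

```python
# GA mutation in a PCA-rotated frame: estimate the covariance of the elite
# population, diagonalize it, and draw mutation steps scaled by the
# eigenvalues along the principal axes. Toy correlated objective.
import numpy as np

rng = np.random.default_rng(2)

def fitness(p):
    # narrow valley along x + y = 2: strong interparameter dependence
    x, y = p[0], p[1]
    return -((x + y - 2.0)**2 / 0.01 + (x - y)**2)

pop = rng.uniform(-3, 3, size=(60, 2))
for gen in range(100):
    order = np.argsort([-fitness(p) for p in pop])
    elite = pop[order[:15]]
    # principal axes of the elite covariance matrix
    eigval, eigvec = np.linalg.eigh(np.cov(elite.T))
    # larger steps along high-variance axes, rotated back to parameter space
    steps = rng.standard_normal((45, 2)) * np.sqrt(np.maximum(eigval, 1e-12))
    children = elite[rng.integers(0, 15, 45)] + steps @ eigvec.T
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best point:", np.round(best, 3))   # should approach x + y = 2, x = y
```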

  6. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  7. Numerical simulation of steady and unsteady viscous flow in turbomachinery using pressure based algorithm

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Ho, Y.; Basson, A.

    1993-01-01

    The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects, in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The amount of artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner-loop iteration scheme is used at each time step to account for nonlinear effects. The computation of unsteady flow through a flat plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation is critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, with the upstream rotor wake specified from experimental data. The results show that the stator potential effects have an appreciable influence on the upstream rotor wake

  8. Essential Oil of Artemisia annua L.: An Extraordinary Component with Numerous Antimicrobial Properties

    PubMed Central

    Bilia, Anna Rita; Sacco, Cristiana; Bergonzi, Maria Camilla; Donato, Rosa

    2014-01-01

    Artemisia annua L. (Asteraceae) is native to China, now naturalised in many other countries, well known as the source of the unique sesquiterpene endoperoxide lactone artemisinin, and used in the treatment of chloroquine-resistant and cerebral malaria. The essential oil is rich in mono- and sesquiterpenes and represents a by-product with medicinal properties. Although significant variations in its yield and composition have been reported (major constituents can be camphor (up to 48%), germacrene D (up to 18.9%), artemisia ketone (up to 68%), and 1,8-cineole (up to 51.5%)), the oil has been the subject of numerous studies supporting notable antibacterial and antifungal activities. Gram-positive bacteria (Enterococcus, Streptococcus, Staphylococcus, Bacillus, and Listeria spp.), gram-negative bacteria (Escherichia, Shigella, Salmonella, Haemophilus, Klebsiella, and Pseudomonas spp.) and other microorganisms (Candida, Saccharomyces, and Aspergillus spp.) have been investigated. However, the experimental studies performed to date have used different methods and diverse microorganisms; as a consequence, a comparative analysis on a quantitative basis is very difficult. The aim of this review is to sum up the data on the antimicrobial activity of A. annua essential oil and its major components in order to facilitate future microbiological studies in this field. PMID:24799936

  9. A flexible numerical component to simulate surface runoff transport and biogeochemical processes through dense vegetation

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Perez-Ovilla, O.

    2012-12-01

    Methods to estimate surface runoff pollutant removal by dense vegetation buffers (i.e. vegetative filter strips) usually consider a limited number of factors (e.g. filter length, slope) and are in general based on empirical relationships. When an empirical approach is used, the application of the model is limited to the conditions covered by the data used to derive the regression equations. The objective of this work is to provide a flexible, mechanistic numerical tool to simulate the dynamics of a wide range of surface runoff pollutants through dense vegetation and their physical, chemical and biological interactions, based on equations defined by the user as part of the model inputs. A flexible water quality model based on the Reaction Simulation Engine (RSE) modeling component is coupled to a transport module based on the traditional Bubnov-Galerkin finite element method to solve the advection-dispersion-reaction equation using the alternating split-operator technique. This coupled transport-reaction model is linked to the VFSMOD-W (http://abe.ufl.edu/carpena/vfsmod) program to mechanistically simulate mobile and stable pollutants through dense vegetation based on user-defined conceptual models (differential equations written in XML as input files). The key factors to consider in the creation of a conceptual model are the components in the buffer (i.e. vegetation, soil, sediments) and how the pollutant interacts with them. The biogeochemical reaction component was tested successfully with laboratory and field scale experiments. A major advantage of this tool is that pollutant transport and removal through dense vegetation are related to the physical and biogeochemical processes occurring within the filter. This mechanistic approach extends the applicability of the model to a wide range of pollutants and conditions without modification of the core model. The strength of the model relies on the mechanistic approach used for simulating the removal of
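
    The alternating split-operator technique mentioned above advances the transport and reaction operators in separate sub-steps. The sketch below shows the splitting on a 1D advection-dispersion-reaction toy with finite differences and first-order decay; the actual model uses a Bubnov-Galerkin finite element transport step and user-defined reaction networks.

```python
# Split-operator integration of a 1D advection-dispersion-reaction toy:
# each time step does a transport sub-step (upwind advection + explicit
# dispersion) followed by a reaction sub-step (first-order decay).
import numpy as np

nx, L = 200, 10.0
dx = L / nx
v, D, k = 0.5, 0.01, 0.3                  # velocity, dispersion, decay rate
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # stable step for both operators

c = np.zeros(nx)
c[:10] = 1.0                              # initial pollutant slug at the inlet

for step in range(200):
    # --- transport sub-step: upwind advection + central dispersion ---
    adv = -v * (c - np.roll(c, 1)) / dx
    disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + disp)
    c[0] = 0.0                            # clean inflow boundary
    # --- reaction sub-step: exact solution of dc/dt = -k c over dt ---
    c = c * np.exp(-k * dt)

print("pollutant mass remaining:", c.sum() * dx)
```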

  10. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-10-01

    In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that a zeroth-lag cross-correlation objective function yields faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema in the inversion. We also compare the results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.
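
    A concrete reading of the zeroth-lag cross-correlation objective reported to work best, under an assumed standard normalization (the paper's exact form may differ): the correlation between observed and synthetic waveforms is evaluated at zero lag and averaged over the three components.

```python
# Zeroth-lag normalized cross-correlation between observed and synthetic
# three-component waveforms, averaged over components; 1.0 = perfect match.
import numpy as np

def zero_lag_cc(obs, syn):
    # obs, syn: arrays of shape (3, nt) -- e.g. Z, R, T components
    num = np.sum(obs * syn, axis=1)
    den = np.sqrt(np.sum(obs**2, axis=1) * np.sum(syn**2, axis=1))
    return np.mean(num / den)

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 1500)
obs = np.vstack([np.sin(0.5*t),
                 0.7*np.sin(0.5*t + 0.3),
                 0.2*np.sin(0.5*t + 1.0)])
syn = obs + 0.1 * rng.standard_normal(obs.shape)   # noisy trial synthetics
print("objective:", zero_lag_cc(obs, syn))
```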

  11. Numerical tests for effects of various parameters in niching genetic algorithm applied to regional waveform inversion

    NASA Astrophysics Data System (ADS)

    Li, Cong; Lei, Jianshe

    2014-09-01

    In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that a zeroth-lag cross-correlation objective function yields faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and the computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema in the inversion. We also compare the results inverted from full-band waveform data and from surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for the crustal structure.

  12. SOLA-DM: A numerical solution algorithm for transient three-dimensional flows

    SciTech Connect

    Wilson, T.L.; Nichols, B.D.; Hirt, C.W.; Stein, L.R.

    1988-02-01

    SOLA-DM is a three-dimensional time-explicit, finite-difference, Eulerian, fluid-dynamics computer code for solving the time-dependent incompressible Navier-Stokes equations. The solution algorithm (SOLA) evolved from the marker-and-cell (MAC) method, and the code is highly vectorized for efficient performance on a Cray computer. The computational domain is discretized by a mesh of parallelepiped cells in either Cartesian or cylindrical geometry. The primary hydrodynamic variables for approximating the solution of the momentum equations are cell-face-centered velocity components and cell-centered pressures. Spatial accuracy is selected by the user to be first or second order; the time differencing is first-order accurate. The incompressibility condition results in an elliptic equation for pressure that is solved by a conjugate gradient method. Boundary conditions of five general types may be chosen: free-slip, no-slip, continuative, periodic, and specified pressure. In addition, internal mesh specifications to model obstacles and walls are provided. SOLA-DM also solves the equations for discrete particle dynamics, permitting the transport of marker particles or other solid particles through the fluid to be modeled. 7 refs., 7 figs.
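
    The incompressibility condition yields the elliptic pressure equation mentioned above. As a small illustration, the sketch below assembles a 2D five-point Poisson operator with SciPy and solves it with the conjugate gradient method; SOLA-DM works on a staggered MAC grid with its own solver internals, so this is only an analogue.

```python
# Conjugate gradient solution of a 2D Poisson problem, standing in for the
# cell-centered pressure solve in an incompressible flow step.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 50                                    # n x n pressure cells
h = 1.0 / n
# standard 5-point Laplacian with Dirichlet boundaries
I = sp.identity(n)
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)) / h**2

rng = np.random.default_rng(4)
div_u = rng.standard_normal(n * n)        # stand-in for the velocity divergence
p, info = cg(-A.tocsr(), div_u)           # -Laplacian is symmetric positive definite
print(f"CG converged: {info == 0}, max |p| = {np.abs(p).max():.3f}")
```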

  13. A novel algorithm and its VLSI architecture for connected component labeling

    NASA Astrophysics Data System (ADS)

    Zhao, Hualong; Sang, Hongshi; Zhang, Tianxu

    2011-11-01

    A novel line-based streaming labeling algorithm and its VLSI architecture are proposed in this paper. A line-based neighborhood examination scheme is used for efficient extraction of local connected components. A novel reversed rooted-tree hook-up strategy, well suited to hardware implementation, is applied in the merging stage of equivalent connected components. The reversed rooted-tree hook-up strategy significantly reduces the on-chip memory requirement, which makes the chip area smaller. Clock-domain-crossing FIFOs connect the labeling core to the external memory interface, allowing the labeling engine to run at a higher frequency and raising its throughput. Several performance tests were carried out on the proposed hardware implementation. The processing bandwidth of the architecture reaches the I/O transfer bound set by the external interface clock in all the real-image tests. Besides reducing the processing time, the hardware implementation supports images as large as 4096 × 4096, which is very appealing for remote sensing and other high-resolution imaging applications. The proposed architecture was synthesized with the SMIC 180 nm standard cell library; the labeling engine reaches a working frequency of 200 MHz.

  14. Study of Groundwater Resources Components in the North China Plain based on Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Shao, J.

    2015-12-01

    Over-exploitation of groundwater and the induced environmental problems in the North China Plain (NCP) have drawn increasing concern. Here, we chose three typical hydrogeological units in the NCP: the Hutuo River alluvial fan (HR), the Tianjin Plain in the central alluvial fan (TJ), and the Yellow River aquifer system (YR). Relying on numerical groundwater models built with MODFLOW, the water balances were calculated and analyzed, in particular to quantify the individual recharge and discharge terms. Specifically: (1) In the HR, both a natural steady-state flow model and a transient flow model under human activities were implemented. Results indicated that the groundwater level decreased by around 40 m under extensive exploitation, and the total recharge rate, discharge rate, and over-exploitation rate were calculated. (2) In the TJ, a coupled groundwater and land subsidence model was established, from which the maximum subsidence rate and the decrease of the groundwater level were estimated. (3) In the YR, the groundwater exploitation rate and the recharge rate of the aquifer by the Yellow River were calculated. We found large differences among the components of groundwater recharge of the three typical hydrogeological units. Human activities have a clear effect on the recharge and discharge processes, so rational development and protection policies should be issued. In the piedmont alluvial fan, the groundwater has been severely over-exploited; reduction of groundwater exploitation and artificial recharge are therefore needed to balance recharge and discharge. In the middle alluvial fan of the NCP, the confined aquifer has been over-exploited, resulting in regional land subsidence. This suggests that withdrawal from the confined aquifer should be strictly limited, especially where alternative water resources are accessible. In the hydrogeological unit of the YR, the groundwater storage potentially allows large exploitation.

  15. Numerical Roll Reversal Predictor Corrector Aerocapture and Precision Landing Guidance Algorithms for the Mars Surveyor Program 2001 Missions

    NASA Technical Reports Server (NTRS)

    Powell, Richard W.

    1998-01-01

    This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.

  16. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm.

    PubMed

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-10-01

    Microarray data play an important role in the identification and classification of cancer tissues. The scarcity of microarray samples in cancer research is a persistent concern that complicates classifier design. For this reason, gene selection techniques should be applied as preprocessing, before classification, to remove non-informative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to reduce the dimension of microarray data. This selective algorithm overcomes the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error is analyzed and the selective set of independent components (those contributing only a small part of the reconstruction error) is chosen for reconstructing new samples. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously. Eventually, the best sub-classifier, with the highest recognition rate, is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer and lung cancer), and its results are compared with other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) has higher accuracy and validity, increasing the classification accuracy. In particular, our proposed algorithm exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset.
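
    A rough scikit-learn sketch of the two-stage pipeline (ICA for dimension reduction, then a ν-SVM classifier) on synthetic stand-in data. The paper's SICA step, which ranks components by reconstruction error, and its ensemble of sub-classifiers are not reproduced here.

```python
# ICA dimension reduction followed by a nu-SVM classifier, evaluated by
# cross-validation on a synthetic stand-in for expression data.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

rng = np.random.default_rng(5)
# 60 samples x 500 genes, two classes, a small block of informative genes
X = rng.standard_normal((60, 500))
y = np.repeat([0, 1], 30)
X[y == 1, :20] += 1.0

clf = make_pipeline(StandardScaler(),
                    FastICA(n_components=10, random_state=0, max_iter=1000),
                    NuSVC(nu=0.3, kernel="linear"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```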

  17. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm

    PubMed Central

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-01-01

    Microarray data play an important role in the identification and classification of cancer tissues. The scarcity of microarray samples in cancer research is a persistent concern that complicates classifier design. For this reason, gene selection techniques should be applied as preprocessing, before classification, to remove non-informative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to reduce the dimension of microarray data. This selective algorithm overcomes the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error is analyzed and the selective set of independent components (those contributing only a small part of the reconstruction error) is chosen for reconstructing new samples. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously. Eventually, the best sub-classifier, with the highest recognition rate, is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer and lung cancer), and its results are compared with other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) has higher accuracy and validity, increasing the classification accuracy. In particular, our proposed algorithm exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset. PMID:25426433

  18. A numerical algorithm for stress integration of a fiber-fiber kinetics model with Coulomb friction for connective tissue

    NASA Astrophysics Data System (ADS)

    Kojic, M.; Mijailovic, S.; Zdravkovic, N.

    Complex behaviour of connective tissue can be modeled by the fiber-fiber kinetics material model introduced in Mijailovic (1991) and Mijailovic et al. (1993). The model is based on the hypothesis of sliding of elastic fibers with Coulomb and viscous friction. The main characteristics of the model were verified experimentally in Mijailovic (1991), and a numerical procedure for one-dimensional tension was developed that treats sliding as a contact problem between bodies. In this paper we propose a new and general numerical procedure for calculating the stress-strain law of the fiber-fiber kinetics model in the case of Coulomb friction. Instead of using a contact algorithm (Mijailovic 1991), which is numerically inefficient and not sufficiently reliable, the history of sliding along the sliding length is traced numerically through a number of segments along the fiber. The algorithm is simple, efficient and reliable, and provides solutions for arbitrary cyclic loading, including tension, shear, and combined tension and shear, giving hysteresis loops typical of soft tissue response. The model is implemented within the finite element method, making it applicable to general, real problems. Solved examples illustrate the main characteristics of the model and of the developed numerical method, as well as its applicability to practical problems. The accuracy of some results, for the simple case of uniaxial loading, is verified by comparison with analytical solutions.
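
    A minimal 1D analogue of stress integration with Coulomb friction: a Jenkins element (an elastic spring in series with a frictional slider) driven by cyclic strain reproduces the stick-slip hysteresis loops the fiber-fiber model exhibits. The element and all constants are illustrative assumptions, far simpler than the authors' segment-wise tracking of sliding history.

```python
# Incremental stress integration of a Jenkins element under cyclic strain:
# elastic "stick" response until the trial stress exceeds the Coulomb
# strength, then sliding at constant friction stress.
import numpy as np

E, tau_c = 10.0, 0.5                      # spring stiffness, slider strength
eps = 0.3 * np.sin(np.linspace(0, 4*np.pi, 400))   # two loading cycles

sigma, slip = 0.0, 0.0                    # stress, accumulated slider slip
history = []
for e in eps:
    trial = E * (e - slip)                # elastic trial stress
    if abs(trial) > tau_c:                # slider yields: Coulomb sliding
        sign = np.sign(trial)
        slip += (abs(trial) - tau_c) / E * sign
        sigma = sign * tau_c
    else:
        sigma = trial                     # stick: purely elastic response
    history.append((e, sigma))

loop = np.array(history)
e_vals, s_vals = loop[:, 0], loop[:, 1]
# enclosed loop area = frictional dissipation over the two cycles
area = np.sum(0.5 * (s_vals[1:] + s_vals[:-1]) * np.diff(e_vals))
print("hysteresis loop area (dissipation):", round(float(area), 3))
```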

  19. A Bayesian Approach to Estimating Coupling Between Neural Components: Evaluation of the Multiple Component, Event-Related Potential (mcERP) Algorithm

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation in order to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlations) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed, as well as ongoing work to incorporate more detailed prior information.

  20. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational cost of the code are investigated.
    Catalogue identifier: AFAU_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAU_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 94964
    No. of bytes in distributed program, including test data, etc.: 6242986
    Distribution format: tar.gz
    Programming language: Fortran 90, MPI (requires an Intel compiler)
    Computer: Workstations

  1. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.
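
    The iterative reconstruction at the heart of ML-ESCA is in the maximum-likelihood EM family. The sketch below shows the classic MLEM multiplicative update for a Poisson data model with a random stand-in system matrix; the real algorithm models angle-dependent coherent-scatter projections, the polychromatic spectrum and the detector response, none of which are included here.

```python
# Classic MLEM update x <- x * A^T(y / Ax) / A^T 1 for a Poisson data model,
# with a random stand-in system matrix instead of a scatter-projection model.
import numpy as np

rng = np.random.default_rng(6)
n_meas, n_unknown = 300, 40
A = rng.uniform(0, 1, size=(n_meas, n_unknown))   # stand-in system matrix
x_true = rng.uniform(0.5, 2.0, n_unknown)         # component image, flattened
y = rng.poisson(A @ x_true)                       # Poisson-noisy measurements

x = np.ones(n_unknown)                            # MLEM needs a positive start
sens = A.T @ np.ones(n_meas)                      # sensitivity term A^T 1
for it in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)          # measured / predicted counts
    x *= (A.T @ ratio) / sens                     # multiplicative update
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```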

  2. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented. PMID:27025665

  3. Novel materials, fabrication techniques and algorithms for microwave and THz components, systems and applications

    NASA Astrophysics Data System (ADS)

    Liang, Min

    This dissertation presents the investigation of several additively manufactured components at RF and THz frequencies, as well as the application of a gradient-index-lens-based direction-of-arrival (DOA) estimation system and a broadband electronic beam-scanning system. A polymer matrix composite method to achieve artificially controlled effective dielectric properties of 3D printing materials is also studied. Moreover, the characterization of carbon-based nanomaterials at microwave and THz frequencies, a photoconductive-antenna-array-based terahertz time-domain spectroscopy (THz-TDS) near-field imaging system, and a compressive-sensing-based microwave imaging system are discussed in this dissertation. First, the design, fabrication and characterization of several 3D printed components at microwave and THz frequencies are presented. These components include a 3D printed broadband Luneburg lens, a 3D printed patch antenna, a 3D printed multilayer microstrip line structure with vertical transitions, a THz all-dielectric EMXT waveguide to planar microstrip transition structure, and 3D printed dielectric reflectarrays. Second, the additively manufactured 3D Luneburg lens is employed for DOA estimation. Using the special property of a Luneburg lens that every point on its surface is the focal point of a plane wave incident from the opposite side, 36 detectors are mounted around the surface of the lens to estimate the direction of arrival of a microwave signal. The direction finding results using a correlation algorithm show that the averaged error is smaller than 1° for all 360 degrees of incident angles. Third, a novel broadband electronic scanning system based on a Luneburg lens phased array structure is reported. The radiation elements of the phased array are mounted around the surface of a Luneburg lens. By controlling the phase and amplitude of only a few adjacent elements, electronic beam scanning with various radiation patterns can be easily achieved

  4. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA) -based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported [1,2,3] unmixing algorithm. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  5. GA-fisher: A new LDA-based face recognition algorithm with selection of principal components.

    PubMed

    Zheng, Wei-Shi; Lai, Jian-Huang; Yuen, Pong C

    2005-10-01

    This paper addresses the dimension reduction problem in Fisherface for face recognition. When the number of training samples is less than the image dimension (total number of pixels), the within-class scatter matrix (Sw) in Linear Discriminant Analysis (LDA) is singular, and Principal Component Analysis (PCA) is suggested for use in Fisherface to reduce the dimension of Sw so that it becomes nonsingular. The popular method is to select the largest nonzero eigenvalues and the corresponding eigenvectors for LDA. To attenuate the illumination effect, some researchers suggested removing the three eigenvectors with the largest eigenvalues, and the performance is improved. However, as far as we know, there is no systematic way to determine which eigenvalues should be used. Along this line, this paper proposes a theorem to interpret why PCA can be used in LDA, and an automatic and systematic method to select the eigenvectors to be used in LDA using a Genetic Algorithm (GA). A GA-PCA is then developed. It is found that some eigenvectors with small eigenvalues should also be used as part of the basis for dimension reduction. Using GA-PCA to reduce the dimension, a GA-Fisher method is designed and developed. Compared with the traditional Fisherface method, the proposed GA-Fisher offers two additional advantages. First, optimal bases for dimensionality reduction are derived from GA-PCA. Second, the computational efficiency of LDA is improved by adding a whitening procedure after dimension reduction. The Face Recognition Technology (FERET) and Carnegie Mellon University Pose, Illumination, and Expression (CMU PIE) databases are used for evaluation. Experimental results show that an improvement of almost 5% over Fisherface can be obtained, and the results are encouraging.

  6. Computationally Efficient Algorithms for Parameter Estimation and Uncertainty Propagation in Numerical Models of Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Townley, Lloyd R.; Wilson, John L.

    1985-12-01

    Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
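
    The first-order second-moment step can be stated in a few lines: the covariance of the predicted heads is approximated by propagating the parameter covariance through the model's sensitivity (Jacobian) matrix, Cov(h) ≈ J C_p J^T. The numbers below are toy values; the paper's contribution is computing these sensitivities efficiently with adjoints and sparse matrix structure.

```python
# First-order second-moment propagation of parameter uncertainty to
# predicted heads: Cov(h) ~= J C_p J^T with toy sensitivities.
import numpy as np

rng = np.random.default_rng(7)
n_heads, n_params = 6, 3
J = rng.standard_normal((n_heads, n_params))   # head sensitivities dh/dp
C_p = np.diag([0.2, 0.05, 0.1])                # parameter covariance matrix

C_h = J @ C_p @ J.T                            # first-order head covariance
print("head standard deviations:", np.sqrt(np.diag(C_h)).round(3))
```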

  7. Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.

    2003-01-01

    A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent supersonic MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.

  8. Essentially entangled component of multipartite mixed quantum states, its properties, and an efficient algorithm for its extraction

    NASA Astrophysics Data System (ADS)

    Akulin, V. M.; Kabatiansky, G. A.; Mandilara, A.

    2015-10-01

    Using geometric methods, we first consider a decomposition of the density matrix of a multipartite quantum system of finite dimension into two density matrices: a separable one, also known as the best separable approximation, and an essentially entangled one, which contains no product-state components. We show that this convex decomposition can be achieved in practice with the help of a linear programming algorithm that in the general case scales polynomially with the system dimension. We illustrate the algorithm implementation with an example of a composite system of dimension 12 that undergoes a loss of coherence due to classical noise, and we trace the time evolution of its essentially entangled component. We suggest a "geometric" description of entanglement dynamics and demonstrate how it explains the well-known phenomena of sudden death and revival of multipartite entanglement. Although the statistical weight of the essentially entangled component decreases with time, its average entanglement content is not affected by the coherence loss.

  9. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    NASA Astrophysics Data System (ADS)

    Bor, E.; Turduev, M.; Kurt, H.

    2016-08-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
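
    The search loop described above can be sketched with SciPy's differential evolution driver. In the sketch below the FDTD evaluation is replaced by a placeholder cost, and the rod count, bounds, and penalty weight are arbitrary assumptions, so this shows only the structure of the optimization, not the paper's actual objective.

```python
# Schematic of the optimization loop only (the real cost requires an FDTD
# solve, replaced here by a placeholder): differential evolution searches
# the (x, y) positions of N dielectric rods to maximize focal intensity
# while penalizing side lobes.
import numpy as np
from scipy.optimize import differential_evolution

N = 10                                        # number of rods (assumed)

def cost(positions):
    xy = positions.reshape(N, 2)
    # Placeholder for: run FDTD, measure focal intensity and side lobes.
    focal_intensity = -np.sum(np.exp(-np.sum((xy - 2.0) ** 2, axis=1)))
    side_lobe_penalty = 0.1 * np.var(xy)
    return focal_intensity + side_lobe_penalty  # minimized by DE

bounds = [(0.0, 4.0)] * (2 * N)               # rods confined to a 4x4 region
result = differential_evolution(cost, bounds, maxiter=50, seed=1)
print(result.x.reshape(N, 2))
```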

  10. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light.

    PubMed

    Bor, E; Turduev, M; Kurt, H

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.

  11. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light

    PubMed Central

    Bor, E.; Turduev, M.; Kurt, H.

    2016-01-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction. PMID:27477060

  12. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees.

    PubMed

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-09-18

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods.
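
    For orientation, the sketch below shows the classical two-pass, union-find labeling that block-based algorithms accelerate (a baseline, not the proposed method; 4-connectivity is used for brevity, whereas block-based scan masks typically target 8-connectivity).

```python
# Classical two-pass connected-component labeling with union-find:
# the baseline that block-based, decision-tree methods speed up.
import numpy as np

def label(img):
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]                                   # union-find forest

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    h, w = img.shape
    for y in range(h):                             # pass 1: provisional labels
        for x in range(w):
            if not img[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up or left:
                candidates = [l for l in (up, left) if l]
                m = min(candidates)
                labels[y, x] = m
                for l in candidates:               # record equivalences
                    parent[find(l)] = find(m)
            else:
                labels[y, x] = next_label
                parent.append(next_label)
                next_label += 1
    for y in range(h):                             # pass 2: resolve to roots
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(label(img))
```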

  13. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597

  14. Novel materials, fabrication techniques and algorithms for microwave and THz components, systems and applications

    NASA Astrophysics Data System (ADS)

    Liang, Min

    This dissertation presents the investigation of several additively manufactured components at RF and THz frequencies, as well as the applications of a gradient-index-lens-based direction of arrival (DOA) estimation system and a broadband electronic beam-scanning system. Also, a polymer matrix composite method to achieve artificially controlled effective dielectric properties for 3D-printing materials is studied. Moreover, the characterization of carbon-based nano-materials at microwave and THz frequencies, a photoconductive antenna array based terahertz time-domain spectroscopy (THz-TDS) near-field imaging system, and a compressive-sensing-based microwave imaging system are discussed in this dissertation. First, the design, fabrication and characterization of several 3D-printed components at microwave and THz frequencies are presented. These components include a 3D-printed broadband Luneburg lens, a 3D-printed patch antenna, a 3D-printed multilayer microstrip line structure with vertical transitions, a THz all-dielectric EMXT waveguide-to-planar-microstrip transition structure, and 3D-printed dielectric reflectarrays. Second, the additively manufactured 3D Luneburg lens is employed for the DOA estimation application. Using the special property of a Luneburg lens that every point on its surface is the focal point of a plane wave incident from the opposite side, 36 detectors are mounted around the surface of the lens to estimate the direction of arrival of a microwave signal. The direction-finding results using a correlation algorithm show that the averaged error is smaller than 1° for all 360 degrees of incident angles. Third, a novel broadband electronic scanning system based on a Luneburg lens phased-array structure is reported. The radiation elements of the phased array are mounted around the surface of a Luneburg lens. By controlling the phase and amplitude of only a few adjacent elements, electronic beam scanning with various radiation patterns can be easily achieved.
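
    The correlation step of the direction finding lends itself to a compact sketch. Below, synthetic detector templates stand in for the dissertation's measured calibration data; the Gaussian focal-spot model and the 10-degree template spacing are assumptions.

```python
# Toy version of the correlation-based DOA step: correlate the 36-detector
# intensity vector against a template library and report the best match.
import numpy as np

detectors = np.arange(0, 360, 10)              # 36 detectors on the lens rim
angles = np.arange(0, 360, 10)                 # template library angles

def template(theta):
    # Focal spot peaks at the detector opposite the incidence angle theta.
    d = np.angle(np.exp(1j * np.deg2rad(detectors - (theta + 180.0))))
    return np.exp(-(np.rad2deg(np.abs(d)) / 25.0) ** 2)

library = np.array([template(a) for a in angles])

rng = np.random.default_rng(0)
truth = 137.0                                  # unknown incidence angle
measured = template(truth) + rng.normal(0, 0.05, detectors.size)

corr = library @ measured / (np.linalg.norm(library, axis=1)
                             * np.linalg.norm(measured))
print("estimated DOA:", angles[np.argmax(corr)], "deg")
```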

  15. Artificial algae algorithm with multi-light source for numerical optimization and applications.

    PubMed

    Uymaz, Sait Ali; Tezel, Gulay; Yel, Esra

    2015-12-01

    The artificial algae algorithm (AAA), one of the recently developed bio-inspired optimization algorithms, was introduced by inspiration from the living behaviors of microalgae. In AAA, the modification of the algal colonies, i.e. exploration and exploitation, is provided by a helical movement. In this study, AAA was modified by implementing multi-light-source movement, establishing the artificial algae algorithm with multi-light source (AAAML). In this new version, we propose selecting a different light source for each dimension modified by the helical movement, for a stronger balance between exploration and exploitation. These light sources are selected by the tournament method and are all different from each other, which yields different solutions in the search space. The best of the three light sources provides orientation toward the better region of the search space; the worst light source supplies diversity in the search space; and the remaining light source improves the balance between the two. To assess the performance of AAA with the newly proposed operators (AAAML), experiments were performed on two different sets. First, the performance of AAA and AAAML was evaluated on the IEEE-CEC'13 benchmark set. The second set comprised the real-world optimization problems used in the IEEE-CEC'11. To verify the effectiveness and efficiency of the proposed algorithm, the results were compared with those of other state-of-the-art hybrid and modified algorithms. Experimental results showed that the multi-light-source movement (MLS) increases the success of the AAA.

  16. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  17. Numerical Studies of the Robustness of the SRPF and DRPF Algorithms for the Control of Chaos when System Parameters Drift

    NASA Astrophysics Data System (ADS)

    Schroder, Kjell; Olsen, Thomas; Wiener, Richard

    2006-11-01

    Recursive Proportional Feedback (RPF) is an algorithm of great utility and ease of use for the control of chaotic systems. Control coefficients are determined from pre-control sampling of the system dynamics. We have adapted this method, in the spirit of the Extended Time-Delay Autosynchronization (ETDAS) method, to seek minimal change from each previous value. The two methods so derived, Simple Recursive Proportional Feedback (SRPF) and Doubly Recursive Proportional Feedback (DRPF), have been studied in numerical simulations to determine their robustness when system parameters other than that used for feedback drift over time. We present evidence of the range over which each algorithm displays robustness against drift. Rollins et al., Phys. Rev. E 47, R780 (1993); Socolar et al., Phys. Rev. E 50, 3245 (1994).

  1. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  2. Numerical model of a graphene component for the sensing of weak electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Nasswettrova, A.; Fiala, P.; Nešpor, D.; Drexler, P.; Steinbauer, M.

    2015-05-01

    The paper discusses a numerical model and provides an analysis of a graphene coaxial line suitable for sub-micron sensors of magnetic fields. In relation to the presented concept, the target areas and disciplines include biology, medicine, prosthetics, and microscopic solutions for modern actuators or SMART elements. The proposed numerical model is based on an analysis of a periodic structure with high repeatability, and it exploits a graphene polymer having a basic dimension in nanometers. The model simulates the actual random motion in the structure as the source of spurious signals and considers the pulse propagation along the structure; furthermore, it also examines whether and how the pulse is distorted at the beginning of the line for various line terminations. The results of the analysis are necessary for further use of the designed sensing devices based on graphene structures.

  3. Numerical Simulation of Sintering Process in Ceramic Powder Injection Moulded Components

    SciTech Connect

    Song, J.; Barriere, T.; Gelin, J. C.

    2007-05-17

    A phenomenological model based on a viscoplastic constitutive law is presented to describe the sintering process of ceramic components obtained by powder injection moulding. The parameters entering the model are identified through sintering experiments in a dilatometer with the proposed optimization method. The finite element simulations are carried out to predict the density variations and dimensional changes of the components during sintering. A simulation example of the sintering of an alumina hip implant has been conducted. The simulation results have been compared with the experimental ones, and a good agreement is obtained.

  4. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for achieving coarse frequency estimation (locating the peak of the FFT amplitude). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
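
    A generic coarse-to-fine estimator in the spirit of the abstract (not the authors' exact algorithm) can be sketched as follows: a zero-crossing count supplies the cheap coarse estimate, and parabolic interpolation of the FFT amplitude around the nearby peak bin refines it. The test signal, sampling settings, and threshold-free crossing count are assumptions.

```python
# Coarse-to-fine frequency estimation sketch: zero crossings for the coarse
# guess, local FFT-peak parabolic interpolation for the refinement.
import numpy as np

fs, T = 1000.0, 2.0                          # sample rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
f_true = 123.37
x = np.sin(2 * np.pi * f_true * t) + 0.3 * np.sin(2 * np.pi * 3 * f_true * t)

# Coarse: every full period produces two zero crossings.
crossings = np.count_nonzero(np.diff(np.signbit(x)))
f_coarse = crossings / (2 * T)

# Fine: find the local |FFT| peak near the coarse bin (df = 1/T), then
# interpolate the peak position with a parabola through three bins.
X = np.abs(np.fft.rfft(x))
k0 = int(round(f_coarse * T))
k = k0 - 2 + np.argmax(X[k0 - 2:k0 + 3])     # local peak near coarse bin
a, b, c = X[k - 1], X[k], X[k + 1]
delta = 0.5 * (a - c) / (a - 2 * b + c)      # sub-bin peak offset
f_fine = (k + delta) / T
print(f"coarse {f_coarse:.2f} Hz, fine {f_fine:.3f} Hz, true {f_true} Hz")
```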

  5. ORDMET: A General Algorithm for Constructing All Numerical Solutions to Ordered Metric Data

    ERIC Educational Resources Information Center

    McClelland, Gary; Coombs, Clyde H.

    1975-01-01

    ORDMET is applicable to structures obtained from additive conjoint measurement designs, unfolding theory, general Fechnerian scaling, types of multidimensional scaling, and ordinal multiple regression. A description is obtained of the space containing all possible numerical representations which can satisfy the structure, size, and shape of which…

  6. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  7. Transient Numerical Modeling of the Combustion of Bi-Component Liquid Droplets: Methanol/Water Mixture

    NASA Technical Reports Server (NTRS)

    Marchese, A. J.; Dryer, F. L.

    1994-01-01

    This study shows that liquid mixtures of methanol and water are attractive candidates for microgravity droplet combustion experiments and associated numerical modeling. The gas-phase chemistry for these droplet mixtures is conceptually simple, well understood, and substantially validated. In addition, the thermodynamic and transport properties of the liquid mixture have been well characterized. Furthermore, the results obtained in this study predict that the extinction of these droplets may be observable in ground-based drop-tower experiments. Such experiments will be conducted shortly, followed by space-based experiments utilizing the NASA FSDC and DCE experiments.

  8. Numerical Modeling for Hole-Edge Cracking of Advanced High-Strength Steels (AHSS) Components in the Static Bend Test

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Mohr, William; Yang, Yu-Ping; Zelenak, Paul; Kimchi, Menachem

    2011-08-01

    Numerical modeling of local formability, such as hole-edge cracking and shear fracture in bending of AHSS, is one of the challenging issues for simulation engineers in predicting and evaluating the stamping and crash performance of materials. This is because continuum-mechanics-based finite element method (FEM) modeling requires additional input data, namely "failure criteria", to predict the local formability limit of materials, in addition to the material flow stress data used as simulation input. This paper presents a numerical modeling approach for predicting hole-edge failures during static bend tests of AHSS structures. A local-strain-based failure criterion and a stress-triaxiality-based failure criterion were developed and implemented in the LS-DYNA simulation code to predict hole-edge failures in component bend tests. The holes were prepared using two different methods: mechanical punching and water-jet cutting. In the component bend tests, the water-jet-trimmed hole showed delayed fracture at the hole edges, while the mechanically punched hole showed early fracture as the bending angle increased. In comparing the numerical modeling and test results, the load-displacement curve, the displacement at the onset of cracking, and the final crack shape/length were used. Both failure criteria also enable the numerical model to differentiate between the local formability limits of mechanically punched and water-jet-trimmed holes. The failure criteria and static bend test developed here are useful for evaluating the local formability limit at a structural component level for automotive crash tests.

  9. Data assimilation into a numerical equatorial ocean model. I. The model and the assimilation algorithm

    NASA Astrophysics Data System (ADS)

    Long, Robert Bryan; Thacker, William Carlisle

    1989-06-01

    Numerical modeling provides a powerful tool for the study of the dynamics of oceans and atmospheres. However, the relevance of modeling results can only be established by reference to observations of the system being modeled. Typical oceanic observation sets are sparse, asynoptic, of mixed type and limited reliability, generally inadequate in some respects, and redundant and inconsistent in others. An optimal procedure for interfacing such data sets with a numerical model is the so-called adjoint method. This procedure effectively assimilates the observations into a run of the numerical model by finding that solution to the model equations that best fits all observations made within some specified space-time interval. The method requires the construction of the adjoint of the numerical model, a process made practical for models of realistic complexity by the work of Thacker and Long. In the present paper, the first of two parts, we illustrate the application of Thacker and Long's approach by constructing a data-assimilating version of an equatorial ocean model incorporating the adjoint method. The model is subsequently run for 5 years to near-steady-state, and exhibits many of the features known to be characteristic of equatorial oceanic flows. Using the last 54 days of the run as a control, a set of simulated sea-level and subsurface-density observations are collected, then successfully assimilated to demonstrate that the procedure can recover the control run, given a generous amount of data. In part II we conduct a sequence of numerical experiments to explore the ability of more limited sets of observations to fix the state of the modeled ocean; in the process, we examine the potential value of sea-level data obtained via satellite altimetry.

  10. On the modeling of equilibrium twin interfaces in a single-crystalline magnetic shape memory alloy sample. II: numerical algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Steinmann, Paul

    2016-05-01

    This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.

  11. Impact of multi-component diffusion in turbulent combustion using direct numerical simulations

    DOE PAGES

    Bruno, Claudio; Sankaran, Vaidyanathan; Kolla, Hemanth; Chen, Jacqueline H.

    2015-08-28

    This study presents the results of DNS of a partially premixed turbulent syngas/air flame at atmospheric pressure. The objective was to assess the importance and possible effects of molecular transport on flame behavior and structure. To this purpose, DNS were performed with two proprietary DNS codes and with three different molecular diffusion transport models: fully multi-component, mixture-averaged, and unity Lewis number imposed for all species.

  12. Numerical Predictions on the Final Properties of Metal Injection Moulded Components after Sintering Process

    SciTech Connect

    Song, J.; Barriere, T.; Gelin, J. C.

    2007-04-07

    A macroscopic model based on a viscoplastic constitutive law is presented to describe the sintering process of metallic powder components obtained by injection moulding. The model parameters are identified by gravitational beam-bending tests during sintering and by sintering experiments in a dilatometer. The finite element simulations are carried out to predict the shrinkage, density and strength after sintering. The simulation results have been compared to the experimental ones, and a good agreement has been obtained.

  13. Numerical modeling of evapotranspiration flux components in shrub-encroached grassland in Inner Mongolia, China

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Li, Xiao-Yan; Huang, Jie-Yu; Yang, Wen-Xin; Wang, Qi-Dan; Xu, Kun; Zheng, Xiao-Ran

    2016-04-01

    Shrub encroachment into arid grasslands occurs around the world. However, little work on shrub encroachment has been conducted in China, and its hydrological implications remain poorly investigated in arid and semiarid regions. This study combined a two-source energy balance model with a Newton-Raphson iteration scheme to simulate evapotranspiration (ET) and its components in shrub-encroached (15.4% shrub coverage) grassland in Inner Mongolia. Good agreement between the modelled ET flux and Bowen-ratio measurements, together with relative insensitivity of the components to uncertainties/errors in the assigned model parameters or in the measured input variables, showed that our model is feasible for simulating evapotranspiration flux components in shrub-encroached grassland. The transpiration fraction (T/ET) accounted for 58±17% during the growing season. Under the designed extreme shrub-encroachment scenarios (maximum and minimum coverage), the contribution of shrubs to local plant transpiration (Tshrub/T) was 20.06±7% during the growing season. Canopy conductance was the main controlling factor of T/ET. At the diurnal scale, short-wave solar radiation was the direct influencing factor, while at the seasonal scale leaf area index (LAI) and soil water content were the direct influencing factors. We find that the seasonal variation of Tshrub/T correlates well with the ratio LAIshrub/LAI, and that rainfall characteristics widened the difference between the contributions of shrubs and herbs to ecosystem evapotranspiration.

  14. A component-level failure detection and identification algorithm based on open-loop and closed-loop state estimators

    NASA Astrophysics Data System (ADS)

    You, Seung-Han; Cho, Young Man; Hahn, Jin-Oh

    2013-04-01

    This study presents a component-level failure detection and identification (FDI) algorithm for a cascade mechanical system subsuming a plant driven by an actuator unit. The novelty of the FDI algorithm presented in this study is that it is able to discriminate among failures occurring in the actuator unit, in the sensor measuring the output of the actuator unit, and in the plant driven by the actuator unit. The proposed FDI algorithm exploits the measurement of the actuator unit output together with its estimates generated by open-loop (OL) and closed-loop (CL) estimators to enable FDI at the component level. In this study, the OL estimator is designed based on system identification of the actuator unit. The CL estimator, which is guaranteed to be stable against variations in the plant, is synthesized based on the dynamics of the entire cascade system. The viability of the proposed algorithm is demonstrated using a hardware-in-the-loop simulation (HILS), which shows that it can detect and identify target failures reliably in the presence of plant uncertainties.

  15. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  16. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.

  17. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.

  18. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.

  19. All-electron formalism for total energy strain derivatives and stress tensor components for numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias

    2015-05-01

    We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids that are needed to produce accurate results. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include the detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.

  20. An efficient algorithm for numerical computations of continuous densities of states

    NASA Astrophysics Data System (ADS)

    Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.

    2016-06-01

    In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the algorithm parameter dependence of the results is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling of the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results perfectly agree with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which, due to strong metastabilities developed at the pseudo-critical coupling of the system, has so far been out of reach even on supercomputers with importance sampling approaches, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first-order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.

  1. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three-component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short-term-over-long-term average threshold detector is used for picking, and an association program is then used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) of Nagano et al. (Geophysics, 1989). CECM is well suited to real-time use because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector, we study the detector's sensitivity by rotating the signal into principal components, and we estimate the P-phase onset from a curvature function of the CECM rather than from the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks compared with the traditional single-component pickers in Earthworm.
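
    A toy version of the component-energy idea (not Nagano et al.'s exact formulation) illustrates why a 3C energy comparison makes a natural P detector: P energy arrives predominantly on the vertical component, so a windowed vertical-to-horizontal energy ratio rises sharply at onset. The synthetic data, window length, and threshold below are placeholders.

```python
# Toy component-energy P detector: windowed vertical-to-horizontal energy
# ratio on synthetic three-component noise with a P arrival on Z.
import numpy as np

fs, n = 100, 3000                       # 100 Hz, 30 s of 3C data
rng = np.random.default_rng(2)
z, e, north = (rng.normal(0.0, 1.0, n) for _ in range(3))
z[1000:1400] += 6 * rng.normal(0.0, 1.0, 400)   # synthetic P arrival on Z

win = 100                               # 1 s sliding energy window
def energy(x):
    return np.convolve(x ** 2, np.ones(win), mode="valid")

ratio = energy(z) / (energy(e) + energy(north))
onset = (np.argmax(ratio > 3.0) + win - 1) / fs
print(f"P onset detected near {onset:.2f} s (true onset 10.00 s)")
```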

  2. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The coupled solution of velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.

  3. Theory manual for FAROW version 1.1: A numerical analysis of the Fatigue And Reliability Of Wind turbine components

    SciTech Connect

    Winterstein, Steven R.; Veers, Paul S.

    2000-01-01

    Because the fatigue lifetime of wind turbine components depends on several factors that are highly variable, a numerical analysis tool called FAROW has been created to cast the problem of component fatigue life in a probabilistic framework. The probabilistic analysis is accomplished using methods of structural reliability (FORM/SORM). While the workings of the FAROW software package are defined in the user's manual, this theory manual outlines the mathematical basis. A deterministic solution for the time to failure is made possible by assuming analytical forms for the basic inputs of wind speed, stress response, and material resistance. Each parameter of the assumed forms for the inputs can be defined to be a random variable. The analytical framework is described and the solution for time to failure is derived.

  4. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

    NASA Technical Reports Server (NTRS)

    Weir, Kent A.; Wells, Eugene M.

    1990-01-01

    The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
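
    The mapping-matrix covariance propagation at the core of such an analysis is compact. A minimal sketch with a toy two-state model follows (SNAP's actual IMU error-state model is far larger and includes sensor failure states); the matrices and step count are placeholders.

```python
# Essence of mapping-matrix covariance propagation: P_{k+1} = Phi P Phi^T + Q.
import numpy as np

dt = 1.0
# Toy 1-D position/velocity error state with a random-walk disturbance.
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])              # state transition (mapping) matrix
Q = np.diag([0.0, 1e-6])                  # process noise added per step

P = np.diag([1e-2, 1e-4])                 # initial navigation error covariance
for _ in range(600):                      # propagate along the trajectory
    P = Phi @ P @ Phi.T + Q
print(np.sqrt(np.diag(P)))                # 1-sigma position/velocity errors
```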

  5. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficiency on computational resources, it is very common to find in the realm of numerical modeling in hydro-engineering that regular linearization techniques have been applied to nonlinear partial differential equations properly obtained in environmental flow studies. Sometimes this simplification is also made along with omission of nonlinear terms involved in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, a traditional and very commonly used water quality model such as QUAL2K preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and computer power enhancement. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets: the first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values of the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled by using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of a non-linear equation system proved to serve as a flexible tool to handle the intrinsic non-linearity that emerges from the interactions occurring between the multiple variables involved in water quality studies. However, because there is a strong data limitation in

  6. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    NASA Astrophysics Data System (ADS)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín.

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to a BI-RADS score, and the mode of their evaluations was considered as ground truth. Results show better agreement between the algorithm and ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44), which suggests the method may be used for BI-RADS classification but that better training is required.
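
    The per-class PCA scheme can be sketched briefly. In the sketch below, synthetic Dirichlet-generated histograms stand in for real mammogram histograms, and the bin and component counts are placeholders; "closest space" is interpreted as smallest reconstruction error.

```python
# Per-class PCA classification sketch: fit one PCA subspace per BI-RADS
# class and assign a new histogram to the class whose subspace
# reconstructs it with the smallest error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_bins, n_train = 256, 20
classes = [1, 2, 3, 4]                               # BI-RADS density scores

# Hypothetical training histograms per class (replace with real data).
train = {c: rng.dirichlet(np.ones(n_bins) * c, n_train) for c in classes}
spaces = {c: PCA(n_components=5).fit(train[c]) for c in classes}

def birads_score(hist):
    errors = {}
    for c, pca in spaces.items():
        recon = pca.inverse_transform(pca.transform(hist[None, :]))
        errors[c] = np.linalg.norm(hist - recon)     # distance to class space
    return min(errors, key=errors.get)

print(birads_score(rng.dirichlet(np.ones(n_bins) * 3)))
```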

  7. New Design Methods and Algorithms for Multi-component Distillation Processes

    SciTech Connect

    2009-02-01

    This factsheet describes a research project whose main goal is to develop methods and software tools for the identification and analysis of optimal multi-component distillation configurations for reduced energy consumption in industrial processes.

  8. Numerical analysis of a smart composite material mechanical component using an embedded long period grating fiber sensor

    NASA Astrophysics Data System (ADS)

    Savastru, Dan; Miclos, Sorin; Savastru, Roxana; Lancranjan, Ion I.

    2015-05-01

    Results are presented from FEM analysis of a smart mechanical part manufactured of reinforced composite materials with embedded long period grating fiber sensors (LPGFS) used for operation monitoring. Smart fiber-reinforced composite materials are of fundamental importance across a broad range of industrial applications, such as the aerospace industry. The main purpose of the performed numerical analysis is an improved final design of composite mechanical components, providing feedback useful for further automation of the whole system. The performed numerical analysis points to a correlation between the internal mechanical loads applied to the LPGFS within the composite material and the peak wavelength shifts of the NIR absorption bands. One main idea of the performed numerical analysis relies on the observed fact that an LPGFS embedded inside a composite material undergoes mechanical loads created by the micro-scale roughness of the composite fiber network. The effect of this mechanical load consists in bending of the LPGFS. The shift towards the IR and the broadening of the absorption bands appearing in the LPGFS transmission spectra are modeled according to this observation using the coupled-mode approach.

  9. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartile ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
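
    The quartile/IQR filter described above can be sketched in a few lines (a simplified stand-in for the published R script, which also handles flux calculation and quality checks; the synthetic concentration series and event timing are assumptions).

```python
# Quartile/IQR separation sketch: concentration increments outside
# [Q1 - IQR, Q3 + IQR] are attributed to ebullition, the remaining
# quasi-linear rise to diffusion.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300.0)                             # 5-min closure, 1 Hz (s)
conc = 1.8 + 0.001 * t + rng.normal(0, 0.002, t.size)  # diffusive rise (ppm)
conc[120:] += 0.5                                # sudden jump: bubble event

dc = np.diff(conc)                               # per-second increments
q1, q3 = np.percentile(dc, (25, 75))
iqr = q3 - q1
diffusive = (dc >= q1 - iqr) & (dc <= q3 + iqr)  # variable threshold filter

diffusion_rate = dc[diffusive].mean()            # ppm/s from filtered data
ebullition_jump = (dc[~diffusive] - diffusion_rate).sum()  # ppm from bubbles
print(f"diffusion {diffusion_rate:.4f} ppm/s, ebullition {ebullition_jump:.2f} ppm")
```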

  10. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    PubMed Central

    2010-01-01

    Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE based predictor. PMID
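
The G2DE itself is not reproduced here; the following Python sketch only illustrates the general idea of a classifier built from class-conditional densities, each modeled with a few Gaussian components, using scikit-learn's GaussianMixture as a stand-in. The class name and toy data are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class FewKernelDensityClassifier:
    """Two-class classifier from class-conditional densities, each modeled
    with a small number of Gaussian components (illustrative stand-in for
    the G2DE idea of a few representative kernels, not the published model)."""

    def __init__(self, n_components=3):
        self.pos = GaussianMixture(n_components=n_components, random_state=0)
        self.neg = GaussianMixture(n_components=n_components, random_state=0)

    def fit(self, X, y):
        self.pos.fit(X[y == 1])
        self.neg.fit(X[y == 0])
        return self

    def predict(self, X):
        # score_samples returns the log-density; pick the denser class
        return (self.pos.score_samples(X) > self.neg.score_samples(X)).astype(int)

# Toy usage with random features standing in for pre-miRNA descriptors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
print(FewKernelDensityClassifier().fit(X, y).predict(X[:5]))
```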

  11. A Numerical Algorithm for Determining the Contact Stress of Circular Crowned Roller Compressed between Two Flat Plates

    NASA Astrophysics Data System (ADS)

    Horng, Thin-Lin

The main purpose of this paper is to explore a numerical algorithm for determining the contact stress when a circular crowned roller is compressed between two plates. To start with, the deformation curve on a plate surface is derived using a contact mechanics model. Then, the contact stress distribution along the roller occurring on the plate surface is divided into three parts: from the center of contact to the edge, the edge itself, and the region away from the contact line. The first part is calculated by the elastic contact theorem for the contact subjected to nominal stress between the non-crowned parts of the roller and the plates, the second part is obtained from the classical Hertzian contact solution for the contact between the crowned parts of the roller and the plates, and the third part is modeled as an exponential decay. To overcome the limitation of the half-space theorem, which assumes a plate of infinite thickness, a weighting method is introduced to find the contact stress of a plate with finite thickness. Comparisons with various finite element results indicate that the algorithm derived in this paper for estimating the contact stress of a circular crowned roller compressed between two plates can be reasonably accurate when a heavy displacement load is applied. This is because the contact area is large under a heavy load, and the effect of stress concentration is smaller than in the case of a light load.

  12. A hybrid color space for skin detection using genetic algorithm heuristic search and principal component analysis technique.

    PubMed

    Maktabdar Oghaz, Mahdi; Maarof, Mohd Aizaini; Zainal, Anazida; Rohani, Mohd Foad; Yaghoubyan, S Hadi

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377
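
A minimal sketch of the two-stage idea follows, assuming a pixel matrix X whose columns are the channels of the candidate color spaces and binary skin labels y. The GA below is mutation-only for brevity and every hyperparameter is an arbitrary placeholder, so it illustrates the workflow rather than the published SKN procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ga_select_components(X, y, k=3, pop=20, gens=15, seed=0):
    """Evolve a combination of k color components (columns of X) that
    maximizes pixel-wise skin classification accuracy. Mutation-only GA
    for brevity; the published procedure also uses crossover."""
    rng = np.random.default_rng(seed)
    n_comp = X.shape[1]

    def fitness(ind):
        clf = RandomForestClassifier(n_estimators=30, random_state=0)
        return cross_val_score(clf, X[:, ind], y, cv=3).mean()

    population = [rng.choice(n_comp, size=k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop // 2]                       # elitist selection
        children = []
        for p in parents:
            child = p.copy()
            child[rng.integers(k)] = rng.integers(n_comp)  # point mutation
            children.append(child if len(set(child)) == k else p.copy())
        population = parents + children
    return max(population, key=fitness)

# X: (n_pixels, n_channels) pixel values stacked from the candidate color
# spaces; y: binary skin labels. The PCA step then decorrelates the winner:
#   best = ga_select_components(X, y)
#   skn = PCA(n_components=3).fit_transform(X[:, best])
```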

  15. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization.

    PubMed

    Islam, Sk Minhazul; Das, Swagatam; Ghosh, Saurav; Roy, Subhrajit; Suganthan, Ponnuthurai Nagaratnam

    2012-04-01

    Differential evolution (DE) is one of the most powerful stochastic real parameter optimizers of current interest. In this paper, we propose a new mutation strategy, a fitness-induced parent selection scheme for the binomial crossover of DE, and a simple but effective scheme of adapting two of its most important control parameters with an objective of achieving improved performance. The new mutation operator, which we call DE/current-to-gr_best/1, is a variant of the classical DE/current-to-best/1 scheme. It uses the best of a group (whose size is q% of the population size) of randomly selected solutions from current generation to perturb the parent (target) vector, unlike DE/current-to-best/1 that always picks the best vector of the entire population to perturb the target vector. In our modified framework of recombination, a biased parent selection scheme has been incorporated by letting each mutant undergo the usual binomial crossover with one of the p top-ranked individuals from the current population and not with the target vector with the same index as used in all variants of DE. A DE variant obtained by integrating the proposed mutation, crossover, and parameter adaptation strategies with the classical DE framework (developed in 1995) is compared with two classical and four state-of-the-art adaptive DE variants over 25 standard numerical benchmarks taken from the IEEE Congress on Evolutionary Computation 2005 competition and special session on real parameter optimization. Our comparative study indicates that the proposed schemes improve the performance of DE by a large magnitude such that it becomes capable of enjoying statistical superiority over the state-of-the-art DE variants for a wide variety of test problems. Finally, we experimentally demonstrate that, if one or more of our proposed strategies are integrated with existing powerful DE variants such as jDE and JADE, their performances can also be enhanced.
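
A minimal sketch of the two proposed operators is given below; the paper's parameter adaptation is omitted, and the population size and the F, Cr, q, and p values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_gr_best_generation(pop, f, F=0.8, Cr=0.9, q=0.15, p=0.1):
    """One generation of the DE variant sketched in the abstract:
    DE/current-to-gr_best/1 mutation plus binomial crossover with a
    p-top-ranked parent. Parameter adaptation is omitted for brevity."""
    NP, D = pop.shape
    fit = np.apply_along_axis(f, 1, pop)
    top = np.argsort(fit)[: max(1, int(p * NP))]       # p% best individuals
    out = pop.copy()
    for i in range(NP):
        # best of a random group of size q%*NP, not of the whole population
        g = rng.choice(NP, size=max(2, int(q * NP)), replace=False)
        gr_best = pop[g[np.argmin(fit[g])]]
        r1, r2 = rng.choice(np.delete(np.arange(NP), i), size=2, replace=False)
        v = pop[i] + F * (gr_best - pop[i]) + F * (pop[r1] - pop[r2])
        parent = pop[rng.choice(top)]                  # biased parent selection
        mask = rng.random(D) < Cr
        mask[rng.integers(D)] = True                   # keep >= 1 mutant gene
        trial = np.where(mask, v, parent)
        if f(trial) <= fit[i]:                         # greedy survivor selection
            out[i] = trial
    return out

# Usage: minimize the sphere function
pop = rng.uniform(-5, 5, size=(40, 10))
for _ in range(100):
    pop = de_gr_best_generation(pop, lambda x: np.sum(x * x))
print(np.min(np.sum(pop**2, axis=1)))
```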

  16. Three-Dimensional Finite Element Based Numerical Simulation of Machining of Thin-Wall Components with Varying Wall Constraints

    NASA Astrophysics Data System (ADS)

    Joshi, Shrikrishna Nandkishor; Bolar, Gururaj

    2016-06-01

Control of part deflection and deformation during machining of low-rigidity thin-wall components is an important aspect in the manufacture of desired-quality products. This paper presents a comparative study on the effect of geometry constraints on product quality during machining of thin-wall components made of an aerospace alloy, aluminum 2024-T351. Three-dimensional nonlinear finite element (FE) based simulations of machining of thin-wall parts were carried out by considering three variations in the wall constraint, viz. free wall, wall constrained at one end, and wall constrained at both ends. A Lagrangian-formulation-based transient FE model has been developed to simulate the interaction between the workpiece and the helical milling cutter. Johnson-Cook material and damage models were adopted to account for material behavior during the machining process, damage initiation, and chip separation. A modified Coulomb friction model was employed to define the contact between the cutting tool and the workpiece. The numerical model was validated against experimental results and found to be in good agreement. Based on the simulation results, it was noted that deflection and deformation were largest in the thin wall constrained at one end in comparison with the other cases. It was also noted that three-dimensional finite element simulations help to better predict product quality during precision manufacturing of thin-wall components.
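
For reference, the Johnson-Cook flow stress used in such machining models has the standard closed form below; the constants are representative open-literature values for Al 2024-T351, not necessarily those used in this paper.

```python
import numpy as np

def johnson_cook_stress(strain, strain_rate, T,
                        A=352e6, B=440e6, n=0.42, C=0.0083, m=1.0,
                        eps0=1.0, T_room=293.0, T_melt=775.0):
    """Johnson-Cook flow stress (Pa):
    sigma = (A + B*eps^n) * (1 + C*ln(epsdot/eps0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room)/(T_melt - T_room).
    Constants are representative literature values for Al 2024-T351."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return (A + B * strain**n) \
        * (1.0 + C * np.log(np.maximum(strain_rate / eps0, 1e-12))) \
        * (1.0 - T_star**m)

print(johnson_cook_stress(strain=0.1, strain_rate=1e3, T=400.0) / 1e6, "MPa")
```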

  17. Analysis of the principal component algorithm in phase-shifting interferometry.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-06-15

We recently presented a new asynchronous demodulation method for phase-shifting interferometry. The method is based on the principal component analysis (PCA) technique. In the former work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
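
The underlying demodulation step is compact enough to sketch. Assuming K >= 3 randomly phase-shifted frames, the first two spatial principal components play the roles of the cosine and sine of the modulating phase (the sign and a global offset remain undetermined); this is a sketch of the approach analysed in the paper, not the authors' code.

```python
import numpy as np

def pca_demodulate(frames):
    """Recover the wrapped phase from a stack of phase-shifted
    interferograms via PCA. frames : (K, H, W) fringe patterns."""
    K, H, W = frames.shape
    X = frames.reshape(K, -1).astype(float)
    X -= X.mean(axis=0)                  # remove the background term
    # spatial principal components via SVD of the centered stack
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pc1, pc2 = Vt[0].reshape(H, W), Vt[1].reshape(H, W)
    return np.arctan2(pc2, pc1)          # wrapped phase, up to sign/offset

# Synthetic check: 5 frames with random phase shifts
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
phi = 0.002 * ((xx - W / 2) ** 2 + (yy - H / 2) ** 2)     # true phase
shifts = np.random.default_rng(0).uniform(0, 2 * np.pi, 5)
frames = np.array([1 + 0.8 * np.cos(phi + d) for d in shifts])
phase = pca_demodulate(frames)
print(phase.shape)   # (64, 64): wrapped phase map up to sign and offset
```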

  18. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications.

Catalogue identifier: AETN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 1 602 716
No. of bytes in distributed program, including test data, etc.: 26 489 166
Distribution format: tar.gz
Programming language: C/C++, CUDA C/C++, MPI
Computer: Any computer with nVidia GPGPUs of compute capability 3.0
Operating system: Linux

  19. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumour development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained nonlinear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (maximum benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top-performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means

  20. Practical analytical backscatter error bars for elastic one-component lidar inversion algorithm.

    PubMed

    Rocadenbosch, Francesc; Reba, M Nadzri Md; Sicard, Michaël; Comerón, Adolfo

    2010-06-10

We present an analytical formulation to compute the total-backscatter range-dependent error bars from the well-known Klett elastic-lidar inversion algorithm. A combined error-propagation and statistical formulation approach is used to assess inversion errors in response to the following error sources: observation noise (i.e., signal-to-noise ratio) in the reception channel, the user's uncertainty in the backscatter calibration, and the uncertainty in the (range-dependent) total extinction-to-backscatter ratio provided. The method is validated using a Monte Carlo procedure, in which the error bars are computed by inversion of a large population of noise-corrupted synthetic lidar signals, for total optical depths τ ≤ 5 and typical user uncertainties, all of which yields a practical tool to compute the sought-after error bars.

  1. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    NASA Astrophysics Data System (ADS)

    Hild, Kenneth E.; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction.

  2. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-10-01

The growing use of composite materials in aircraft structures has attracted much attention to impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, making it difficult to obtain the phase velocity directly. In addition, composite structures usually show obvious anisotropy, and the complex structural style of real aircraft further accentuates it, which greatly reduces the localization precision of the MUSIC-based method. To improve MUSIC-based impact monitoring, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. To improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy.
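
For orientation, the sketch below shows a generic narrowband MUSIC pseudospectrum for a uniform linear array; it is not the SFCBR-MUSIC variant, which additionally extracts a single frequency component and re-estimates the phase velocity for complex composites.

```python
import numpy as np

def music_spectrum(R, steering, n_sources=1):
    """Narrowband MUSIC pseudospectrum (generic ULA sketch).
    R : (M, M) sensor covariance; steering : angle (rad) -> (M,) vector."""
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : R.shape[0] - n_sources]   # noise subspace
    angles = np.deg2rad(np.arange(0, 180, 0.5))
    p = []
    for th in angles:
        a = steering(th)
        p.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return np.rad2deg(angles), np.array(p)

# Example: 8-element array, half-wavelength spacing, one source at 60 deg
M, d = 8, 0.5                             # spacing in wavelengths
steer = lambda th: np.exp(-2j * np.pi * d * np.arange(M) * np.cos(th))
rng = np.random.default_rng(0)
snap = np.outer(steer(np.deg2rad(60.0)), rng.normal(size=200)) \
     + 0.1 * (rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200)))
R = snap @ snap.conj().T / 200
angles, p = music_spectrum(R, steer)
print(angles[np.argmax(p)])               # peak near 60
```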

  3. γ-TEMPy: Simultaneous Fitting of Components in 3D-EM Maps of Their Assembly Using a Genetic Algorithm

    PubMed Central

    Pandurangan, Arun Prasad; Vasishtan, Daven; Alber, Frank; Topf, Maya

    2015-01-01

We have developed a genetic algorithm for building macromolecular complexes using only a 3D electron microscopy density map and the atomic structures of the relevant components. For efficient sampling, the method uses map feature points calculated by vector quantization. The fitness function combines a mutual information score that quantifies the goodness of fit with a penalty score that helps to avoid clashes between components. Testing the method on ten assemblies (containing 3–8 protein components) and simulated density maps at 10, 15, and 20 Å resolution resulted in identification of the correct topology in 90%, 70%, and 60% of the cases, respectively. We further tested it on four assemblies with experimental maps at 7.2–23.5 Å resolution, showing the ability of the method to identify the correct topology in all cases. We have also demonstrated the importance of map feature-point quality for assembly fitting in the absence of additional experimental information. PMID:26655474
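
The goodness-of-fit term of such a fitness function can be sketched directly. In the trailing comment, `simulated_map` and `n_clashing_atom_pairs` are hypothetical placeholders for the assembly-dependent pieces, not functions from the paper.

```python
import numpy as np

def mutual_information(map_a, map_b, bins=32):
    """Mutual information between two density maps; this is the
    goodness-of-fit term of a gamma-TEMPy-style fitness, from which a
    clash penalty would be subtracted."""
    hist, _, _ = np.histogram2d(map_a.ravel(), map_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint voxel-value histogram
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# fitness(placement) ~ mutual_information(simulated_map(placement), em_map)
#                      - clash_weight * n_clashing_atom_pairs(placement)
```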

  4. Analysis of seismic waves crossing the Santa Clara Valley using the three-component MUSIQUE array algorithm

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter

    2016-08-01

We introduce the MUSIQUE algorithm and apply it to seismic wave field recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and the quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wave field recordings of the San Jose Dense Seismic Array. This array was installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicenters between 40 and 600 km from the array, covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wave field for the vast majority of the events. For close events in the north, the wave field is dominated by the first harmonic mode of Love waves; for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring north-west and south-east of the array. In both cases, the waves crossing the array arrive mostly from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and the first harmonic mode. The particle motion of the fundamental mode of Rayleigh waves is retrograde and for the first harmonic mode, it is prograde. For both modes, we can also retrieve dispersion and

  5. Analysis of seismic waves crossing the Santa Clara Valley using the three-component MUSIQUE array algorithm

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter

    2016-10-01

We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and the quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array was installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km from the array, covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves; for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array arrive mostly from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and the first harmonic mode. The particle motion of the fundamental mode of Rayleigh waves is retrograde and for the first harmonic mode, it is prograde. For both modes, we can also retrieve dispersion and ellipticity

  6. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    PubMed

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful way to construct an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but up to now only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results on real sales data show that the forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.
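
A minimal sketch of the tICA-plus-SVR pipeline follows, with synthetic data standing in for branch sales; the spatial and spatiotemporal variants reorganize the data matrix before the ICA step, and the hyperparameters here are arbitrary.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

# X: (n_weeks, n_branches) branch sales; y: chain-level target per week.
# Synthetic stand-in data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(100, 20, size=(120, 12))
y = X.sum(axis=1) * 0.9 + rng.normal(0, 5, size=120)

ica = FastICA(n_components=4, random_state=0)    # temporal-ICA-style features
features = ica.fit_transform(X)                  # independent components
model = SVR(kernel="rbf", C=10.0).fit(features[:100], y[:100])
pred = model.predict(features[100:])
print(np.sqrt(np.mean((pred - y[100:]) ** 2)))   # forecast RMSE on holdout
```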

  8. Numerous Numerals.

    ERIC Educational Resources Information Center

    Henle, James M.

    This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…

  9. Computation of aircraft component flow fields at transonic Mach numbers using a three-dimensional Navier-Stokes algorithm

    NASA Technical Reports Server (NTRS)

    Shrewsbury, George D.; Vadyak, Joseph; Schuster, David M.; Smith, Marilyn J.

    1989-01-01

A computer analysis was developed for calculating steady (or unsteady) three-dimensional aircraft component flow fields. This algorithm, called ENS3D, can compute the flow field for the following configurations: diffuser duct/thrust nozzle, isolated wing, isolated fuselage, wing/fuselage with or without integrated inlet and exhaust, nacelle/inlet, nacelle (fuselage) afterbody/exhaust jet, complete transport engine installation, and multicomponent configurations using a zonal grid generation technique. Solutions can be obtained for subsonic, transonic, or hypersonic freestream speeds. The algorithm can solve either the Euler equations for inviscid flow, the thin shear layer Navier-Stokes equations for viscous flow, or the full Navier-Stokes equations for viscous flow. The flow field solution is determined on a body-fitted computational grid. A fully implicit alternating-direction-implicit method is employed for the solution of the finite difference equations. For viscous computations, either a two-layer eddy-viscosity turbulence model or the k-epsilon two-equation transport model can be used to achieve mathematical closure.

  10. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which may be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds from the 3D chromatogram directly is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) is proposed to transform the separation problem into a multi-parameter optimization issue. It is not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) is proposed, where multiple areas of candidate solutions are constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can separate a 3D chromatogram into chromatogram peaks and spectra successfully, even when they severely overlap. The experiments also show that our method is effective on real HPLC-DAD data sets. Conclusions Our method can separate 3D chromatograms successfully without knowing the number of compounds in advance; it is fast and effective. PMID:25474487

  11. Numerical modeling of Non-isothermal two-phase two-component flow process with phase change phenomena in the porous media

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Shao, H.; Thullner, M.; Kolditz, O.

    2014-12-01

In applications involving deep geothermal reservoirs, thermal recovery processes, and contaminated groundwater sites, multiphase multicomponent flow and transport processes are often considered the most important underlying physical processes. In particular, the behavior of phase appearance and disappearance is critical to the performance of many geo-reservoirs, and there is great interest in the scientific community in simulating this coupled process. This work is devoted to the modeling and simulation of two-phase, two-component flow and transport in porous media, where phase change behavior under non-isothermal conditions is considered. We have implemented the algorithm developed by Marchand et al. in the open-source scientific software OpenGeoSys. The governing equations are formulated in terms of the molar fraction of the light component and the mean pressure as persistent primary variables, which leads to a fully coupled nonlinear PDE system. An important advantage of this approach is that it avoids switching primary variables between single-phase and two-phase zones, so the same system can describe phase change behavior uniformly. Because of the number of unknown variables, closure relationships are formulated to close the equation system, using the approach of complementarity constraints. Regarding the numerical scheme, the standard Galerkin finite element method is applied for the space discretization, a fully implicit scheme for the time discretization, and the Newton-Raphson method for the global linearization as well as for the closure relationships. The model is verified on a test case developed to simulate the heat pipe problem. This benchmark involves two-phase two-component flow in saturated/unsaturated porous media under non-isothermal conditions, including phase change and mineral-water geochemical reactive transport processes. The simulation results will be

  12. Numerical dispersion, stability, and phase-speed for 3D time-domain finite-difference seismic wave propagation algorithms

    NASA Astrophysics Data System (ADS)

    Haney, M. M.; Aldridge, D. F.; Symons, N. P.

    2005-12-01

    Numerical solution of partial differential equations by explicit, time-domain, finite-difference (FD) methods entails approximating temporal and spatial derivatives by discrete function differences. Thus, the solution of the difference equation will not be identical to the solution of the underlying differential equation. Solution accuracy degrades if temporal and spatial gridding intervals are too large. Overly coarse spatial gridding leads to spurious artifacts in the calculated results referred to as numerical dispersion, whereas coarse temporal sampling may produce numerical instability (manifest as unbounded growth in the calculations as FD timestepping proceeds). Quantitative conditions for minimizing dispersion and avoiding instability are developed by deriving the dispersion relation appropriate for the discrete difference equation (or coupled system of difference equations) under examination. A dispersion relation appropriate for FD solution of the 3D velocity-stress system of isotropic elastodynamics, on staggered temporal and spatial grids, is developed. The relation applies to either compressional or shear wave propagation, and reduces to the proper form for acoustic propagation in the limit of vanishing shear modulus. A stability condition and a plane-wave phase-speed formula follow as consequences of the dispersion relation. The mathematical procedure utilized for the derivation is a modern variant of classical von Neumann analysis, and involves a 4D discrete space/time Fourier transform of the nine, coupled, FD updating formulae for particle velocity vector and stress tensor components. The method is generalized to seismic wave propagation within anelastic and poroelastic media, as well as sound wave propagation within a uniformly-moving atmosphere. A significant extension of the approach yields a stability condition for wave propagation across an interface between dissimilar media with strong material contrast (e.g., the earth's surface, the seabed
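
Two practical consequences of such an analysis, for the common second-order 3D staggered-grid scheme, are the Courant limit and the points-per-wavelength rule sketched below; these are textbook von Neumann results consistent with, but far less general than, the dispersion relation derived here.

```python
import numpy as np

def fd_stability_and_sampling(vp_max, vs_min, f_max, h):
    """Rule-of-thumb checks for a 2nd-order 3D staggered-grid
    velocity-stress scheme on a cubic grid of spacing h (m)."""
    dt_max = h / (vp_max * np.sqrt(3.0))   # 3D Courant stability limit
    lam_min = vs_min / f_max               # shortest propagating wavelength
    pts_per_wavelength = lam_min / h       # want roughly >= 10 for O(2)
    return dt_max, pts_per_wavelength

dt_max, ppw = fd_stability_and_sampling(vp_max=6000.0, vs_min=2000.0,
                                        f_max=10.0, h=20.0)
print(f"dt <= {dt_max:.4e} s, {ppw:.1f} grid points per min wavelength")
```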

  13. Simulations of emissivity in passive microwave remote sensing with three-dimensional numerical solutions of Maxwell equations and fast algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lin

In the first part of the work, we developed code for large-scale computation to solve the 3-dimensional microwave scattering problem. The Maxwell integral equations are solved by using MoM with RWG basis functions in conjunction with fast computation algorithms. Cost-effective parallel and distributed simulations were implemented on a low-cost PC cluster, which consists of 32 processors connected to a fast Ethernet switch. More than a million surface current unknowns were solved at unprecedented speeds. Accurate simulations of emissivities and bistatic coefficients from ocean and soil were achieved. An exponential correlation function and an ocean spectrum are implemented for generating the soil and ocean surfaces, which have fine-scale features with large rms slope. The results were justified by comparison with numerical results from the original code, which is based on pulse basis functions, with analytic methods like SPM, and with experiments. In the second part of the work, fully polarimetric microwave emissions from wind-generated foam-covered ocean surfaces were investigated. The foam is treated as densely packed air bubbles coated with a thin seawater layer. The absorption, scattering and extinction coefficients were calculated by Monte Carlo simulations of solutions of the Maxwell equations for a collection of coated particles. The effects of the boundary roughness of ocean surfaces were included by using the second-order small perturbation method (SPM) describing the reflection coefficients between foam and ocean. An empirical wave-number spectrum was used to represent the small-scale wind-generated sea surfaces. The theoretical results for the four Stokes brightness temperatures with typical foam parameters in passive remote sensing at 10.8 GHz, 19.0 GHz and 36.5 GHz are illustrated. The azimuthal variations of the polarimetric brightness temperature were calculated. Emission at various wind speeds and foam layer thicknesses was studied. The results were also compared

  14. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piecewise-linear, least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
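
As a one-dimensional illustration of the MacCormack time marching named above (not the paper's unstructured finite-volume discretization of the Euler equations), a predictor-corrector step for linear advection looks like this:

```python
import numpy as np

def maccormack_advection(u, nu):
    """One second-order MacCormack predictor-corrector step for
    u_t + a u_x = 0 with Courant number nu = a*dt/dx (periodic BCs)."""
    pred = u - nu * (np.roll(u, -1) - u)                      # forward predictor
    return 0.5 * (u + pred - nu * (pred - np.roll(pred, 1)))  # backward corrector

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)      # small-amplitude Gaussian pulse
for _ in range(100):
    u = maccormack_advection(u, nu=0.5)
print(float(u.max()))                  # pulse advected with amplitude near 1
```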

  15. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is suitable for extracting the FECG. Most previous ICA-R algorithms focused only on how to optimize the cost function of the ICA-R and paid little attention to improving the cost function itself; they did not fully take advantage of the prior information about the desired signal. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussianity measurement function and then construct a new cost function by directly using a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve the computation of the difference between the function of a Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it has lower computational complexity. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
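
A rough sketch of one-unit extraction guided by a reference signal follows. Here the reference only seeds the initial weight vector, whereas the paper folds the reference and the kurtosis information into the cost function itself, so this illustrates the setting rather than the proposed algorithm.

```python
import numpy as np

def one_unit_ica_with_reference(X, ref, n_iter=100):
    """Extract a single source resembling `ref` via whitening plus
    one-unit FastICA updates with a kurtosis contrast, seeded by the
    reference (illustrative, not the paper's ICA-R cost function).
    X : (n_channels, n_samples) mixtures; ref : (n_samples,) template."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))          # centering + whitening,
    W = E @ np.diag(d ** -0.5) @ E.T          # as in the preprocessing step
    Z = W @ X
    w = Z @ ref / len(ref)                    # reference-correlated start
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ Z
        w_new = (Z * y**3).mean(axis=1) - 3.0 * w   # g(y)=y^3 contrast
        w_new /= np.linalg.norm(w_new)
        converged = np.abs(w_new @ w) > 1 - 1e-9
        w = w_new
        if converged:
            break
    return w @ Z                              # the extracted source

# Toy demo: recover a fast sine buried in two mixtures
t = np.linspace(0, 1, 2000)
s = np.vstack([np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 2 * t))])
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ s
ref = np.sin(2 * np.pi * 7 * t + 0.3)         # crude template of the target
y = one_unit_ica_with_reference(X, ref)
print(np.abs(np.corrcoef(y, s[0])[0, 1]))     # ideally close to 1
```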

  16. Independent component analysis (ICA) algorithms for improved spectral deconvolution of overlapped signals in 1H NMR analysis: application to foods and related products.

    PubMed

    Monakhova, Yulia B; Tsikin, Alexey M; Kuballa, Thomas; Lachenmeier, Dirk W; Mushtakova, Svetlana P

    2014-05-01

The major challenge facing NMR spectroscopic mixture analysis is the overlap of signals and the resulting difficulty in recovering the structures of the individual components for identification and in integrating separated signals for quantification. In this paper, various independent component analysis (ICA) algorithms [mutual information least dependent component analysis (MILCA); stochastic non-negative ICA (SNICA); joint approximate diagonalization of eigenmatrices (JADE); and robust, accurate, direct ICA algorithm (RADICAL)] as well as deconvolution methods [simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) and multivariate curve resolution-alternating least squares (MCR-ALS)] are applied to the simultaneous 1H NMR spectroscopic determination of organic substances in complex mixtures. Among others, we studied constituents of the following matrices: honey, soft drinks, and liquids used in electronic cigarettes. Good-quality spectral resolution of up to eight-component mixtures was achieved (correlation coefficients between resolved and experimental spectra were not less than 0.90). In general, the relative errors in the recovered concentrations were below 12%. The SIMPLISMA and MILCA algorithms were found to be preferable for NMR spectra deconvolution and showed similar performance. The proposed method was used for the analysis of authentic samples. The resolved ICA concentrations match well with the results of reference gas chromatography-mass spectrometry as well as with the MCR-ALS algorithm used for comparison. ICA deconvolution considerably improves the application range of direct NMR spectroscopy for the analysis of complex mixtures.

  17. REVIEW OF THE GOVERNING EQUATIONS, COMPUTATIONAL ALGORITHMS, AND OTHER COMPONENTS OF THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    EPA Science Inventory

    This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...

  18. Numerical simulation of two-dimensional heat transfer in composite bodies with application to de-icing of aircraft components. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Chao, D. F. K.

    1983-01-01

Transient numerical simulations of the de-icing of composite aircraft components by electrothermal heating were performed for a two-dimensional rectangular geometry. The implicit Crank-Nicolson formulation was used to ensure stability of the finite-difference heat conduction equations, and the phase change in the ice layer was simulated using the enthalpy method. The Gauss-Seidel point iterative method was used to solve the system of difference equations. Numerical solutions illustrating de-icer performance for various composite aircraft structures and environmental conditions are presented. Comparisons are made with previous studies. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
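
A single-layer, one-dimensional Crank-Nicolson step (without the enthalpy phase-change treatment) conveys the core of the formulation; all material numbers below are placeholders, not values from the thesis.

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(T, alpha, dx, dt):
    """One Crank-Nicolson step for 1-D transient conduction T_t = alpha*T_xx
    with fixed-temperature ends (single-layer sketch; the thesis couples
    several layers in 2-D and adds the enthalpy method for melting)."""
    n = len(T)
    r = alpha * dt / (2.0 * dx * dx)
    # banded implicit matrix: (1+2r) on the diagonal, -r off-diagonal
    ab = np.zeros((3, n))
    ab[0, 2:], ab[1, 1:-1], ab[2, :-2] = -r, 1 + 2 * r, -r
    ab[1, 0] = ab[1, -1] = 1.0            # Dirichlet boundary rows
    rhs = T.copy()
    rhs[1:-1] = r * T[:-2] + (1 - 2 * r) * T[1:-1] + r * T[2:]
    return solve_banded((1, 1), ab, rhs)

# Heater held at 40 C on one end of a 2 cm slice initially at -10 C
T = np.full(41, -10.0); T[0] = 40.0
for _ in range(200):
    T = crank_nicolson_step(T, alpha=1e-6, dx=5e-4, dt=0.1)
print(T[:5].round(2))
```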

  19. Classical two-dimensional numerical algorithm for α-Induced charge carrier advection-diffusion in Medipix-3 silicon pixel detectors

    NASA Astrophysics Data System (ADS)

    Biamonte, Mason; Idarraga, John

    2013-04-01

    A classical hybrid alternating-direction implicit difference scheme is used to simulate two-dimensional charge carrier advection-diffusion induced by alpha particles incident upon silicon pixel detectors at room temperature in vacuum. A mapping between the results of the simulation and a projection of the cluster size for each incident alpha is constructed. The error between the simulation and the experimental data diminishes with the increase in the applied voltage for the pixels in the central region of the cluster. Simulated peripheral pixel TOT values do not match the data for any value of applied voltage, suggesting possible modifications to the current algorithm from first principles. Coulomb repulsion between charge carriers is built into the algorithm using the Barnes-Hut tree algorithm. The plasma effect arising from the initial presence of holes in the silicon is incorporated into the simulation. The error between the simulation and the data helps identify physics not accounted for in standard literature simulation techniques.

  20. Two- and Three-Dimensional Numerical Experiments Representing Two Limiting Cases of an In-Line Pair of Finger Seal Components

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Steinetz, B. M.; Kudriavtsev, V. V.; Proctor, M. P.; Kiraly, L. James (Technical Monitor)

    2002-01-01

The work presented here concerns the numerical development and simulation of the flow, pressure patterns and motion of a pair of fingers arranged behind each other and axially aligned in-line. The fingers represent the basic elemental component of a Finger Seal (FS) and form a tight seal around the rotor. Yet their flexibility allows compliance with rotor motion and, in a passive-adaptive mode, with the hydrodynamic forces induced by the flowing fluid. While the paper does not treat the actual staggered configuration of a finger seal, the in-line arrangement represents a first step towards that final goal. The numerical 2-D (axial-radial) and 3-D results presented herein were obtained using a commercial package (CFD-ACE+). Both models use an integrated numerical approach, which couples the hydrodynamic fluid model (Navier-Stokes based) to the solid mechanics code that models the compliance of the fingers.

  1. Contrasting sediment melt and fluid signatures for magma components in the Aeolian Arc: Implications for numerical modeling of subduction systems

    NASA Astrophysics Data System (ADS)

    Zamboni, Denis; Gazel, Esteban; Ryan, Jeffrey G.; Cannatelli, Claudia; Lucchi, Federico; Atlas, Zachary D.; Trela, Jarek; Mazza, Sarah E.; De Vivo, Benedetto

    2016-06-01

The complex geodynamic evolution of the Aeolian Arc in the southern Tyrrhenian Sea resulted in melts with some of the most pronounced along-arc geochemical variations in incompatible trace elements and radiogenic isotopes worldwide, likely reflecting variations in arc magma source components. Here we elucidate the effects of subducted components on magma sources along different sections of the Aeolian Arc by evaluating the systematics of elements depleted in the upper mantle but enriched in the subducting slab, focusing on a new set of B, Be, As, and Li measurements. Based on our new results, we suggest that both hydrous fluids and silicate melts were involved in element transport from the subducting slab to the mantle wedge. Hydrous fluids strongly influence the chemical composition of lavas in the central arc (Salina), while a melt component from subducted sediments probably plays a key role in metasomatic reactions in the mantle wedge below the peripheral islands (Stromboli). We also noted similarities in subducting components between the Aeolian Archipelago, the Phlegrean Fields, and other volcanic arcs/arc segments around the world (e.g., Sunda, Cascades, Mexican Volcanic Belt). We suggest that the presence of melt components in all these locations resulted from an increase in the mantle wedge temperature by inflow of hot asthenospheric material through tears/windows in the slab or around the edges of the sinking slab.

  2. Numerical simulation of cesium and strontium migration through sodium bentonite altered by cation exchange with groundwater components

    SciTech Connect

    Jacobsen, J.S.; Carnahan, C.L.

    1988-10-01

    Numerical simulations have been used to investigate how spatial and temporal changes in the ion exchange properties of bentonite affect the migration of cationic fission products from high-level waste. Simulations in which fission products compete for exchange sites with ions present in groundwater diffusing into the bentonite are compared to simulations in which the exchange properties of bentonite are constant. 12 refs., 3 figs., 2 tabs.

  3. Numerically efficient angle, width, offset, and discontinuity determination of straight lines by the discrete Fourier-bilinear transformation algorithm.

    PubMed

    Lou, X M; Hassebrook, L G; Lhamon, M E; Li, J

    1997-01-01

    We introduce a new method for determining the number of straight lines, line angles, offsets, widths, and discontinuities in complicated images. In this method, line angles are obtained by searching the peaks of a hybrid discrete Fourier and bilinear transformed line angle spectrum. Numerical advantages and performance are demonstrated.

  4. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1981-01-01

A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling and heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.

  5. A Numerical Algorithm to Calculate the Pressure Distribution of the TPS Front End Due to Desorption Induced by Synchrotron Radiation

    SciTech Connect

    Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.

    2010-06-23

The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) must account for outgassing induced by photons from bending magnets and insertion devices. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.

  6. Numerical analysis of second harmonic generation for THz-wave in a photonic crystal waveguide using a nonlinear FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Saito, Kyosuke; Tanabe, Tadao; Oyama, Yutaka

    2016-04-01

We present a numerical analysis describing the behavior of second harmonic generation (SHG) in the THz regime, taking into account both the linear and nonlinear optical susceptibilities. We employed a nonlinear finite-difference time-domain (nonlinear FDTD) method to simulate the SHG output characteristics of a THz photonic crystal waveguide based on a semi-insulating gallium phosphide crystal. Unique phase-matching conditions, originating from photonic band dispersion with low group velocity, appear and shape the SHG output characteristics. This numerical study provides spectral information on the SHG output in the THz PC waveguide. THz PC waveguides are among the candidate active nonlinear optical devices in the THz regime, and the nonlinear FDTD method is a powerful tool for designing nonlinear photonic THz devices.

  7. Solutions of the Two Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    Leblanc, James

    In this talk we present numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit we employ numerous methods including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.

  8. A block iterative finite element algorithm for numerical solution of the steady-state, compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1976-01-01

    An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C⁰-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
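
    The block Gauss-Seidel principle underlying the iteration can be illustrated on a linear analogue (a hedged Python sketch with a toy matrix, not the paper's finite element system): each sweep solves one block of equations at a time while freezing the unknowns of the other blocks.

      import numpy as np

      def block_gauss_seidel(A, b, blocks, sweeps=50):
          x = np.zeros_like(b)
          for _ in range(sweeps):
              for idx in blocks:
                  # solve this block's equations with all other unknowns frozen
                  rhs = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
                  x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
          return x

      rng = np.random.default_rng(0)
      A = 4 * np.eye(6) + 0.3 * rng.random((6, 6))   # diagonally dominant toy system
      b = rng.random(6)
      blocks = [np.arange(0, 3), np.arange(3, 6)]
      x = block_gauss_seidel(A, b, blocks)
      print(np.allclose(A @ x, b, atol=1e-10))       # True: the sweeps converged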

  9. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

    NASA Astrophysics Data System (ADS)

    Motheau, E.; Abraham, J.

    2016-05-01

    A novel and efficient algorithm is presented in this paper for DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-split strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
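
    The operator-split time step has the following generic shape (an assumed Strang-type arrangement sketched in Python with toy operators; the paper's stiff chemistry solver and RKC integrator are replaced by stand-ins): a stiff reaction half-step, an explicit convection-diffusion step, and a second reaction half-step.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Assumed Strang-type arrangement with toy stand-ins for the paper's operators:
      # implicit stiff "chemistry" half-steps wrap an explicit convection-diffusion step.

      def react_half(y, dt):
          rate = lambda t, u: -1e4 * u * (u - 1.0) * (u - 0.5)   # stiff toy chemistry
          return solve_ivp(rate, (0.0, dt / 2), y, method="BDF",
                           rtol=1e-8, atol=1e-10).y[:, -1]

      def convect_diffuse(y, dt, dx, vel=1.0, diff=1e-3):
          dydx = (y - np.roll(y, 1)) / dx                        # upwind convection
          d2ydx2 = (np.roll(y, -1) - 2 * y + np.roll(y, 1)) / dx**2
          return y + dt * (-vel * dydx + diff * d2ydx2)

      nx = 64
      dx, dt = 1.0 / nx, 2e-4
      y = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(nx) * dx)
      for _ in range(100):
          y = react_half(y, dt)            # first half-step of the split
          y = convect_diffuse(y, dt, dx)
          y = react_half(y, dt)            # second half-step completes Strang splitting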

  10. Numerical modeling of submarine landslide-generated tsunamis as a component of the Alaska Tsunami Inundation Mapping Project

    USGS Publications Warehouse

    Suleimani, E.; Lee, H.; Haeussler, Peter J.; Hansen, R.

    2006-01-01

    Tsunami waves are a threat for many Alaska coastal locations, and community preparedness plays an important role in saving lives and property. The Geophysical Institute of the University of Alaska Fairbanks participates in the National Tsunami Hazard Mitigation Program by evaluating and mapping potential tsunami inundation of selected coastal communities in Alaska. We develop hypothetical tsunami scenarios based on the parameters of potential underwater earthquakes and landslides for a specified coastal community. The modeling results are delivered to the community for local tsunami hazard planning and construction of evacuation maps. For the community of Seward, located at the head of Resurrection Bay, tsunami potential from tectonic and submarine landslide sources must be evaluated for comprehensive inundation mapping. Recent multi-beam and high-resolution sub-bottom profile surveys of Resurrection Bay show medium- and large-sized blocks, which we interpret as landslide debris that slid in the 1964 earthquake. Numerical modeling of the 1964 underwater slides and tsunamis will help to validate and improve the models. In order to construct tsunami inundation maps for Seward, we combine two different approaches for estimating tsunami risk. First, we observe inundation and runup due to tsunami waves generated by the 1964 earthquake. Next we model tsunami wave dynamics in Resurrection Bay caused by superposition of the local landslide-generated waves and the major tectonic tsunami. We compare modeled and observed values from 1964 to calibrate the numerical tsunami model. In our second approach, we perform a landslide tsunami hazard assessment using underwater slope stability analysis and available characteristics of potentially unstable sediment bodies. The approach produces hypothetical underwater slides and resulting tsunami waves. We use a three-dimensional numerical model of an incompressible viscous slide with full interaction between the slide

  11. A grand canonical genetic algorithm for the prediction of multi-component phase diagrams and testing of empirical potentials

    NASA Astrophysics Data System (ADS)

    Tipton, William W.; Hennig, Richard G.

    2013-12-01

    We present an evolutionary algorithm which predicts stable atomic structures and phase diagrams by searching the energy landscape of empirical and ab initio Hamiltonians. Composition and geometrical degrees of freedom may be varied simultaneously. We show that this method utilizes information from favorable local structure at one composition to predict that at others, achieving far greater efficiency of phase diagram prediction than a method which relies on sampling compositions individually. We detail this and a number of other efficiency-improving techniques implemented in the genetic algorithm for structure prediction code that is now publicly available. We test the efficiency of the software by searching the ternary Zr-Cu-Al system using an empirical embedded-atom model potential. In addition to testing the algorithm, we also evaluate the accuracy of the potential itself. We find that the potential stabilizes several correct ternary phases, while a few of the predicted ground states are unphysical. Our results suggest that genetic algorithm searches can be used to improve the methodology of empirical potential design.
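
    A central ingredient of such grand-canonical searches is ranking candidates by their energy above the convex hull of formation energies, so that a favorable structure found at one composition re-ranks candidates at all others. A hedged Python sketch for a binary system (toy energies; not the authors' published structure-prediction code):

      import numpy as np

      def energy_above_hull(x, e, xs, es):
          # lower hull at x = min over all straddling pairs of linear interpolations
          best = np.inf
          for i in range(len(xs)):
              for j in range(len(xs)):
                  if xs[i] <= x <= xs[j] and xs[i] < xs[j]:
                      t = (x - xs[i]) / (xs[j] - xs[i])
                      best = min(best, (1 - t) * es[i] + t * es[j])
          if not np.isfinite(best):                 # x sits exactly on an endpoint
              best = min(e2 for x2, e2 in zip(xs, es) if x2 == x)
          return e - best

      xs = np.array([0.0, 0.5, 1.0])                # known compositions (fraction B)
      es = np.array([0.0, -0.3, 0.0])               # formation energies per atom (eV)
      print(energy_above_hull(0.25, -0.05, xs, es)) # 0.10 eV above the tie-line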

  12. A grand canonical genetic algorithm for the prediction of multi-component phase diagrams and testing of empirical potentials.

    PubMed

    Tipton, William W; Hennig, Richard G

    2013-12-11

    We present an evolutionary algorithm which predicts stable atomic structures and phase diagrams by searching the energy landscape of empirical and ab initio Hamiltonians. Composition and geometrical degrees of freedom may be varied simultaneously. We show that this method utilizes information from favorable local structure at one composition to predict that at others, achieving far greater efficiency of phase diagram prediction than a method which relies on sampling compositions individually. We detail this and a number of other efficiency-improving techniques implemented in the genetic algorithm for structure prediction code that is now publicly available. We test the efficiency of the software by searching the ternary Zr-Cu-Al system using an empirical embedded-atom model potential. In addition to testing the algorithm, we also evaluate the accuracy of the potential itself. We find that the potential stabilizes several correct ternary phases, while a few of the predicted ground states are unphysical. Our results suggest that genetic algorithm searches can be used to improve the methodology of empirical potential design. PMID:24184679

  13. A Numerical Feasibility Study of Three-Component Induction Logging for Three Dimensional Imaging About a Single Borehole

    SciTech Connect

    ALUMBAUGH, DAVID L.; WILT, MICHAEL J.

    1999-08-01

    A theoretical analysis has been completed for a proposed induction logging tool designed to yield data which are used to generate three dimensional images of the region surrounding a well bore. The proposed tool consists of three mutually orthogonal magnetic dipole sources and multiple 3-component magnetic field receivers offset at different distances from the source. The initial study employs sensitivity functions which are derived by applying the Born Approximation to the integral equation that governs the magnetic fields generated by a magnetic dipole source located within an inhomogeneous medium. The analysis has shown that the standard coaxial configuration, where the magnetic moments of both the source and the receiver are aligned with the axis of the well bore, offers the greatest depth of sensitivity away from the borehole compared to any other source-receiver combination. In addition this configuration offers the best signal-to-noise characteristics. Due to the cylindrically symmetric nature of the tool sensitivity about the borehole, the data generated by this configuration can only be interpreted in terms of a two-dimensional cylindrical model. For a full 3D interpretation the two radial components of the magnetic field that are orthogonal to each other must be measured. Coil configurations where both the source and receiver are perpendicular to the tool axis can also be employed to increase resolution and provide some directional information, but they offer no true 3D information.

  14. Update of upper level turbulence forecast by reducing unphysical components of topography in the numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Park, Sang-Hun; Kim, Jung-Hoon; Sharman, Robert D.; Klemp, Joseph B.

    2016-07-01

    On 2 November 2015, unrealistically large areas of light-or-stronger turbulence were predicted by the WRF-RAP (Weather Research and Forecasting Rapid Refresh)-based operational turbulence forecast system over the western U.S. mountainous regions; these were not supported by available observations. These areas are reduced by applying additional terrain averaging, which damps out the unphysical components of small-scale (~2Δx) energy aloft induced by unfiltered topography in the initialization of the WRF model. First, a control simulation with the same design as the WRF-RAP model shows that the large-scale atmospheric conditions are well simulated, but it predicts strong turbulence over the western mountainous region. Four experiments with different levels of additional terrain smoothing applied in the initialization of the model integrations significantly reduce spurious mountain-wave-like features, leading to better turbulence forecasts more consistent with the observed data.
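
    The 2Δx energy the abstract targets is exactly what a binomial 1-2-1 smoother removes, since its spectral response vanishes at the two-grid-interval wavelength. A Python sketch of repeated smoothing passes over a toy terrain field (an assumed form of "additional terrain averaging", not the WRF implementation):

      import numpy as np

      # A 1-2-1 (binomial) smoother has zero response exactly at the 2*dx wavelength,
      # so repeated passes remove the checkerboard while barely touching the ridge.

      def smooth121(field, passes=1):
          f = field.astype(float)
          for _ in range(passes):
              for axis in (0, 1):
                  f = 0.25 * np.roll(f, 1, axis) + 0.5 * f + 0.25 * np.roll(f, -1, axis)
          return f

      i, j = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
      ridge = 1000.0 * np.exp(-((i - 32) ** 2 + (j - 32) ** 2) / 200.0)
      terrain = ridge + 50.0 * (-1.0) ** (i + j)          # smooth ridge + 2*dx noise
      sm = smooth121(terrain, passes=2)
      chk = lambda f: (f * (-1.0) ** (i + j)).mean()      # checkerboard amplitude
      print(chk(terrain), chk(sm))                        # ~50 before, ~0 after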

  15. Direct Numerical Simulation of Boiling Multiphase Flows: State-of-the-Art, Modeling, Algorithmic and Computer Needs

    SciTech Connect

    Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.

    2007-04-01

    The state-of-the-art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and the level of current success in applying them to model several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in light water reactors (LWRs), which requires high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and the non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.

  16. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    NASA Astrophysics Data System (ADS)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  17. Experimental and numerical investigations on tailored tempering process of a U-channel component with tailored mechanical properties

    SciTech Connect

    Tang, B. T.; Bruschi, S.; Ghiotti, A.; Bariani, P. F.

    2013-12-16

    Hot stamping of quenchable ultra high strength steels currently represents a promising forming technology for the manufacturing of safety and crash relevant parts. For some applications, such as B-pillars and other structural components that may undergo impact loading, it may be desirable to create regions of the part with tailored mechanical properties. In the paper, a laboratory-scale hot stamped U-channel was manufactured by using a segmented die, which was heated by cartridge heaters and cooled by water channels independently. Local hardness values as low as 289 HV can be achieved using a heated die temperature of 400°C while maintaining a hardness level of 490 HV in the fully cooled region. If the die temperature was increased to 450°C, the Vickers hardness of elements in the heated region was 227 HV, with a reduction in hardness of more than 50%. Optical microscopy was used to verify the microstructure of the as-quenched phases with respect to the heated die temperatures. The FE model of the lab-scale process was developed to capture the overall hardness trends that were observed in the experiments.

  18. Experimental and numerical investigations on tailored tempering process of a U-channel component with tailored mechanical properties

    NASA Astrophysics Data System (ADS)

    Tang, B. T.; Bruschi, S.; Ghiotti, A.; Bariani, P. F.

    2013-12-01

    Hot stamping of quenchable ultra high strength steels currently represents a promising forming technology for the manufacturing of safety and crash relevant parts. For some applications, such as B-pillars and other structural components that may undergo impact loading, it may be desirable to create regions of the part with tailored mechanical properties. In the paper, a laboratory-scale hot stamped U-channel was manufactured by using a segmented die, which was heated by cartridge heaters and cooled by water channels independently. Local hardness values as low as 289 HV can be achieved using a heated die temperature of 400°C while maintaining a hardness level of 490 HV in the fully cooled region. If the die temperature was increased to 450°C, the Vickers hardness of elements in the heated region was 227 HV, with a reduction in hardness of more than 50%. Optical microscopy was used to verify the microstructure of the as-quenched phases with respect to the heated die temperatures. The FE model of the lab-scale process was developed to capture the overall hardness trends that were observed in the experiments.

  19. Laboratory-scale experiments and numerical modeling of cosolvent flushing of multi-component NAPLs in saturated porous media.

    PubMed

    Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K

    2012-10-01

    This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures, as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity, and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low-flow-rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting that a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rate of individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values. PMID:23010548

  20. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  1. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; et al

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  2. JTK_CYCLE: an efficient non-parametric algorithm for detecting rhythmic components in genome-scale datasets

    PubMed Central

    Hughes, Michael E.; Hogenesch, John B.; Kornacker, Karl

    2011-01-01

    Circadian rhythms are oscillations of physiology, behavior, and metabolism that have period lengths of 24 hours. In several model organisms and man, circadian clock genes have been characterized and found to be transcription factors. Because of this, researchers have used microarrays to characterize global regulation of gene expression and algorithmic approaches to detect cycling. Here we present a new algorithm, JTK_CYCLE, designed to efficiently identify and characterize cycling variables in large datasets. Compared to COSOPT and the Fisher’s G test, two commonly used methods for detecting cycling transcripts, JTK_CYCLE distinguishes between rhythmic and non-rhythmic transcripts more reliably and efficiently. We also show that JTK_CYCLE’s increased resistance to outliers results in considerably greater sensitivity and specificity. Moreover, JTK_CYCLE accurately measures the period, phase, and amplitude of cycling transcripts, facilitating downstream analyses. Finally, it is several orders of magnitude faster than COSOPT, making it ideal for large scale data sets. We used JTK_CYCLE to analyze legacy data sets including NIH3T3 cells, which have comparatively low amplitude. JTK_CYCLE’s improved power led to the identification of a novel cluster of RNA-interacting genes whose abundance is under clear circadian regulation. These data suggest that JTK_CYCLE is an ideal tool for identifying and characterizing oscillations in genome-scale datasets. PMID:20876817

  3. Quasi-analytical determination of noise-induced error limits in lidar retrieval of aerosol backscatter coefficient by the elastic, two-component algorithm.

    PubMed

    Sicard, Michaël; Comerón, Adolfo; Rocadenbosch, Francisco; Rodríguez, Alejandro; Muñoz, Constantino

    2009-01-10

    The elastic, two-component algorithm is the most common inversion method for retrieving the aerosol backscatter coefficient from ground- or space-based backscatter lidar systems. A quasi-analytical formulation of the statistical error in the aerosol backscatter coefficient caused by the use of real, noise-corrupted lidar signals in the two-component algorithm is presented. The error expression depends on the signal-to-noise ratio along the inversion path and takes into account "instantaneous" effects, i.e., the effect of the signal-to-noise ratio at the range where the aerosol backscatter coefficient is being computed, as well as "memory" effects, namely, both the effect of the signal-to-noise ratio in the cell where the inversion is started and the cumulative effect of the noise between that cell and the actual cell where the aerosol backscatter coefficient is evaluated. An example is shown to illustrate how the "instantaneous" effect is reduced when averaging the noise-contaminated signal over a number of cells around the range where the inversion is started.

  4. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
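
    The one-step fit described above can be sketched as follows (Python, with synthetic radiances and a toy SO2 Jacobian; shapes and variable names are assumptions): principal components derived from SO2-free spectra absorb the background variability, and the SO2 column is the least-squares coefficient on the Jacobian.

      import numpy as np

      # Schematic one-step fit (synthetic data, not the OMI processing chain).
      rng = np.random.default_rng(1)
      n_wave, n_pix, n_pc = 120, 5000, 8

      clean = rng.normal(size=(n_pix, n_wave)) @ rng.normal(size=(n_wave, n_wave)) * 1e-3
      pcs = np.linalg.svd(clean - clean.mean(0), full_matrices=False)[2][:n_pc]  # (8, 120)

      jac_so2 = 1e-3 * np.sin(np.linspace(0, 6 * np.pi, n_wave))   # toy SO2 Jacobian
      true_scd = 2.5                                               # toy column amount
      spectrum = (true_scd * jac_so2 + 1e-2 * (pcs.T @ rng.normal(size=n_pc))
                  + 1e-5 * rng.normal(size=n_wave))

      design = np.column_stack([pcs.T, jac_so2])                   # (n_wave, n_pc + 1)
      coef = np.linalg.lstsq(design, spectrum, rcond=None)[0]
      print(coef[-1])                                              # recovers ~2.5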

  5. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In the new method, asymptotic expansions are replaced with integrals evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  6. A real-time algorithm for the harmonic estimation and frequency tracking of dominant components in fusion plasma magnetic diagnostics

    SciTech Connect

    Alves, D.; Coelho, R. [Associação Euratom]; Collaboration: JET-EFDA Contributors

    2013-08-15

    The real-time tracking of instantaneous quantities such as the frequency, amplitude, and phase of components immersed in noisy signals has been a common problem in many scientific and engineering fields, such as power systems and delivery, telecommunications, and acoustics, for the past decades. In magnetically confined fusion research, extracting this sort of information from magnetic signals can be of valuable assistance in, for instance, feedback control of detrimental magnetohydrodynamic modes and disruption avoidance mechanisms, by monitoring instability growth or anticipating mode-locking events. This work is focused on nonlinear Kalman filter based methods for tackling this problem. Similar methods have already proven their merits and have been successfully employed in this scientific domain in applications such as amplitude demodulation for the motional Stark effect diagnostic. In the course of this work, three approaches are described, compared, and discussed using magnetic signals from Joint European Torus tokamak plasma discharges for benchmarking purposes.
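
    One common nonlinear-Kalman formulation of this tracking problem (a generic extended-Kalman sketch in Python, not the specific filters benchmarked in the paper) tracks the state [amplitude, phase, angular frequency] with measurement y = A*sin(phase):

      import numpy as np

      def ekf_track(y, dt, q=(1e-6, 1e-7, 1e-4), r=1e-2):
          x = np.array([1.0, 0.0, 2 * np.pi * 1.0])        # initial guess: 1 Hz tone
          P, Q = np.eye(3), np.diag(q)
          F = np.array([[1.0, 0, 0], [0, 1.0, dt], [0, 0, 1.0]])  # phi += w*dt
          freqs = []
          for yk in y:
              x, P = F @ x, F @ P @ F.T + Q                # predict
              A, phi = x[0], x[1]
              H = np.array([np.sin(phi), A * np.cos(phi), 0.0])   # linearized h(x)
              S = H @ P @ H + r
              K = P @ H / S
              x = x + K * (yk - A * np.sin(phi))           # update
              P = (np.eye(3) - np.outer(K, H)) @ P
              freqs.append(x[2] / (2 * np.pi))
          return np.array(freqs)

      dt = 1e-3
      t = np.arange(0, 5, dt)
      y = 0.8 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
      print(ekf_track(y, dt)[-1])                          # locks near 1.5 Hz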

  7. Communication: Four-component density matrix renormalization group

    SciTech Connect

    Knecht, Stefan; Reiher, Markus; Legeza, Örs

    2014-01-28

    We present the first implementation of the relativistic quantum chemical two- and four-component density matrix renormalization group algorithm that includes a variational description of scalar-relativistic effects and spin–orbit coupling. Numerical results based on the four-component Dirac–Coulomb Hamiltonian are presented for the standard reference molecule for correlated relativistic benchmarks: thallium hydride.

  8. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM.

    PubMed

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-07-01

    Data segmentation and object rendering are required for localization super-resolution microscopy, fluorescent photoactivation localization microscopy (FPALM), and direct stochastic optical reconstruction microscopy (dSTORM). We developed and validated methods for segmenting objects based on Delaunay triangulation in 3D space, followed by facet culling. We applied them to visualize mitochondrial nucleoids, which confine DNA in complexes with mitochondrial (mt) transcription factor A (TFAM) and gene expression machinery proteins, such as the mt single-stranded-DNA-binding protein (mtSSB). Eos2-conjugated TFAM visualized nucleoids in HepG2 cells, which was compared with dSTORM 3D immunocytochemistry of TFAM, mtSSB, or DNA. The localized fluorophores of the FPALM/dSTORM data were segmented using Delaunay triangulation into polyhedron models and by principal component analysis (PCA) into general PCA ellipsoids. The PCA ellipsoids were normalized to the smoothed volume of the polyhedrons or by the net unsmoothed Delaunay volume and remodeled into rotational ellipsoids to obtain models, termed DVRE. The most frequent size of the ellipsoid nucleoid model imaged via TFAM was 35 × 45 × 95 nm; 35 × 45 × 75 nm for mtDNA cores; and 25 × 45 × 100 nm for nucleoids imaged via mtSSB. Nucleoids encompassed different point densities and wide size ranges, speculatively due to different activity stemming from different TFAM/mtDNA stoichiometry/density. Considering the twofold lower axial vs. lateral resolution, only bulky DVRE models with an aspect ratio >3 and tilted toward the xy-plane were considered to be two proximal nucleoids, suspected of arising after division following mtDNA replication. The existence of proximal nucleoids in dSTORM 3D images of mtDNA "doubling" supported possible direct observation of mt nucleoid division after mtDNA replication.
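
    The segmentation-and-ellipsoid idea can be sketched as follows (Python with scipy; the edge-culling threshold and point clouds are hypothetical): triangulate the localizations, cull long edges to isolate clusters, and summarize each cluster by the axes of its PCA ellipsoid.

      import numpy as np
      from scipy.spatial import Delaunay

      # Assumed pipeline sketch: triangulate, cull edges beyond a hypothetical
      # 60 nm threshold, take connected components as nucleoid candidates, and
      # report ~2-sigma principal-axis lengths of each cluster's PCA ellipsoid.

      rng = np.random.default_rng(3)
      pts = np.vstack([rng.normal([0, 0, 0], [35, 45, 95], (200, 3)),
                       rng.normal([500, 0, 0], [30, 40, 80], (150, 3))])  # two blobs (nm)

      tri = Delaunay(pts)
      edges = {tuple(sorted((s[i], s[j])))
               for s in tri.simplices for i in range(4) for j in range(i + 1, 4)}
      keep = [(a, b) for a, b in edges if np.linalg.norm(pts[a] - pts[b]) < 60.0]

      parent = list(range(len(pts)))                  # union-find for components
      def find(u):
          while parent[u] != u:
              parent[u] = parent[parent[u]]
              u = parent[u]
          return u
      for a, b in keep:
          parent[find(a)] = find(b)
      labels = np.array([find(i) for i in range(len(pts))])

      for lab in np.unique(labels):
          cluster = pts[labels == lab]
          if len(cluster) >= 20:                      # ignore stray localizations
              axes = 2 * np.sqrt(np.linalg.eigvalsh(np.cov(cluster.T)))
              print(np.round(axes))                   # ~2-sigma axis lengths (nm)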

  9. High order hybrid numerical simulations of two dimensional detonation waves

    NASA Technical Reports Server (NTRS)

    Cai, Wei

    1993-01-01

    In order to study multi-dimensional unstable detonation waves, a high order numerical scheme suitable for calculating the detailed transverse wave structures of multidimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so different numerical techniques can be applied for different components of detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A yields B. Several cases of unstable two dimensional detonation waves are simulated and detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.

  10. Evaluation of Measured and Simulated Turbulent Components of a Snow Cover Energy Balance Model in Order to Refine the Turbulent Transfer Algorithm.

    NASA Astrophysics Data System (ADS)

    Reba, M. L.; Marks, D.; Link, T.; Pomeroy, J.; Winstral, A.

    2007-12-01

    Energy balance models use physically based principles to simulate snow cover accumulation and melt. Snobal, a snow cover energy balance model, uses a flux-profile approach to calculating the turbulent flux (sensible and latent heat flux) components of the energy balance. Historically, validation data for turbulent flux simulations have been difficult to obtain at snow-dominated sites characterized by complex terrain and heterogeneous vegetation. Currently, eddy covariance (EC) is the most defensible method available to measure turbulent flux and hence to validate this component of an energy balance model. EC was used to measure sensible and latent heat flux at two sites over three winter seasons (2004, 2005, and 2006). Both sites are located in Reynolds Creek Experimental Watershed in southwestern Idaho, USA and are characterized as semi-arid rangeland. One site is on a wind-exposed ridge with small shrubs and the other is in a wind-protected area in a small aspen stand. EC data were post-processed from 10 Hz measurements. The first objective of this work was to compare EC-measured sensible and latent heat flux and sublimation/condensation to Snobal-simulated values. Comparisons were made on several temporal scales, including inter-annual, seasonal and diurnal. The flux-profile method used in Snobal assumes equal roughness lengths for moisture and temperature, and roughness lengths are constant and not a function of stability. Furthermore, there has been extensive work on improving profile function constants that is not considered in the current version of Snobal. Therefore, the second objective of this work was to modify the turbulent flux algorithm in Snobal. Modifications were made to calculate roughness lengths as a function of stability and separately for moisture and temperature. Also, more recent formulations of the profile function constants were incorporated. The third objective was to compare EC-measured sensible and latent heat flux and sublimation

  11. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm.

    PubMed

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A

    2015-05-14

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which make it possible to improve their durability and performance considerably. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of material characteristics, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposure time of the polymer bulk heterojunction to the action of an external electric field can lead to low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposure time, the nanophases align in the direction of the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, in a reduction of the number of exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and active-layer components, we conclude that a too low or too high affinity of an electrode surface to one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency. For other particle-numbers and -sizes

  12. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm

    SciTech Connect

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.

    2015-05-14

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which make it possible to improve their durability and performance considerably. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of material characteristics, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposure time of the polymer bulk heterojunction to the action of an external electric field can lead to low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposure time, the nanophases align in the direction of the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, in a reduction of the number of exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and active-layer components, we conclude that a too low or too high affinity of an electrode surface to one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency. For other particle-numbers and -sizes

  13. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm

    NASA Astrophysics Data System (ADS)

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.

    2015-05-01

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which make it possible to improve their durability and performance considerably. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm to investigate the influence of material characteristics, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposure time of the polymer bulk heterojunction to the action of an external electric field can lead to low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposure time, the nanophases align in the direction of the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, in a reduction of the number of exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and active-layer components, we conclude that a too low or too high affinity of an electrode surface to one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop of the internal quantum efficiency. For other particle-numbers and -sizes

  14. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  15. Principal component analysis- adaptive neuro-fuzzy inference system modeling and genetic algorithm optimization of adsorption of methylene blue by activated carbon derived from Pistacia khinjuk.

    PubMed

    Ghaedi, M; Ghaedi, A M; Abdi, F; Roosta, M; Vafaei, A; Asghari, A

    2013-10-01

    In the present study, activated carbon (AC) was simply derived from Pistacia khinjuk and characterized using different techniques such as SEM and BET analysis. This new adsorbent was used for methylene blue (MB) adsorption. Fitting the experimental equilibrium data to various isotherm models shows the suitability and applicability of the Langmuir model. The adsorption mechanism and rate of the process were investigated by fitting the time-dependency data to conventional kinetic models, and it was found that adsorption follows the pseudo-second-order kinetic model. Principal component analysis (PCA) has been used for preprocessing of the input data, and genetic algorithm optimization has been used for prediction of the adsorption of methylene blue using activated carbon derived from P. khinjuk. In our laboratory, various activated carbons, as sole adsorbents or loaded with various nanoparticles, have been used for removal of many pollutants (Ghaedi et al., 2012). These results indicate that a small amount of the proposed adsorbent (1.0 g) is applicable for successful removal of MB (RE>98%) in a short time (45 min) with high adsorption capacity (48-185 mg g(-1)).

  16. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalewski expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
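
    In its simplest second-order instance, the Taylor/Cauchy-Kowalewski construction reproduces the classical single-step Lax-Wendroff scheme for u_t + c u_x = 0: time derivatives in the Taylor expansion are exchanged for space derivatives using the PDE. A Python sketch (illustrative only; the paper's schemes are higher order and multidimensional):

      import numpy as np

      # Substituting u_t = -c*u_x and u_tt = c^2*u_xx into the Taylor series in
      # time yields the classical single-step Lax-Wendroff update.

      def lax_wendroff_step(u, cfl):
          up, um = np.roll(u, -1), np.roll(u, 1)
          return u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)

      nx, cfl, steps = 200, 0.8, 250
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      u0 = np.exp(-200.0 * (x - 0.3) ** 2)
      u = u0.copy()
      for _ in range(steps):                     # pulse advects once around the box
          u = lax_wendroff_step(u, cfl)
      print(np.abs(u - u0).max())                # small dispersive error remains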

  17. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  19. Programmer's guide for LIFE2's rainflow counting algorithm

    SciTech Connect

    Schluter, L.L.

    1991-01-01

    The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. The numerical formulation of the code uses a series of cycle count matrices to describe the cyclic stress states imposed upon the turbine. In this formulation, each stress cycle is counted or "binned" according to the magnitude of its mean stress and alternating stress components and by the operating condition of the turbine. A set of numerical algorithms has been incorporated into the LIFE2 code. These algorithms determine the cycle count matrices for a turbine component using stress-time histories of the imposed stress states. This paper describes the design decisions that were made and explains the implementation of these algorithms using Fortran 77. 7 refs., 7 figs.
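
    The cycle-count-matrix construction can be sketched with a simplified three-point rainflow counter (Python; an illustrative reimplementation, not the LIFE2 Fortran source, and the bin limits are arbitrary): reversals are extracted, closed cycles are counted off a stack, and each cycle is binned by its alternating and mean stress components.

      import numpy as np

      def turning_points(sig):
          d = np.diff(sig)
          idx = np.where(d[:-1] * d[1:] < 0)[0] + 1
          return np.concatenate(([sig[0]], sig[idx], [sig[-1]]))

      def rainflow_cycles(sig):
          stack, cycles = [], []                     # cycles: (range, mean, count)
          for p in turning_points(sig):
              stack.append(p)
              while len(stack) >= 3:
                  x, y, z = stack[-3:]
                  if abs(y - x) <= abs(z - y):       # inner cycle closed
                      cycles.append((abs(y - x), 0.5 * (x + y), 1.0))
                      del stack[-3:-1]
                  else:
                      break
          for a, b in zip(stack[:-1], stack[1:]):    # leftover ramp: half cycles
              cycles.append((abs(b - a), 0.5 * (a + b), 0.5))
          return cycles

      def cycle_count_matrix(cycles, n_bins=8, s_max=2.0):
          m = np.zeros((n_bins, n_bins))             # rows: alternating, cols: mean
          for rng_, mean, cnt in cycles:
              i = min(int(0.5 * rng_ / s_max * n_bins), n_bins - 1)
              j = min(int((mean + s_max) / (2 * s_max) * n_bins), n_bins - 1)
              m[i, j] += cnt
          return m

      stress = np.array([0.0, 1.5, -0.5, 1.0, -1.5, 0.5, -1.0, 1.2, 0.0])
      print(cycle_count_matrix(rainflow_cycles(stress)))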

  20. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is among the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petrochemical refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
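
    In its generic emission-tomography form, the expectation maximization reconstruction referred to above is the MLEM multiplicative update x ← x · A^T(y / (A x)) / A^T(1). A Python sketch with a toy system matrix (assumed setup, not the authors' PBR geometry):

      import numpy as np

      rng = np.random.default_rng(4)
      n_pix, n_det = 64, 96
      A = rng.random((n_det, n_pix))
      A /= A.sum(axis=0, keepdims=True)            # normalize the toy projector
      x_true = np.zeros(n_pix)
      x_true[20:30] = 5.0                          # "hot" activity region
      y = rng.poisson(A @ x_true)                  # Poisson-noisy projections

      x = np.ones(n_pix)                           # strictly positive start
      sens = A.sum(axis=0)                         # A^T 1
      for _ in range(200):
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
      print(np.abs(x - x_true).mean())             # rough recovery of the hot region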

  1. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is among the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petrochemical refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.

  2. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  3. Determination of the electron energy distribution function in the plasma by means of numerical simulations of multiple harmonic components on a Langmuir probe characteristic—measurements in expanding microwave plasma

    NASA Astrophysics Data System (ADS)

    Jauberteau, J. L.; Jauberteau, I.

    2007-05-01

    A method is proposed to determine the electron energy distribution function (EEDF), which is related to the second derivative of the electrostatic probe characteristic by the Druyvesteyn theory. The method is based on numerical simulation of the effect induced by a sinusoidal perturbation superimposed onto the dc voltage applied to the probe. This simulation generates a multiple-harmonic-component signal over the raw experimental data. Each harmonic component can be isolated by means of finite impulse response filters. Then, the second derivative is deduced from the second harmonic component using the Taylor expansion. The efficiency of the multiple harmonic simulation method is demonstrated first on simple models and second on a typical Langmuir probe characteristic recorded in an argon-containing plasma. This method is used to investigate expanding microwave plasma for two different reactor configurations.
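
    The principle can be checked numerically on a toy probe characteristic (Python sketch; the I-V curve, perturbation amplitude, and lock-in extraction are illustrative assumptions, not the authors' FIR filter chain): expanding I(V + a sin ωt) in a Taylor series makes the cos(2ωt) amplitude equal to -(a²/4)·I''(V), from which the Druyvesteyn EEDF follows.

      import numpy as np

      def probe_current(v):                       # toy electron-retardation branch
          return np.where(v < 0.0, np.exp(v / 2.0), 1.0 + v / 2.0)

      V0, a, w = -3.0, 0.1, 2 * np.pi * 1e3
      t = np.linspace(0.0, 5e-3, 50001)           # exactly 5 drive periods
      i_t = probe_current(V0 + a * np.sin(w * t))

      amp2 = 2.0 * np.trapz(i_t * np.cos(2 * w * t), t) / t[-1]   # lock-in at 2w
      d2_harmonic = -4.0 * amp2 / a**2

      h = 1e-3                                    # finite-difference reference
      d2_direct = (probe_current(V0 + h) - 2 * probe_current(V0)
                   + probe_current(V0 - h)) / h**2
      print(d2_harmonic, d2_direct)               # both ~0.056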

  4. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum-Liu-Tesche equation

    NASA Astrophysics Data System (ADS)

    Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

    2016-10-01

    The transient response is of great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, and then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method was then used to fulfill the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used, with slight modifications, to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current was approximately 10 A, so some protective measures should be taken to improve the electromagnetic compatibility.

  5. Probabilistic structural analysis algorithm development for computational efficiency

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of the probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient, but approximate in nature. In the last six years, the algorithms have improved very significantly.

  6. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event-detection algorithms that can scale with the size of the data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
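
    The windowed-SVD reduction at step (a) can be sketched as follows (Python; the window handling and change score are assumptions, and the production code additionally uses incremental SVD and tensor updates): the dominant channel-space direction of each window is compared with that of the previous window, yielding a univariate change series whose peaks flag anomalies.

      import numpy as np

      def svd_change_series(data, win=32):
          scores, prev = [], None
          for start in range(0, data.shape[0] - win + 1, win):
              window = data[start:start + win]
              lead = np.linalg.svd(window - window.mean(0), full_matrices=False)[2][0]
              if prev is not None:
                  scores.append(1.0 - abs(prev @ lead))   # 0 if subspace unchanged
              prev = lead
          return np.array(scores)

      rng = np.random.default_rng(5)
      base = rng.normal(size=(512, 1))
      stream = np.hstack([base, 0.9 * base, 1.1 * base]) + 0.05 * rng.normal(size=(512, 3))
      stream[256:, 1] *= -1.0                             # correlation break half-way
      series = svd_change_series(stream)
      print(series.argmax())                              # peaks at the break window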

  7. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event-detection algorithms that can scale with the size of the data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.

  8. Numerical analysis of the harmonic components of the Bragg wavelength content in spectral responses of apodized fiber Bragg gratings written by means of a phase mask with a variable phase step height.

    PubMed

    Osuch, Tomasz

    2016-02-01

    The influence of the complex interference patterns created by a phase mask with variable diffraction efficiency on the reflectance spectra of the apodized fiber Bragg gratings (FBGs) it forms is studied. The effect of the significant contributions of the zeroth and higher (m>±1) diffraction orders on the Bragg wavelength peak and its harmonic components is analyzed numerically. The results obtained for Gaussian and tanh apodization profiles are compared with similar data calculated for a uniform grating. It is demonstrated that when an apodized FBG is written using a phase mask with variable diffraction efficiency, a significant enhancement of the harmonic components and a reduction of the Bragg wavelength peak are observed in the grating spectral response. This is particularly noticeable for the Gaussian apodization profile, due to the substantial contribution of phase mask sections with relatively small phase steps to the FBG formation. PMID:26831768

  9. An analysis on changes in reservoir fluid based on numerical simulation of neutron log using a Monte Carlo N-Particle algorithm

    NASA Astrophysics Data System (ADS)

    Ku, B.; Nam, M.

    2012-12-01

    Neutron logging has been widely used to estimate neutron porosity for evaluating formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) algorithm is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone and dolomite formations of various porosities. Since the salinities (concentrations of NaCl) of the borehole fluid and formation water are important factors for estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in the brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration and simulate the corresponding neutron logging data. As gas saturation decreases, the measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas contains a slightly smaller number of hydrogen atoms than brine. In the meantime, the increase in CO2 saturation due to CO2 injection reduces the measured neutron porosity, giving a clue to estimating the CO2 saturation, since the injected CO2 substitutes for the brine. A further analysis of this reduction yields a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help monitor not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea

  10. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol

    NASA Astrophysics Data System (ADS)

    Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.

    2016-06-01

    In this work, the resolution and quantitation of overlapped spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithm, were developed for the simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for the quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied to the determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL-1 for CXZ, ACF and PAR, in order, by both PCCA and MCR, while the PLS model was built for the three compounds, each in the range of 2-10 μg mL-1. The results obtained by the proposed methods were statistically compared with those of a reported method. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. They were found suitable for the determination of the studied drugs in bulk powder and tablets.

  11. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  12. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multispectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain a sensitivity similar to that of the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally render improvements in detection sensitivity in experimental optoacoustic measurements.
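
    The semi-blind variant lends itself to a compact illustration. The sketch below is not the constrained-VCA implementation; it unmixes synthetic multispectral pixels by per-pixel nonnegative least squares against assumed Gaussian-shaped absorption spectra, and the wavelengths, spectra and noise level are all invented for the demo.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical absorption spectra (n_wavelengths x n_chromophores); in
      # practice these would come from a spectrophotometer.
      wavelengths = np.linspace(700, 900, 6)
      S = np.stack([np.exp(-((wavelengths - c) / 60.0) ** 2)
                    for c in (720, 800, 880)], axis=1)

      # Synthetic multispectral image: each pixel is a nonnegative mixture.
      rng = np.random.default_rng(1)
      abundances = rng.uniform(0, 1, size=(32 * 32, 3))
      pixels = abundances @ S.T + 0.01 * rng.normal(size=(32 * 32, 6))

      # Semi-blind unmixing with known spectra: per-pixel nonnegative least
      # squares, the simplest stand-in for the constrained approach.
      unmixed = np.array([nnls(S, p)[0] for p in pixels])
      print(np.abs(unmixed - abundances).mean())   # small mean abundance error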

  13. Well-balanced Component-wise Scheme for Shallow Water System

    SciTech Connect

    Louaked, M.; Tounsi, H.

    2010-11-25

    This paper presents a well-balanced numerical scheme for solving free surface flows involving wetting and drying. The proposed algorithm combines a component-wise approach with hydrostatic reconstruction strategy to compute flows over wet or dry surfaces and to satisfy the steady state condition of still water. The robustness of the proposed scheme is verified under several benchmark hydraulic tests.

  14. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
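
    A toy rendition of two of the ideas above (eliminating identical records by sorting before any comparison, and linking similar records through the connected components of a similarity graph) might look as follows; the brute-force quadratic pair loop and the fixed edit-distance threshold are simplifications for illustration, not the paper's hierarchical clustering.

      from collections import deque

      def edit_distance(a, b):
          # Standard dynamic-programming Levenshtein distance.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[-1] + 1,
                                 prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      def link_records(records, threshold=2):
          # Sorting groups identical records, so exact duplicates vanish
          # before any pairwise comparison (the paper uses radix sort on
          # selected attributes; plain sorting plays that role here).
          uniq = sorted(set(records))
          # Join records within the edit-distance threshold and report the
          # connected components of the resulting graph as linked entities.
          adj = {u: [] for u in uniq}
          for i, u in enumerate(uniq):
              for v in uniq[i + 1:]:
                  if edit_distance(u, v) <= threshold:
                      adj[u].append(v)
                      adj[v].append(u)
          seen, components = set(), []
          for u in uniq:
              if u in seen:
                  continue
              comp, queue = [], deque([u])
              seen.add(u)
              while queue:
                  x = queue.popleft()
                  comp.append(x)
                  for y in adj[x]:
                      if y not in seen:
                          seen.add(y)
                          queue.append(y)
              components.append(comp)
          return components

      print(link_records(["john smith", "jon smith", "john smith", "mary jones"]))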

  15. Inverse transport calculations in optical imaging with subspace optimization algorithms

    SciTech Connect

    Ding, Tian; Ren, Kui

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
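
    For a purely linear caricature of this low/high-frequency splitting (the paper's setting, the radiative transport equation, is nonlinear), the components of the unknown along the top singular directions of the forward operator can be recovered analytically, leaving only the complementary subspace to a regularized minimization. Everything below, from the synthetic operator to the split rank k, is assumed.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.normal(size=(60, 40)) @ np.diag(np.logspace(0, -3, 40))  # ill-conditioned
      x_true = rng.normal(size=40)
      y = A @ x_true + 1e-3 * rng.normal(size=60)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 10   # assumed split between "low-" and "high-frequency" subspaces
      # Low-frequency part: recovered analytically from the top-k singular triplets.
      x_low = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
      # High-frequency part: minimize the residual over the complementary
      # subspace with Tikhonov regularization, a stand-in for the paper's
      # subspace minimization step.
      B = A @ Vt[k:].T
      r = y - A @ x_low
      c = np.linalg.solve(B.T @ B + 1e-4 * np.eye(B.shape[1]), B.T @ r)
      x = x_low + Vt[k:].T @ c
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))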

  16. Numerical quadrature for slab geometry transport algorithms

    SciTech Connect

    Hennart, J.P.; Valle, E. del

    1995-12-31

    In recent papers, a generalized nodal finite element formalism has been presented for virtually all known linear finite difference approximations to the discrete ordinates equations in slab geometry. For a particular angular direction μ, the neutron flux Φ is approximated by a piecewise function Φh, which over each space interval can be polynomial or quasipolynomial. Here we shall restrict ourselves to the polynomial case. Over each space interval, Φ is a polynomial of degree k, defined by its interpolating parameters in the continuous and discontinuous cases, respectively. The interpolating parameters used are the angular flux at the left and right ends of the cell and the kth Legendre moments of Φ over the cell considered.

  17. Optical color image hiding scheme by using Gerchberg-Saxton algorithm in fractional Fourier domain

    NASA Astrophysics Data System (ADS)

    Chen, Hang; Du, Xiaoping; Liu, Zhengjun; Yang, Chengwei

    2015-03-01

    We propose an optical color image hiding algorithm based on the Gerchberg-Saxton retrieval algorithm in the fractional Fourier domain. The RGB components of the color image are simultaneously converted into scrambled images by the 3D Arnold transform before the hiding operation, and these scrambled images are regarded as the amplitudes of the fractional Fourier spectra. Subsequently, the unknown phase functions in the fractional Fourier domain are calculated by the retrieval algorithm, in which the host RGB components form part of the amplitude of the input functions. The 3D Arnold transform is performed with different parameters to enhance the security of the hiding and extraction algorithm. Numerical simulations are presented to test the validity and capability of the proposed color hiding encryption algorithm.
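
    The scrambling step can be illustrated with the classic 2D Arnold cat map applied to each channel of an N x N image; the paper's 3D transform couples the RGB components as well, so the sketch below is a simplified stand-in with made-up data.

      import numpy as np

      def arnold_scramble(img, iterations=1):
          # 2D Arnold cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N.
          n = img.shape[0]
          x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          out = img.copy()
          for _ in range(iterations):
              out = out[(x + y) % n, (x + 2 * y) % n]
          return out

      def arnold_unscramble(img, iterations=1):
          # The map is unimodular, hence invertible: (x, y) -> (2x - y, y - x) mod N.
          n = img.shape[0]
          x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          out = img.copy()
          for _ in range(iterations):
              out = out[(2 * x - y) % n, (y - x) % n]
          return out

      rgb = np.random.default_rng(3).integers(0, 256, size=(64, 64, 3))
      assert np.array_equal(arnold_unscramble(arnold_scramble(rgb, 5), 5), rgb)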

  18. Classification of gasoline data obtained by gas chromatography using a piecewise alignment algorithm combined with feature selection and principal component analysis

    SciTech Connect

    Pierce, Karisa M.; Hope, Janiece L.; Johnson, Kevin J.; Wright, Bob W.; Synovec, Robert E.

    2005-11-25

    A fast and objective chemometric classification method is developed and applied to the analysis of gas chromatography (GC) data from five commercial gasoline samples. The gasoline samples serve as model mixtures, whereas the focus is on the development and demonstration of the classification method. The method is based on objective retention time alignment (referred to as piecewise alignment) coupled with analysis of variance (ANOVA) feature selection prior to classification by principal component analysis (PCA) using optimal parameters. The degree-of-class-separation is used as a metric to objectively optimize the alignment and feature selection parameters using a suitable training set thereby reducing user subjectivity, as well as to indicate the success of the PCA clustering and classification. The degree-of-class-separation is calculated using Euclidean distances between the PCA scores of a subset of the replicate runs from two of the five fuel types, i.e., the training set. The unaligned training set that was directly submitted to PCA had a low degree-of-class-separation (0.4), and the PCA scores plot for the raw training set combined with the raw test set failed to correctly cluster the five sample types. After submitting the training set to piecewise alignment, the degree-of-class-separation increased (1.2), but when the same alignment parameters were applied to the training set combined with the test set, the scores plot clustering still did not yield five distinct groups. Applying feature selection to the unaligned training set increased the degree-of-class-separation (4.8), but chemical variations were still obscured by retention time variation and when the same feature selection conditions were used for the training set combined with the test set, only one of the five fuels was clustered correctly. However, piecewise alignment coupled with feature selection yielded a reasonably optimal degree-of-class-separation for the training set (9.2), and when the
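
    The abstract does not spell out the formula for the degree-of-class-separation, so the sketch below assumes one plausible reading: the Euclidean distance between the two class means in PCA-score space, scaled by the mean within-class spread. The synthetic "chromatograms" and the use of two principal components are likewise assumptions.

      import numpy as np

      def degree_of_class_separation(scores_a, scores_b):
          # Between-class gap in score space over the average within-class spread.
          gap = np.linalg.norm(scores_a.mean(0) - scores_b.mean(0))
          spread = 0.5 * (np.linalg.norm(scores_a - scores_a.mean(0), axis=1).mean()
                          + np.linalg.norm(scores_b - scores_b.mean(0), axis=1).mean())
          return gap / spread

      rng = np.random.default_rng(4)
      # Synthetic "chromatograms": two fuel types, five replicate runs each.
      class_a = rng.normal(0.0, 1.0, size=(5, 300))
      class_a[:, 40:60] += 4.0
      class_b = rng.normal(0.0, 1.0, size=(5, 300))
      class_b[:, 200:220] += 4.0

      X = np.vstack([class_a, class_b])
      Xc = X - X.mean(0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      scores = Xc @ Vt[:2].T   # scores on the first two principal components
      print(degree_of_class_separation(scores[:5], scores[5:]))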

  19. Note on symmetric BCJ numerator

    NASA Astrophysics Data System (ADS)

    Fu, Chih-Hao; Du, Yi-Jian; Feng, Bo

    2014-08-01

    We present an algorithm that leads to BCJ numerators satisfying manifestly the three properties proposed by Broedel and Carrasco in [42]. We explicitly calculate the numerators at 4, 5 and 6-points and show that the relabeling property is generically satisfied.

  20. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    SciTech Connect

    Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.

    2012-07-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
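
    In the spirit of the decomposition described above, though far simpler than a general CSG evaluator, the sketch below computes the volume of a sphere by recursive octree subdivision: boxes provably inside or outside the component are resolved analytically, and ambiguous boxes at the depth limit are counted as half full in place of the paper's stochastic fallback.

      import numpy as np

      def box_inside_sphere(lo, hi, center, r):
          # True if the farthest corner of the box lies inside the sphere.
          far = np.maximum(np.abs(lo - center), np.abs(hi - center))
          return far @ far <= r * r

      def box_outside_sphere(lo, hi, center, r):
          # True if the nearest point of the box lies outside the sphere.
          near = np.maximum(lo - center, 0.0) + np.maximum(center - hi, 0.0)
          return near @ near >= r * r

      def volume(lo, hi, center, r, depth=7):
          if box_outside_sphere(lo, hi, center, r):
              return 0.0
          if box_inside_sphere(lo, hi, center, r):
              return float(np.prod(hi - lo))
          if depth == 0:
              return 0.5 * float(np.prod(hi - lo))  # crude fallback estimate
          mid = 0.5 * (lo + hi)
          total = 0.0
          for corner in np.ndindex(2, 2, 2):        # recurse into eight octants
              c = np.array(corner)
              total += volume(np.where(c == 0, lo, mid),
                              np.where(c == 0, mid, hi), center, r, depth - 1)
          return total

      v = volume(np.zeros(3), np.ones(3), center=np.full(3, 0.5), r=0.4)
      print(v, 4.0 / 3.0 * np.pi * 0.4 ** 3)        # computed vs analytic volume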

  1. The hierarchical algorithms--theory and applications

    NASA Astrophysics Data System (ADS)

    Su, Zheng-Yao

    Monte Carlo simulations are one of the most important numerical techniques for investigating statistical physical systems. Among these systems, spin models are a typical example, which also play an essential role in constructing the abstract mechanisms of various complex systems. Unfortunately, traditional Monte Carlo algorithms are afflicted with "critical slowing down" near continuous phase transitions, and the efficiency of the Monte Carlo simulation goes to zero as the size of the lattice is increased. To combat critical slowing down, a very different type of collective-mode algorithm, in contrast to the traditional single-spin-flip mode, was proposed by Swendsen and Wang in 1987 for Potts spin models. Since then, there has been an explosion of work attempting to understand, improve, or generalize it. In these so-called "cluster" algorithms, clusters of spins are regarded as one template and are updated at each step of the Monte Carlo procedure. In implementing these algorithms the cluster labeling is a major time-consuming bottleneck and is also isomorphic to the problem of computing the connected components of an undirected graph seen in other application areas, such as pattern recognition. A number of cluster labeling algorithms for sequential computers have long existed. However, the dynamic irregular nature of clusters complicates the task of finding good parallel algorithms, and this is particularly true on SIMD (single-instruction-multiple-data) machines. Our design of the Hierarchical Cluster Labeling Algorithm aims at alleviating this problem by building a hierarchical structure on the problem domain and by incorporating local and nonlocal communication schemes. We present an estimate for the computational complexity of cluster labeling and prove the key features of this algorithm (such as lower computational complexity, data locality, and easy implementation) compared with the methods formerly known. In particular, this algorithm can be viewed as a generalized

  2. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
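
    A compact rendition of the reordering idea: form the Laplacian of the matrix's adjacency structure, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its components. The scrambled path graph and the simple envelope measure below are assumptions made for the demo.

      import numpy as np

      def spectral_ordering(A):
          # Laplacian of the graph of the symmetric sparsity pattern of A.
          adj = (A != 0).astype(float)
          np.fill_diagonal(adj, 0.0)
          L = np.diag(adj.sum(1)) - adj
          _, eigvecs = np.linalg.eigh(L)
          return np.argsort(eigvecs[:, 1])   # sort the Fiedler vector (connected graph)

      def envelope_size(A):
          # Sum over rows of the distance from the first nonzero to the diagonal.
          return sum(i - np.flatnonzero(A[i, :i + 1])[0] for i in range(A.shape[0]))

      # A path graph stored with a deliberately bad ordering.
      n = 40
      perm = np.random.default_rng(5).permutation(n)
      A = np.eye(n)
      for i in range(n - 1):
          A[perm[i], perm[i + 1]] = A[perm[i + 1], perm[i]] = 1.0

      p = spectral_ordering(A)
      B = A[np.ix_(p, p)]
      print(envelope_size(A), envelope_size(B))   # envelope shrinks to n - 1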

  3. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  4. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  5. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
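
    The spectral stage is easy to sketch. The fragment below applies a KLT across the bands of a synthetic four-band cube and prints the energy compaction that makes the subsequent zerotree coding effective; the wavelet/EZW stage itself is omitted, and the band correlations and image size are invented.

      import numpy as np

      def klt_spectral(cube):
          # Karhunen-Loeve transform across the spectral dimension: treat each
          # pixel's band vector as a sample, diagonalize the band covariance,
          # and rotate so spectral energy compacts into few components.
          h, w, b = cube.shape
          X = cube.reshape(-1, b).astype(float)
          mean = X.mean(0)
          eigvals, eigvecs = np.linalg.eigh(np.cov(X - mean, rowvar=False))
          basis = eigvecs[:, np.argsort(eigvals)[::-1]]   # strongest first
          return (X - mean) @ basis, basis, mean

      # Synthetic four-band patch with strongly correlated bands.
      rng = np.random.default_rng(6)
      base = rng.normal(size=(64, 64, 1))
      cube = base * np.array([1.0, 0.9, 0.8, 0.7]) + 0.05 * rng.normal(size=(64, 64, 4))

      coeffs, basis, mean = klt_spectral(cube)
      energy = (coeffs ** 2).mean(0)
      print(energy / energy.sum())   # nearly all energy in the first component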

  6. Coupled and decoupled algorithms for semiconductor simulation

    NASA Astrophysics Data System (ADS)

    Kerkhoven, T.

    1985-12-01

    Algorithms for the numerical simulation by computer of the steady-state behavior of MOSFETs are analyzed. The discretization and linearization of the nonlinear partial differential equations, as well as the solution of the linearized systems, are treated systematically. Thus we generate equations which do not exceed the floating point representations of modern computers and for which charge is conserved while appropriate maximum principles are preserved. A typical decoupling algorithm for the solution of the system of PDEs is analyzed as a fixed point mapping T. Bounds exist on the components of the solution and, for sufficiently regular boundary geometries, higher regularity of the derivatives as well. T is a contraction for sufficiently small variation of the boundary data. It therefore follows that under those conditions the decoupling algorithm converges to a unique fixed point, which is the weak solution to the system of PDEs in divergence form. A discrete algorithm which corresponds to a possible computer code is shown to converge if the discretization of the PDE preserves the regularity properties mentioned above. A stronger convergence result is obtained by employing the higher regularity to enforce the weak formulations of the PDE more strongly. The execution speeds of a modification of Newton's method, two versions of a decoupling approach, and a new mixed solution algorithm are compared for a range of problems. The asymptotic complexity of the solution of the linear systems is identical for these approaches in the context of sparse direct solvers if the ordering is done in an optimal way.

  7. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
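
    A minimal genetic algorithm in the sense described above, with binary tournament selection, one-point crossover and bit-flip mutation, maximizing the number of ones in a bit string; the population size, rates and toy fitness are arbitrary choices for illustration.

      import random

      def genetic_algorithm(fitness, n_bits=30, pop_size=40, generations=80,
                            p_mut=0.02, seed=7):
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          for _ in range(generations):
              def select():
                  a, b = rng.sample(pop, 2)        # binary tournament
                  return a if fitness(a) >= fitness(b) else b
              nxt = []
              while len(nxt) < pop_size:
                  p1, p2 = select(), select()
                  cut = rng.randrange(1, n_bits)   # one-point crossover
                  child = p1[:cut] + p2[cut:]
                  child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
                  nxt.append(child)
              pop = nxt
          return max(pop, key=fitness)

      best = genetic_algorithm(sum)   # "OneMax": fitness is the number of ones
      print(sum(best), best)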

  8. Component model reduction via the projection and assembly method

    NASA Technical Reports Server (NTRS)

    Bernard, Douglas E.

    1989-01-01

    The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced order component models to meet system level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full order system model, performing model reduction at the system level using system level requirements, and then projecting the desired modes onto the components for component level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.

  9. Study on the variable cycle engine modeling techniques based on the component method

    NASA Astrophysics Data System (ADS)

    Zhang, Lihua; Xue, Hui; Bao, Yuhai; Li, Jijun; Yan, Lan

    2016-01-01

    Based on the structural platform of the gas turbine engine, the components of a variable cycle engine were simulated by using the component method. The mathematical model of nonlinear equations corresponding to each component of the gas turbine engine was established. Based on Matlab programming, the nonlinear equations were solved by using the Newton-Raphson steady-state algorithm, and the performance of the engine components was calculated. The numerical simulation results showed that the model built can describe the basic performance of the gas turbine engine, which verifies the validity of the model.
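
    The solver at the heart of such component-matching models is the plain Newton-Raphson iteration. The sketch below applies it, with a forward-difference Jacobian, to a small stand-in system of nonlinear equations rather than the engine model itself.

      import numpy as np

      def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-7):
          # Undamped Newton-Raphson for a square nonlinear system f(x) = 0.
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              fx = f(x)
              if np.linalg.norm(fx) < tol:
                  break
              J = np.empty((x.size, x.size))
              for j in range(x.size):      # forward-difference Jacobian column
                  xp = x.copy()
                  xp[j] += h
                  J[:, j] = (f(xp) - fx) / h
              x = x - np.linalg.solve(J, fx)
          return x

      def residuals(v):
          # Toy stand-in for a component-matching residual vector.
          x, y = v
          return np.array([x ** 2 + y ** 2 - 4.0, np.exp(x) + y - 1.0])

      print(newton_raphson(residuals, [1.0, -1.0]))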

  10. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  11. A Hybrid Shortest Path Algorithm for Navigation System

    NASA Astrophysics Data System (ADS)

    Cho, Hsun-Jung; Lan, Chien-Lun

    2007-12-01

    Combined with Geographic Information Systems (GIS) and the Global Positioning System (GPS), vehicle navigation systems have become quite popular products in daily life. A key component of a navigation system is the shortest path algorithm. Navigation in the real world must face a network consisting of tens of thousands of nodes and links, and even more. Under the limited computation capability of vehicle navigation equipment, it is difficult to satisfy the real-time response requirement that users expect. Hence, this study focused on shortest path algorithms that enhance computation speed with a smaller memory requirement. Several well-known algorithms such as Dijkstra, A* and hierarchical concepts were integrated to build hybrid algorithms that reduce the search space and improve search speed. Numerical examples were conducted on the Taiwan highway network, which consists of more than four hundred thousand links and nearly three hundred thousand nodes. This real network was divided into two connected sub-networks (layers). The upper layer is constructed from freeways and expressways; the lower layer is constructed from local networks. Test origin-destination pairs were chosen randomly and divided into three distance categories: short, medium and long distances. The outcome is evaluated by actual length and travel time. The numerical example reveals that the hybrid algorithm proposed by this research can be tens of thousands of times faster than the traditional Dijkstra algorithm; the memory requirement of the hybrid algorithm is also much smaller than that of the traditional algorithm. This outcome shows that the proposed algorithm would have an advantage in vehicle navigation systems.
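
    The primitive on which such hybrid methods are layered is textbook Dijkstra with a binary heap; a minimal version on a made-up toy road graph follows.

      import heapq

      def dijkstra(graph, source, target):
          # graph: node -> list of (neighbor, edge weight).
          dist, prev = {source: 0.0}, {}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == target:
                  break
              if d > dist.get(u, float("inf")):
                  continue                 # stale heap entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          path, node = [], target
          while node != source:            # walk predecessors back to the source
              path.append(node)
              node = prev[node]
          return dist[target], [source] + path[::-1]

      roads = {"A": [("B", 2.0), ("C", 5.0)],
               "B": [("C", 1.0), ("D", 4.0)],
               "C": [("D", 1.0)]}
      print(dijkstra(roads, "A", "D"))     # (4.0, ['A', 'B', 'C', 'D'])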

  12. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  13. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  14. Quadrature component analysis of interferograms with random phase shifts

    NASA Astrophysics Data System (ADS)

    Xu, Jiancheng; Chen, Zhao

    2014-08-01

    Quadrature component analysis (QCA) is an effective method for analyzing the interferograms if the phase shifts are uniformly distributed in the [0, 2π] range. However, it is hard to meet this requirement in practical applications, so a parameter named the non-orthogonal degree (NOD) is proposed to indicate the degree when the phase shifts are not well distributed. We analyze the relation between the parameter of NOD and the accuracy of the QCA algorithm by numerical simulation. By using the parameter of NOD, the relation between the distribution of the phase shift and the accuracy of the QCA algorithm is obtained. The relation is discussed and verified by numerical simulations and experiments.

  15. Interpolation algorithms for machine tools

    SciTech Connect

    Burleson, R.R.

    1981-08-01

    There are three types of interpolation algorithms presently used in most numerical control systems: the digital differential analyzer, the pulse-rate multiplier, and the binary-rate multiplier. A method for higher order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
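
    The digital differential analyzer admits a short software sketch: each clock tick adds the per-axis command to an accumulator and emits a unit step on overflow, so both axes arrive at the target after a fixed number of ticks. The integer two-axis framing below is an assumption for illustration.

      def dda_line(x_target, y_target, ticks):
          # Emit (x, y) machine positions for a straight move, one per tick.
          acc_x = acc_y = x = y = 0
          moves = []
          for _ in range(ticks):
              acc_x += x_target
              acc_y += y_target
              step_x, acc_x = divmod(acc_x, ticks)   # step on accumulator overflow
              step_y, acc_y = divmod(acc_y, ticks)
              x += step_x
              y += step_y
              moves.append((x, y))
          return moves

      # Interpolate a straight move to (7, 3) in 8 ticks.
      print(dda_line(7, 3, 8))   # ends exactly at (7, 3)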

  16. Computational Algorithms for Device-Circuit Coupling

    SciTech Connect

    KEITER, ERIC R.; HUTCHINSON, SCOTT A.; HOEKSTRA, ROBERT J.; RANKIN, ERIC LAMONT; RUSSO, THOMAS V.; WATERS, LON J.

    2003-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.

  17. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  18. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  19. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm, because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
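
    A minimal kurtosis-based FastICA with deflation, applied to synthetic mixtures rather than sky maps, could look as follows; the fixed-point update uses the g(u) = u^3 contrast, and the sources and mixing matrix are invented.

      import numpy as np

      def fastica_deflation(X, n_components, iters=200, seed=8):
          rng = np.random.default_rng(seed)
          X = X - X.mean(axis=1, keepdims=True)
          d, E = np.linalg.eigh(np.cov(X))
          Z = (E / np.sqrt(d)).T @ X                  # whitened data
          W = np.zeros((n_components, Z.shape[0]))
          for i in range(n_components):
              w = rng.normal(size=Z.shape[0])
              w /= np.linalg.norm(w)
              for _ in range(iters):
                  # Fixed-point update for the kurtosis contrast g(u) = u^3.
                  w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
                  w_new -= W[:i].T @ (W[:i] @ w_new)  # deflation step
                  w_new /= np.linalg.norm(w_new)
                  done = abs(abs(w_new @ w) - 1.0) < 1e-10
                  w = w_new
                  if done:
                      break
              W[i] = w
          return W @ Z                                # estimated components

      # Mix three synthetic sources, one strongly non-Gaussian.
      rng = np.random.default_rng(9)
      S = np.vstack([np.sign(rng.normal(size=20000)),
                     rng.laplace(size=20000),
                     rng.uniform(-1, 1, size=20000)])
      A = rng.normal(size=(3, 3))
      est = fastica_deflation(A @ S, n_components=3)
      C = np.corrcoef(np.vstack([est, S]))[:3, 3:]
      print(np.round(np.abs(C), 2))                   # close to a permutation matrix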

  20. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  1. Methods of information theory and algorithmic complexity for network biology.

    PubMed

    Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper

    2016-03-01

    We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
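
    Two of the quantities mentioned, the entropy of the degree distribution and the emergence of the giant component in Erdös-Rényi graphs, can be reproduced in a few lines with networkx; the graph size and mean degrees below are arbitrary.

      import math
      import networkx as nx

      def degree_entropy(G):
          # Shannon entropy (in bits) of the empirical degree distribution.
          degrees = [d for _, d in G.degree()]
          n = len(degrees)
          probs = [degrees.count(k) / n for k in set(degrees)]
          return -sum(p * math.log2(p) for p in probs)

      # The largest connected component jumps once the mean degree c = n*p
      # crosses the critical value 1.
      n = 2000
      for c in (0.5, 1.0, 2.0):
          G = nx.gnp_random_graph(n, c / n, seed=10)
          giant = max(nx.connected_components(G), key=len)
          print(f"c={c}: giant component {len(giant) / n:.2f}, "
                f"degree entropy {degree_entropy(G):.2f}")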

  2. Cold-standby redundancy allocation problem with degrading components

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Xiong, Junlin; Xie, Min

    2015-11-01

    Components in cold-standby state are usually assumed to be as good as new when they are activated. However, even in a standby environment, the components will suffer from performance degradation. This article presents a study of a redundancy allocation problem (RAP) for cold-standby systems with degrading components. The objective of the RAP is to determine an optimal design configuration of components to maximize system reliability subject to system resource constraints (e.g. cost, weight). As in most cases, it is not possible to obtain a closed-form expression for this problem, and hence, an approximated objective function is presented. A genetic algorithm with dual mutation is developed to solve such a constrained optimization problem. Finally, a numerical example is given to illustrate the proposed solution methodology.

  3. Numerical nebulae

    NASA Astrophysics Data System (ADS)

    Rijkhorst, Erik-Jan

    2005-12-01

    The late stages of evolution of stars like our Sun are dominated by several episodes of violent mass loss. Space based observations of the resulting objects, known as Planetary Nebulae, show a bewildering array of highly symmetric shapes. The interplay between gasdynamics and radiative processes determines the morphological outcome of these objects, and numerical models for astrophysical gasdynamics have to incorporate these effects. This thesis presents new numerical techniques for carrying out high-resolution three-dimensional radiation hydrodynamical simulations. Such calculations require parallelization of computer codes, and the use of state-of-the-art supercomputer technology. Numerical models in the context of the shaping of Planetary Nebulae are presented, providing insight into their origin and fate.

  4. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  5. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  6. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  7. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  8. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
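
    For reference, the three rules named above take only a few lines of Python; here they are checked against the exact integral of sin over [0, pi], which is 2.

      import math

      def midpoint(f, a, b, n):
          h = (b - a) / n
          return h * sum(f(a + (i + 0.5) * h) for i in range(n))

      def trapezoidal(f, a, b, n):
          h = (b - a) / n
          return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

      def simpson(f, a, b, n):
          # n must be even; interior weights alternate 4, 2, 4, ...
          h = (b - a) / n
          return h / 3 * (f(a) + f(b)
                          + 4 * sum(f(a + i * h) for i in range(1, n, 2))
                          + 2 * sum(f(a + i * h) for i in range(2, n, 2)))

      for rule in (midpoint, trapezoidal, simpson):
          print(rule.__name__, rule(math.sin, 0.0, math.pi, 16))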

  9. COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE

    SciTech Connect

    Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu, Hao

    2011-08-10

    Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the

  10. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  11. Numerical construction of the Hill functions.

    NASA Technical Reports Server (NTRS)

    Segethova, J.

    1972-01-01

    As an aid in the numerical construction of Hill functions and their derivatives, an algorithm using local coordinates and an expansion in Legendre polynomials is proposed. The algorithm is shown to possess sufficient stability, and the orthogonality of the Legendre polynomials simplifies the computation when the Ritz-Galerkin technique is used.

  12. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performance of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on an ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter. PMID:21368935

  13. An algorithm for control volume analysis of cryogenic systems

    NASA Astrophysics Data System (ADS)

    Stanton, Michael B.

    1989-06-01

    This thesis presents an algorithm suitable for the numerical analysis of cryogenic refrigeration systems. Preliminary design of a cryogenic system commences with a number of decoupling assumptions with regard to the process variables of heat and work transfer (e.g., work input rate, heat loading rates) and state variables (pinch points, momentum losses). Making preliminary performance estimations minimizes the effect of component interactions, which is inconsistent with the intent of analysis. A more useful design and analysis tool is one in which no restrictions are applied to the system: interactions become absolutely coupled and governed by the equilibrium state variables. Such a model would require consideration of hardware specifications, performance data, and information with respect to the thermal environment. Model output would consist of the independent thermodynamic state variables, from which process variables and performance parameters may be computed. This model has a framework compatible with numerical solution on a digital computer, so that it may be interfaced with graphic symbology for user interaction. This algorithm approaches cryogenic problems in a highly coupled, state-dependent manner. The framework for this algorithm revolves around the revolutionary thermodynamic solution technique of Computer-Aided Thermodynamics (CAT). Fundamental differences exist between the Control Volume (CV) algorithm and CAT, which are discussed where appropriate.

  14. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. I - The dynamics of time discretization and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1991-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  15. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. Part 1: The ODE connection and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1990-01-01

    Spurious stable as well as unstable steady-state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical scheme are the determining factors. In addition, the occurrence of spurious steady states is not restricted to time steps beyond the linearized stability limit of the scheme; in many instances it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before applying these methods to practical computations. It is also important to change the traditional way of thinking and practice when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they were all assumed to lie beyond the linearized stability limit of the time step parameter delta t. As this study shows, bifurcations to and from spurious asymptotic solutions and transitions to computational instability are not only highly scheme-dependent and problem-dependent, but also dependent on the initial data and boundary conditions, and are not limited to time steps beyond the linearized stability limit.

  16. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  17. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    Processing the information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves on classic numerical algorithms in situations where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence, in the form of artificial neural networks whose design includes the topology of the connections between neurons, have become an important instrument for processing and modelling. This concept results from the integration of neural networks with parameter optimization methods, and makes it possible to avoid having to define the structure of a network arbitrarily. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
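
    To make the GMDH idea concrete — layers of simple polynomial units in which only the best-performing units survive — the following Python sketch builds one GMDH layer. It is a minimal illustration under simplifying assumptions (units are ranked by training error on the same data, whereas a full GMDH ranks them on a separate validation set; the function names are ours):

        import numpy as np
        from itertools import combinations

        def fit_pair(x1, x2, y):
            # Classic GMDH unit: quadratic polynomial in a pair of inputs,
            # y ~ a0 + a1*x1 + a2*x2 + a3*x1**2 + a4*x2**2 + a5*x1*x2.
            Z = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
            coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
            return Z @ coef

        def gmdh_layer(X, y, keep=4):
            # Fit a unit to every pair of inputs; keep the best units by MSE.
            # The survivors' outputs become the inputs of the next layer.
            units = []
            for i, j in combinations(range(X.shape[1]), 2):
                pred = fit_pair(X[:, i], X[:, j], y)
                units.append((np.mean((y - pred) ** 2), pred))
            units.sort(key=lambda u: u[0])
            return np.column_stack([p for _, p in units[:keep]])

    Stacking such layers until the error stops improving yields the self-organized network: the topology is a by-product of selection rather than an arbitrary prior choice.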

  18. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurately the estimated spectrum matches the actual spectrum. Applying the maximum-entropy algorithms to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included, along with some of the actual data and the graphs produced from it.
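
    The classical maximum-entropy estimator is Burg's recursion, which fits an autoregressive (AR) model by minimizing forward and backward prediction errors. The following Python sketch is a minimal re-creation of that idea (not the original FORTRAN 77 code):

        import numpy as np

        def burg_psd(x, order, nfft=1024):
            # Fit an AR(order) model with Burg's maximum-entropy recursion,
            # then evaluate the model PSD on an FFT frequency grid.
            F = np.asarray(x, dtype=float)   # forward prediction errors
            B = F.copy()                     # backward prediction errors
            a = np.array([1.0])              # AR polynomial, a[0] = 1
            E = np.dot(F, F) / len(F)        # prediction-error power
            for _ in range(order):
                k = -2.0 * np.dot(F[1:], B[:-1]) / (
                    np.dot(F[1:], F[1:]) + np.dot(B[:-1], B[:-1]))
                a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
                F, B = F[1:] + k * B[:-1], B[:-1] + k * F[1:]
                E *= 1.0 - k * k
            A = np.fft.rfft(a, nfft)
            return np.fft.rfftfreq(nfft), E / np.abs(A) ** 2

        # Resolution test: two sinusoids 0.01 cycles/sample apart, short record.
        n = np.arange(128)
        sig = np.sin(2 * np.pi * 0.20 * n) + np.sin(2 * np.pi * 0.21 * n)
        freqs, psd = burg_psd(sig + 0.01 * np.random.randn(128), order=24)

    The AR model order trades resolution against variance, which is exactly the tension among the three criteria listed above.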

  19. Force-Control Algorithm for Surface Sampling

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh

    2008-01-01

    A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.

  20. Numerical wave propagation in ImageJ.

    PubMed

    Piedrahita-Quintero, Pablo; Castañeda, Raul; Garcia-Sucerquia, Jorge

    2015-07-20

    An ImageJ plugin for numerical wave propagation is presented. The plugin provides ImageJ, the well-known software for image processing, with the capability of computing numerical wave propagation by means of the angular spectrum, Fresnel, and Fresnel-Bluestein algorithms. The plugin enables numerical wave propagation within the robust environment provided by the complete set of built-in image processing tools available in ImageJ, and can be used for teaching and research purposes. We illustrate its use to numerically recreate Poisson's spot and Babinet's principle, and in the numerical reconstruction of digitally recorded holograms of millimeter-sized objects and of pure-phase microscopic objects.
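
    The angular spectrum method the plugin implements amounts to multiplying the field's 2D Fourier transform by a free-space transfer function. A minimal NumPy sketch (ours, not the plugin's Java code) that can reproduce, e.g., Poisson's spot behind an opaque disk:

        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            # Propagate a complex field a distance z (all lengths in meters).
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
            return np.fft.ifft2(np.fft.fft2(field) * H)

        # Plane wave hitting an opaque disk: a bright on-axis spot (Poisson's
        # spot) appears in the propagated intensity.
        N, dx, lam = 512, 2e-6, 633e-9
        y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] * dx
        field = np.where(x ** 2 + y ** 2 < (100e-6) ** 2, 0.0, 1.0).astype(complex)
        intensity = np.abs(angular_spectrum(field, lam, dx, z=0.05)) ** 2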

  1. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
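
    For reference, the exact kernel PCA that such an approximation targets can be written in a few lines: form the Gram matrix, center it in feature space, and take the leading eigenpairs. This sketch (with an RBF kernel) shows that baseline; the paper's contribution is to approximate it via Gram-Schmidt orthonormalization instead of a full eigendecomposition:

        import numpy as np

        def kernel_pca(X, n_components, gamma=1.0):
            # RBF Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2).
            sq = np.sum(X ** 2, axis=1)
            K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
            # Center the kernel in feature space.
            n = len(X)
            J = np.ones((n, n)) / n
            Kc = K - J @ K - K @ J + J @ K @ J
            # Leading eigenpairs give the nonlinear principal components.
            vals, vecs = np.linalg.eigh(Kc)
            vals = vals[::-1][:n_components]
            vecs = vecs[:, ::-1][:, :n_components]
            # Projections of the training points onto the components.
            return vecs * np.sqrt(np.maximum(vals, 0.0))

    With a linear kernel this reduces to ordinary PCA, which is why benchmarking the linear case first is a natural sanity check.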

  2. Seismic-acoustic finite-difference wave propagation algorithm.

    SciTech Connect

    Preston, Leiph; Aldridge, David Franklin

    2010-10-01

    An efficient numerical algorithm for treating earth models composed of fluid and solid portions is obtained via straightforward modifications to a 3D time-domain finite-difference algorithm for simulating isotropic elastic wave propagation.

  3. Spurious Numerical Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1995-01-01

    Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.

  4. Ab initio two-component Ehrenfest dynamics

    SciTech Connect

    Ding, Feizhi; Goings, Joshua J.; Liu, Hongbin; Lingerfelt, David B.; Li, Xiaosong

    2015-09-21

    We present an ab initio two-component Ehrenfest-based mixed quantum/classical molecular dynamics method to describe the effect of nuclear motion on the electron spin dynamics (and vice versa) in molecular systems. The two-component time-dependent non-collinear density functional theory is used for the propagation of spin-polarized electrons while the nuclei are treated classically. We use a three-time-step algorithm for the numerical integration of the coupled equations of motion, namely, the velocity Verlet for nuclear motion, the nuclear-position-dependent midpoint Fock update, and the modified midpoint and unitary transformation method for electronic propagation. As a test case, the method is applied to the dissociation of H2 and O2. In contrast to conventional Ehrenfest dynamics, this two-component approach provides a first principles description of the dynamics of non-collinear (e.g., spin-frustrated) magnetic materials, as well as the proper description of spin-state crossover, spin-rotation, and spin-flip dynamics by relaxing the constraint on spin configuration. This method also holds potential for applications to spin transport in molecular or even nanoscale magnetic devices.

  5. Ab initio two-component Ehrenfest dynamics

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi; Goings, Joshua J.; Liu, Hongbin; Lingerfelt, David B.; Li, Xiaosong

    2015-09-01

    We present an ab initio two-component Ehrenfest-based mixed quantum/classical molecular dynamics method to describe the effect of nuclear motion on the electron spin dynamics (and vice versa) in molecular systems. The two-component time-dependent non-collinear density functional theory is used for the propagation of spin-polarized electrons while the nuclei are treated classically. We use a three-time-step algorithm for the numerical integration of the coupled equations of motion, namely, the velocity Verlet for nuclear motion, the nuclear-position-dependent midpoint Fock update, and the modified midpoint and unitary transformation method for electronic propagation. As a test case, the method is applied to the dissociation of H2 and O2. In contrast to conventional Ehrenfest dynamics, this two-component approach provides a first principles description of the dynamics of non-collinear (e.g., spin-frustrated) magnetic materials, as well as the proper description of spin-state crossover, spin-rotation, and spin-flip dynamics by relaxing the constraint on spin configuration. This method also holds potential for applications to spin transport in molecular or even nanoscale magnetic devices.

  6. Brain components

    MedlinePlus

    ... 3 major components of the brain are the cerebrum, cerebellum, and brain stem. The cerebrum is divided into left and right hemispheres, each ... gray matter) is the outside portion of the cerebrum and provides us with functions associated with conscious ...

  7. Multi-component Cahn-Hilliard system with different boundary conditions in complex domains

    NASA Astrophysics Data System (ADS)

    Li, Yibao; Choi, Jung-Il; Kim, Junseok

    2016-10-01

    We propose an efficient phase-field model for multi-component Cahn-Hilliard (CH) systems in complex domains. The original multi-component Cahn-Hilliard system with a fixed phase is modified in order to make it suitable for complex domains in the Cartesian grid, along with contact angle or no mass flow boundary conditions on the complex boundaries. The proposed method uses a practically unconditionally gradient stable nonlinear splitting numerical scheme. Further, a nonlinear full approximation storage multigrid algorithm is used for solving semi-implicit formulations of the multi-component CH system, incorporated with an adaptive mesh refinement technique. The robustness of the proposed method is validated through various numerical simulations including multi-phase separations via spinodal decomposition, equilibrium contact angle problems, and multi-phase flows with a background velocity field in complex domains.
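
    For intuition about the time discretization, here is one semi-implicit step of the scalar (two-phase) Cahn-Hilliard equation on a periodic square, with the stiff biharmonic term treated implicitly in Fourier space. This toy (our sketch) omits what the paper actually contributes — the multi-component system, complex domains, contact-angle boundary conditions and the multigrid solver — but shows the splitting idea:

        import numpy as np

        def cahn_hilliard_step(c, dt, eps=0.02):
            # Semi-implicit spectral step of c_t = lap(c**3 - c - eps**2 * lap c)
            # on the periodic unit square: implicit linear part, explicit nonlinearity.
            n = c.shape[0]
            k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
            k2 = k[:, None] ** 2 + k[None, :] ** 2
            c_hat = np.fft.fft2(c)
            nl_hat = np.fft.fft2(c ** 3 - c)
            c_hat = (c_hat - dt * k2 * nl_hat) / (1.0 + dt * eps ** 2 * k2 ** 2)
            return np.real(np.fft.ifft2(c_hat))

        # Spinodal decomposition from a small random perturbation about c = 0.
        rng = np.random.default_rng(0)
        c = 0.1 * rng.standard_normal((128, 128))
        for _ in range(500):
            c = cahn_hilliard_step(c, dt=1e-4)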

  8. Remote sensing image fusion method based on multiscale morphological component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jindong; Ni, Mengying; Zhang, Yanjie; Tong, Xiangrong; Zheng, Qiang; Liu, Jinglei

    2016-04-01

    A remote sensing image (RSI) fusion method based on multiscale morphological component analysis (m-MCA) is presented. Our contribution describes a new multiscale sparse image decomposition algorithm called m-MCA, which we apply to RSI fusion. Building on MCA, m-MCA combines curvelet transform bases and local discrete cosine transform bases to build a multiscale decomposition dictionary, and controls the entries of the dictionary to decompose the image into texture components and cartoon components with different scales. The effective scale texture component of high-resolution RSI and the cartoon component of multispectral RSI are selected to reconstruct the fusion image. Compared with state-of-the-art fusion methods, the proposed fusion method obtains higher spatial resolution and lower spectral distortion with reduced computation load in numerical experiments.

  9. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  10. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, can integrate vector integrands, and have very similar Fortran, C/C++, and Mathematica interfaces. Their nearly identical invocation makes it easy to cross-check results by substituting one method for another. As a further safeguard, the output is supplemented by a chi-square probability that quantifies the reliability of the error estimate.
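
    The cross-checking idea carries over to any integrator that reports an error estimate: two independent estimates should agree within their combined errors. A plain Monte Carlo illustration in Python with a batch-based error estimate — this mimics only the concept and is not the Cuba API:

        import numpy as np

        def mc_integrate(f, ndim, neval=100_000, nbatch=10, seed=0):
            # Plain Monte Carlo over the unit hypercube [0,1]^ndim; splitting the
            # samples into batches yields a standard error for the estimate.
            rng = np.random.default_rng(seed)
            size = neval // nbatch
            means = np.array([f(rng.random((size, ndim))).mean()
                              for _ in range(nbatch)])
            return means.mean(), means.std(ddof=1) / np.sqrt(nbatch)

        # Vectorized integrand: each row of x is one sample point.
        val, err = mc_integrate(lambda x: np.prod(np.sin(np.pi * x), axis=1), ndim=3)
        # Exact value is (2/pi)**3 ~ 0.258; |val - 0.258| should be of order err.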

  11. A vector-multiplication dominated parallel algorithm for the computation of real eigenvalue spectra

    NASA Astrophysics Data System (ADS)

    Clint, M.

    1982-06-01

    In order to exploit effectively the power of array and vector processors for the numerical solution of linear algebraic problems, it is desirable to express algorithms principally in terms of vector and matrix operations. Algorithms that manipulate vectors and matrices at the component level are best suited for execution on single-processor hardware; often, however, it is difficult, if not impossible, to construct efficient versions of such algorithms suitable for execution on parallel hardware. A method for computing the eigenvalues of real unsymmetric matrices with real eigenvalue spectra is presented. The method is an extension of the one described in ref. [1]. The algorithm makes heavy use of vector inner product evaluations, and the manipulation of individual components of vectors and matrices is kept to a minimum. Essentially, the method involves the construction of a sequence of biorthogonal transformation matrices whose combined effect is to diagonalise the matrix; the eigenvalues are the diagonal elements of the final diagonalised form. If the eigenvectors of the matrix are also required, the algorithm may be extended in a straightforward way. The effectiveness of the algorithm is demonstrated by applying a sequential version to several small matrices, and some comments are made about the time complexity of the parallel version.

  12. Semi-blind signal extraction for communication signals by combining independent component analysis and spatial constraints.

    PubMed

    Wang, Xiang; Huang, Zhitao; Zhou, Yiyu

    2012-01-01

    Signal of interest (SOI) extraction is a vital issue in communication signal processing. In this paper, we propose two novel iterative algorithms for extracting SOIs from instantaneous mixtures, which incorporate the spatial constraints corresponding to the Directions of Arrival (DOAs) of the SOIs as a priori information within the constrained Independent Component Analysis (cICA) framework. The first algorithm uses the spatial constraint to form a new constrained optimization problem under the previous cICA framework, which requires various user parameters, i.e., a Lagrange parameter and a threshold measuring the accuracy of the spatial constraint, while the second algorithm uses the spatial constraints to select a specific initialization of the extracting vectors. The major difference between the two is that the former incorporates the prior information into the learning process of the iterative algorithm, whereas the latter uses it only to select the initialization vector; no extra parameters are then necessary in the learning process, which makes the algorithm simpler and more reliable and helps to improve the speed of extraction. The convergence condition for the spatial constraints is also analyzed. Compared with conventional techniques such as MVDR, numerical simulation results demonstrate the effectiveness, robustness and higher performance of the proposed algorithms. PMID:23012531
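
    The second algorithm's idea — let the DOA information choose the starting point of an otherwise standard one-unit ICA iteration — can be sketched as follows. The fixed-point update shown is a generic FastICA-style rule (an assumption on our part; the paper's exact learning rule may differ), initialized from a steering-vector-derived extraction vector w0:

        import numpy as np

        def extract_soi(X, w0, max_iter=200, tol=1e-9):
            # X: (sensors, samples) instantaneous mixtures;
            # w0: DOA-informed initial extraction vector.
            X = X - X.mean(axis=1, keepdims=True)
            d, E = np.linalg.eigh(np.cov(X))     # whiten (assumes full-rank covariance)
            V = E @ np.diag(d ** -0.5) @ E.T
            Z = V @ X
            w = V @ w0
            w /= np.linalg.norm(w)
            for _ in range(max_iter):
                y = w @ Z
                # One-unit fixed-point update with nonlinearity g(u) = tanh(u).
                w_new = (Z * np.tanh(y)).mean(axis=1) - (1 - np.tanh(y) ** 2).mean() * w
                w_new /= np.linalg.norm(w_new)
                done = 1.0 - abs(w_new @ w) < tol
                w = w_new
                if done:
                    break
            return w @ Z    # the extracted signal of interest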

  13. Semi-Blind Signal Extraction for Communication Signals by Combining Independent Component Analysis and Spatial Constraints

    PubMed Central

    Wang, Xiang; Huang, Zhitao; Zhou, Yiyu

    2012-01-01

    Signal of interest (SOI) extraction is a vital issue in communication signal processing. In this paper, we propose two novel iterative algorithms for extracting SOIs from instantaneous mixtures, which incorporate the spatial constraints corresponding to the Directions of Arrival (DOAs) of the SOIs as a priori information within the constrained Independent Component Analysis (cICA) framework. The first algorithm uses the spatial constraint to form a new constrained optimization problem under the previous cICA framework, which requires various user parameters, i.e., a Lagrange parameter and a threshold measuring the accuracy of the spatial constraint, while the second algorithm uses the spatial constraints to select a specific initialization of the extracting vectors. The major difference between the two is that the former incorporates the prior information into the learning process of the iterative algorithm, whereas the latter uses it only to select the initialization vector; no extra parameters are then necessary in the learning process, which makes the algorithm simpler and more reliable and helps to improve the speed of extraction. The convergence condition for the spatial constraints is also analyzed. Compared with conventional techniques such as MVDR, numerical simulation results demonstrate the effectiveness, robustness and higher performance of the proposed algorithms. PMID:23012531

  14. New formulations of monotonically convergent quantum control algorithms

    NASA Astrophysics Data System (ADS)

    Maday, Yvon; Turinici, Gabriel

    2003-05-01

    Most numerical simulations in quantum (bilinear) control have used one of the monotonically convergent algorithms of Krotov (introduced by Tannor et al.) or of Zhu and Rabitz. However, until now no explicit relationship had been revealed between the two algorithms that would explain their common properties. Within this framework, we propose in this paper a unified formulation that comprises both algorithms and extends to a new class of monotonically convergent algorithms. Numerical results show that the newly derived algorithms behave as well as (and sometimes better than) the well-known algorithms cited above.

  15. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t subject to f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that partitions the functions into three categories: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.

  16. Scientific Software Component Technology

    SciTech Connect

    Kohn, S.; Dykman, N.; Kumfert, G.; Smolinski, B.

    2000-02-16

    We are developing new software component technology for high-performance parallel scientific computing to address issues of complexity, re-use, and interoperability for laboratory software. Component technology enables cross-project code re-use, reduces software development costs, and provides additional simulation capabilities for massively parallel laboratory application codes. The success of our approach will be measured by its impact on DOE mathematical and scientific software efforts. Thus, we are collaborating closely with library developers and application scientists in the Common Component Architecture forum, the Equation Solver Interface forum, and other DOE mathematical software groups to gather requirements, write and adopt a variety of design specifications, and develop demonstration projects to validate our approach. Numerical simulation is essential to the science mission at the laboratory. However, it is becoming increasingly difficult to manage the complexity of modern simulation software. Computational scientists develop complex, three-dimensional, massively parallel, full-physics simulations that require the integration of diverse software packages written by outside development teams. Currently, the integration of a new software package, such as a new linear solver library, can require several months of effort. Current industry component technologies such as CORBA, JavaBeans, and COM have all been used successfully in the business domain to reduce software development costs and increase software quality. However, these existing industry component infrastructures will not scale to support massively parallel applications in science and engineering. In particular, they do not address issues related to high-performance parallel computing on ASCI-class machines, such as fast in-process connections between components, language interoperability for scientific languages such as Fortran, parallel data redistribution between components, and massively

  17. Numerical simulation of direct-contact evaporation of a drop rising in a hot, less volatile immiscible liquid of higher density -- Possibilities and limits of the SOLA-VOF/CSF algorithm

    SciTech Connect

    Wohak, M.G.; Beer, H.

    1998-05-08

    A contribution toward the full numerical simulation of direct-contact evaporation of a drop rising in a hot, immiscible and less volatile liquid of higher density is presented. Based on a fixed-grid Eulerian description, the classical SOLA-VOF method is substantially extended to incorporate, for example, three incompressible fluids and liquid-vapor phase change. The thorough validation and assessment process covers several benchmark simulations, some of which are presented, documenting the multipurpose value of the new code. The direct-contact evaporation simulations reveal severe numerical problems that are closely related to the fixed-grid Euler formulation; as a consequence, the comparison to experiments has to be limited to the initial stage. Potential applications with several design variations can be found in waste heat recovery and reactor cooling. Furthermore, direct-contact evaporators may be used in geothermal power plants where the brines cannot be fed directly into a turbine, either because a high salt load would cause severe fouling and corrosion or because of a low steam fraction.

  18. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground-based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines, such as aerodynamics, structures, and heat transfer, with numerical zooming on component codes; zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering: improved tools to develop custom components, zooming to higher-fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline, system-level simulation tool for the full development life cycle.

  19. Derivative Free Gradient Projection Algorithms for Rotation

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2004-01-01

    A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…

  20. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.

  1. Hyperfrequency components

    NASA Astrophysics Data System (ADS)

    1994-09-01

    The document is a collection of 19 papers (11 on technologies, 8 on applications) by 26 authors and coauthors. Technological topics include: the evolution from conventional HEMTs to double-heterojunction and planar types of pseudomorphic HEMTs; MMIC R&D and production aspects for very-low-noise, low-power and very-low-noise, high-power applications; hyperfrequency CAD tools; parametric measurements of hyperfrequency components on plug-in cards for design and in-process testing; the design of Class B power amplifiers and millimetric-wave bigrid-transistor mixers, exemplifying the combined use of three major types of physical simulation in the electrical modeling of microwave components; FETs for power amplification at up to 110 GHz; and the production, characterization, and nonlinear applications of resonant tunnel diodes. Application topics include: the development of active modules for major European programs; tubes versus solid-state components in hyperfrequency applications; the status and potential of national and international cooperative R&D on MMICs and CAD of hyperfrequency circuitry; attainable performance levels in multifunction MMIC applications; the state of the art in MESFET power amplifiers (S, C, X, and Ku bands); the creation of a library of hyperfrequency functions composed of parametrizable reference cells or macrocells; and the design of a single-stage, low-noise W-band amplifier toward the development of a three-stage amplifier.

  2. Component separations.

    PubMed

    Heller, Lior; McNichols, Colton H; Ramirez, Oscar M

    2012-02-01

    Component separation is a technique used to provide adequate coverage for midline abdominal wall defects such as a large ventral hernia. This surgical technique is based on subcutaneous lateral dissection, fasciotomy lateral to the rectus abdominis muscle, and dissection in the plane between the external and internal oblique muscles, with medial advancement of the block that includes the rectus muscle and its fascia. This release allows medial advancement of the fascia and closure of midline defects up to 20 cm wide. Since its original description, the component separation technique has undergone multiple modifications with the ultimate goal of decreasing the morbidity associated with the traditional procedure. The extensive subcutaneous lateral dissection has been associated with ischemia of the midline skin edges, wound dehiscence, infection, and seroma. Although the current trend is to proceed with minimally invasive component separation and to reinforce the fascia with mesh, the basic principles of the technique as described by Ramirez et al in 1990 have not changed over the years. Surgeons who deal with the management of abdominal wall defects are highly encouraged to include this technique in their collection of treatment options.

  3. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi; Vanrosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.

  4. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi Henderson; Van Rosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two- and three-dimensional model problems are presented, together with a two level analysis explaining these results.

  5. An improved algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.

    1996-02-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In a recent paper published by the author a new algorithm was proposed for the transformation of geocentric to geodetic coordinates. The new algorithm was tested at the Visual Systems Laboratory at the Institute for Simulation and Training, the University of Central Florida, and found to be 30 percent faster than the best previously published algorithm. In this paper further improvements are made in terms of efficiency. For completeness and to make this paper more readable, it was decided to revise the previous paper and to publish it as a new report. The introduction describes the improvements in more detail.
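
    For concreteness, the classical fixed-point iteration that such papers improve upon looks like the sketch below (WGS-84 ellipsoid assumed). Its cost per call is dominated by the loop, whose trip count depends on the input point — precisely the "small but uncertain number of relatively expensive iterations" criticized above:

        import math

        # WGS-84 ellipsoid: semi-major axis, flattening, first eccentricity squared.
        A = 6378137.0
        F = 1.0 / 298.257223563
        E2 = F * (2.0 - F)

        def ecef_to_geodetic(x, y, z, tol=1e-12, max_iter=10):
            # Iterative geocentric (ECEF) -> geodetic conversion.
            # Not valid at the poles (p = 0), which need a special case.
            lon = math.atan2(y, x)
            p = math.hypot(x, y)
            lat = math.atan2(z, p * (1.0 - E2))        # initial guess
            for _ in range(max_iter):
                n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
                h = p / math.cos(lat) - n
                lat_new = math.atan2(z, p * (1.0 - E2 * n / (n + h)))
                if abs(lat_new - lat) < tol:
                    lat = lat_new
                    break
                lat = lat_new
            return math.degrees(lat), math.degrees(lon), h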

  6. Scientific Component Technology Initiative

    SciTech Connect

    Kohn, S; Bosl, B; Dahlgren, T; Kumfert, G; Smith, S

    2003-02-07

    The laboratory has invested a significant amount of resources towards the development of high-performance scientific simulation software, including numerical libraries, visualization, steering, software frameworks, and physics packages. Unfortunately, because this software was not designed for interoperability and re-use, it is often difficult to share these sophisticated software packages among applications due to differences in implementation language, programming style, or calling interfaces. This LDRD Strategic Initiative investigated and developed software component technology for high-performance parallel scientific computing to address problems of complexity, re-use, and interoperability for laboratory software. Component technology is an extension of scripting and object-oriented software development techniques that specifically focuses on the needs of software interoperability. Component approaches based on CORBA, COM, and Java technologies are widely used in industry; however, they do not support massively parallel applications in science and engineering. Our research focused on the unique requirements of scientific computing on ASCI-class machines, such as fast in-process connections among components, language interoperability for scientific languages, and data distribution support for massively parallel SPMD components.

  7. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  8. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  9. Numerical taxonomy on data: Experimental results

    SciTech Connect

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In earlier work, the first positive result for numerical taxonomy was presented: it was shown that if e is the distance to the closest tree metric under the L_inf norm, i.e., e = min_T [L_inf(T - D)], then it is possible to construct a tree T such that L_inf(T - D) <= 3e; that is, a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  10. Are light δ13C diamonds derived from preserved primordial heterogeneity or subducted organic carbon? Using numerical modelling of multi-component mass balanced mixing of stable isotopes

    NASA Astrophysics Data System (ADS)

    Mikhail, S.; Jones, A. P.; Robinson, S.; Milledge, H. J.; Verchovsky, A. B.

    2009-04-01

    During the subduction of oceanic crust, light volatile elements such as S, C and H are recycled into the upper mantle wedge via slab dehydration and partial melting of the oceanic lithosphere. This is evident in arc magmas, which have higher concentrations of SO2, CO2 and H2O than mid-ocean ridge basalts (Wallace, 2005). It is also calculated that 50% of the subducted carbon and >70% of the subducted sulphur are returned to the Earth's deep mantle (Wallace, 2005). This work tests the notion that subducted organic carbon is a possible source of growth medium for diamonds. Mantle materials display an interesting bimodality in carbon isotopes, with a large peak at the mean mantle value of ~ -5‰ and a smaller peak consistent with organic carbon at ~ -25‰ (Deines, 2001). The source of the bimodality remains unresolved, the main theories being subducted organic carbon, preserved primordial heterogeneity, and the existence of a HPHT fractionation process (for a review see Cartigny, 2005). To test the idea that such organic values of δ13C in diamond (ranging from -11 to -37‰) are derived from subducted organic carbon, it is essential to compare the δ13C values in diamond to other isotopic systems, such as the values for δ15N in diamond, as well as the values for δ34S and δ18O in associated syngenetic mineral inclusions. We have calculated the percentage of organic C-O-N-S in sediments, relative to mean mantle values for δ13C, δ15N, δ34S and δ18O, required to produce the observed isotopic ratios found in natural diamonds and syngenetic mineral inclusions. This was done by way of multi-component mass-balanced mixing of stable isotopes between sedimentary, organic and mantle materials of varying measured isotope compositions. References: Cartigny, P. 2005. Elements 1, 79-84; Deines, P. 2001. Earth Science Reviews 58, 247-278; Wallace, P.J. 2005. Journal of Volcanology and Geothermal Research 140, 217-240.

  11. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-09-01

    To improve on the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented algorithm possesses a large key space to resist illegal attacks, sensitive dependence on the initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.
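
    The scrambling stage has a simple classical analogue: integrate a chaotic system from a key-dependent initial condition, and use the sort order of the resulting sequence as a pixel permutation (invertible with the same key). The sketch below uses the 3D Chen system for brevity, whereas the paper uses a 4D hyper-chaotic variant followed by a quantum Fourier transform:

        import numpy as np

        def chen_sequence(n, x0=(0.1, 0.2, 0.3), a=35.0, b=3.0, c=28.0, dt=1e-3):
            # Euler integration of Chen's system; the x-trajectory is the key stream.
            x, y, z = x0
            seq = np.empty(n)
            for i in range(n):
                x, y, z = (x + dt * a * (y - x),
                           y + dt * ((c - a) * x - x * z + c * y),
                           z + dt * (x * y - b * z))
                seq[i] = x
            return seq

        def scramble(channel, key=(0.1, 0.2, 0.3)):
            # Permute pixels by the ordering of the chaotic sequence.
            flat = channel.ravel()
            perm = np.argsort(chen_sequence(flat.size, x0=key))
            return flat[perm].reshape(channel.shape), perm

        def unscramble(scrambled, perm):
            flat = np.empty_like(scrambled.ravel())
            flat[perm] = scrambled.ravel()
            return flat.reshape(scrambled.shape)

    Sensitive dependence on the initial condition is what makes the key space effective: after a short transient, a tiny change in the key yields an essentially unrelated permutation.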

  12. Propagation of numerical noise in particle-in-cell tracking

    NASA Astrophysics Data System (ADS)

    Kesting, Frederik; Franchetti, Giuliano

    2015-11-01

    Particle-in-cell (PIC) is the most widely used algorithm for self-consistent tracking of intense charged particle beams. It is based on depositing macroparticles on a grid and subsequently solving the Poisson equation on it. It is well known that PIC algorithms have intrinsic limitations, as they introduce numerical noise. Although not significant for short-term tracking, this becomes important in simulations of circular machines over millions of turns, as it may induce artificial diffusion of the beam. In this work, we present a model of the numerical noise induced by PIC algorithms and discuss its influence on particle dynamics. The combined effect of particle tracking and the noise created by PIC algorithms leads to correlated or decorrelated numerical noise. For decorrelated numerical noise we derive a scaling law for the simulation parameters, allowing an estimate of artificial emittance growth. Lastly, the effect of correlated numerical noise is discussed, and a mitigation strategy is proposed.

  13. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. Through various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into the design of our algorithms. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.

  14. Numerical vorticity creation based on impulse conservation.

    PubMed Central

    Summers, D M; Chorin, A J

    1996-01-01

    The problem of creating solenoidal vortex elements to satisfy no-slip boundary conditions in Lagrangian numerical vortex methods is solved through the use of impulse elements at walls and their subsequent conversion to vortex loops. The algorithm is not uniquely defined, due to the gauge freedom in the definition of impulse; the numerically optimal choice of gauge remains to be determined. Two different choices are discussed, and an application to flow past a sphere is sketched. PMID:11607636

  15. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  16. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions were examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and suggested receiver design values are surveyed. A Fortran implementation of the synchronization algorithm used in the simulation is also included.

  17. An immersed boundary method for imposing solid wall conditions in lattice Boltzmann solvers for single- and multi-component fluid flows

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Favier, Julien; D'Ortona, Umberto; Poncet, Sébastien

    2014-11-01

    In this work, we propose a coupled immersed boundary-lattice Boltzmann algorithm to solve single- and multi-component fluid flows in the presence of fixed or moving solid boundaries. The prescribed motion of the immersed boundaries is imposed by adding a body force term to the lattice Boltzmann model, obtained from the macroscopic fluid velocity interpolated at the Lagrangian solid points. Numerical validation test cases show that the proposed solver is second-order accurate. Furthermore, the Shan-Chen lattice Boltzmann model is applied to multi-component fluid flows, and a special focus is given to the treatment of different wetting properties of fixed walls. The capability of the new solver is finally evaluated by simulating a cluster of moving cilia in a two-component fluid flow.
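
    The coupling step hinges on a regularized delta function that moves quantities between the Eulerian grid and the Lagrangian boundary points: velocities are interpolated to the boundary, and the resulting body force is spread back with the same kernel. A sketch of the interpolation half with the standard 4-point Peskin kernel (unit grid spacing and periodic wrapping assumed; the paper's interpolation details may differ):

        import math
        import numpy as np

        def delta4(r):
            # Peskin's 4-point regularized delta function (grid spacing h = 1).
            r = abs(r)
            if r < 1.0:
                return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
            if r < 2.0:
                return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
            return 0.0

        def interpolate(u, pts):
            # u: 2D Eulerian field indexed as u[y, x]; pts: (k, 2) Lagrangian points.
            ny, nx = u.shape
            vals = np.zeros(len(pts))
            for k, (x, y) in enumerate(pts):
                i0, j0 = math.floor(x) - 1, math.floor(y) - 1
                for i in range(i0, i0 + 4):
                    for j in range(j0, j0 + 4):
                        vals[k] += u[j % ny, i % nx] * delta4(x - i) * delta4(y - j)
            return vals

    Spreading the Lagrangian force back to the grid uses the transpose of the same stencil, which keeps the coupling momentum-conserving.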

  18. Manufacturing complex silica aerogel target components

    SciTech Connect

    Defriend Obrey, Kimberly Ann; Day, Robert D; Espinoza, Brent F; Hatch, Doug; Patterson, Brian M; Feng, Shihai

    2008-01-01

    Aerogel is a material used in numerous components in High Energy Density Physics targets. In the past these components were molded into the proper shapes. Artifacts left in the parts from the molding process, such as contour irregularities from shrinkage and density gradients caused by the skin, have caused LANL to pursue machining as a way to make the components.

  19. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    NASA Astrophysics Data System (ADS)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem that hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements, implying that the proposed algorithm provides accuracy equal to that of the finger probe oximeter.

  20. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  1. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  2. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of the signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations; thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
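
    For reference, standard OMP follows the loop below; LPMP changes the selection step, matching a parametric Lorentzian line shape to the residual spectrum instead of picking a single dictionary atom:

        import numpy as np

        def omp(A, y, n_nonzero):
            # Orthogonal matching pursuit: greedy sparse approximation of y ~ A x.
            residual = y.astype(float)
            support = []
            for _ in range(n_nonzero):
                # Select the column most correlated with the current residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                # Refit all selected coefficients jointly (the "orthogonal" step).
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

    An exponentially decaying time-domain signal is a Lorentzian in frequency, so its representation in any fixed discrete basis is only approximately sparse — exactly the mismatch LPMP is designed to remove.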

  3. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions to a given problem. Data clustering algorithms have been used intensively for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied by using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, and SOM networks) have been used to cluster images, and their performances have been compared by using four clustering validation indexes. Experimental results show that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
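
    As a baseline reference for the classical side of the comparison, a plain k-means clustering of image pixels can be sketched as follows (pure NumPy, illustrative parameters).

```python
# Minimal k-means image clustering sketch: a classical baseline from the
# comparison, not one of the evolutionary methods.
import numpy as np

def kmeans_image(img, k, n_iter=20, seed=0):
    """img: (H, W, C) float array; returns an (H, W) label map."""
    pixels = img.reshape(-1, img.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)              # nearest cluster center
        for j in range(k):                     # recompute cluster centers
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(img.shape[:2])
```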

  4. Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?

    NASA Astrophysics Data System (ADS)

    Petković, Dušan

    The article compares two different approaches to the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to solve the ordering of join operations for large join queries, i.e. joins with more than a dozen join operations. A property of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs), as a data mining technique, have been shown to be promising for solving the ordering of join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that the use of a genetic algorithm is the better solution for optimization of large join queries, i.e., that such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.

  5. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  6. Numerical methods: Analytical benchmarking in transport theory

    SciTech Connect

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  7. Ten years of Nature Physics: Numerical models come of age

    NASA Astrophysics Data System (ADS)

    Gull, E.; Millis, A. J.

    2015-10-01

    When Nature Physics celebrated 20 years of high-temperature superconductors, numerical approaches were on the periphery. Since then, new ideas implemented in new algorithms are leading to new insights.

  8. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
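
    A hedged sketch of the binarization and comparison idea follows; the coastline orientation, the onshore test, and the mismatch statistic are illustrative assumptions rather than the CEM implementation itself.

```python
# Sketch of the binarization step (onshore/offshore wind from direction)
# and a per-cell disagreement count between forecast D and observations d.
import numpy as np

def binarize_onshore(wind_dir_deg, coast_normal_deg=90.0):
    """1 where the wind has an onshore component, 0 where offshore.
    coast_normal_deg is an assumed coastline orientation."""
    rel = np.deg2rad(wind_dir_deg - coast_normal_deg)
    return (np.cos(rel) > 0.0).astype(int)

def disagreement_map(D, d):
    """D, d: binary arrays of shape (n_times, ny, nx); mean mismatch per cell."""
    return np.mean(D != d, axis=0)
```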

  9. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms were designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.

  10. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  11. FBP Algorithms for Attenuated Fan-Beam Projections

    PubMed Central

    You, Jiangsheng; Zeng, Gengsheng L.; Liang, Zhengrong

    2005-01-01

    A filtered backprojection (FBP) reconstruction algorithm for attenuated fan-beam projections has been derived based on Novikov’s inversion formula. The derivation uses a common transformation between parallel-beam and fan-beam coordinates. The filtering is shift-invariant. Numerical evaluation of the FBP algorithm is presented as well. As a special application, we also present a shift-invariant FBP algorithm for fan-beam SPECT reconstruction with uniform attenuation compensation. Several other fan-beam reconstruction algorithms are also discussed. In the attenuation-free case, our algorithm reduces to the conventional fan-beam FBP reconstruction algorithm. PMID:16570111

  12. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    SciTech Connect

    LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulations presented here is SIMBAD, which can be run stand-alone or as part of the UAL (Unified Accelerator Libraries) package.

  13. Numerical recipes for mold filling simulation

    SciTech Connect

    Kothe, D.; Juric, D.; Lam, K.; Lally, B.

    1998-07-01

    Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques, must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.

  14. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at a fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to a unique solution. Because the obtained algorithm exhibits slow convergence, we further developed the proximity algorithm in a transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.

  15. An efficient algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.M.

    1995-09-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In this paper a well known rapidly convergent iterative approach is modified to eliminate intervening trigonometric function evaluations. A total error metric is defined that accounts for both angular and altitude errors. The initial guess is optimized to minimize the error for one iteration. The resulting algorithm yields transformations correct to one centimeter for altitudes out to one million kilometers. Due to the rapid convergence only one iteration is used and no stopping test is needed. This algorithm is discussed in the context of machines that have FPUs and legacy machines that utilize mathematical subroutine packages.
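
    For orientation, the classic single-iteration scheme of Bowring (1976), the kind of rapidly convergent approach the paper optimizes, can be sketched as below; WGS-84 constants are assumed, and this is not the paper's trig-free variant.

```python
# Classic one-iteration geocentric-to-geodetic conversion (Bowring 1976),
# shown as a stand-in for the optimized algorithm the paper develops.
import numpy as np

A = 6378137.0                 # WGS-84 semi-major axis (m)
F = 1.0 / 298.257223563
B = A * (1.0 - F)             # semi-minor axis
E2 = 1.0 - (B / A) ** 2       # first eccentricity squared
EP2 = (A / B) ** 2 - 1.0      # second eccentricity squared

def geocentric_to_geodetic(x, y, z):
    lon = np.arctan2(y, x)
    p = np.hypot(x, y)
    u = np.arctan2(z * A, p * B)                    # parametric latitude guess
    lat = np.arctan2(z + EP2 * B * np.sin(u) ** 3,
                     p - E2 * A * np.cos(u) ** 3)   # one Bowring iteration
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)    # prime vertical radius
    h = p / np.cos(lat) - n                          # breaks down at the poles
    return np.degrees(lat), np.degrees(lon), h
```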

  16. Detection of Component Failures for Smart Structure Control Systems

    NASA Astrophysics Data System (ADS)

    Okubo, Hiroshi

    Uncertainties in the dynamics model of a smart structure are often significant due to model errors caused by parameter identification errors and reduced-order modeling of the system. The design of a model-based Failure Detection and Isolation (FDI) system for smart structures therefore needs careful consideration of robustness with respect to such model uncertainties. In this paper, we propose a new method of robust fault detection that is insensitive to the disturbances caused by unknown modeling errors while remaining highly sensitive to component failures. The capability of the robust detection algorithm is examined for a sensor failure in a flexible smart beam control system. It is shown by numerical simulations that the proposed method suppresses the disturbances due to model errors and markedly improves the detection performance.

  17. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences from other techniques used in systems health management and philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, implementation results for power storage and power semiconductor components will demonstrate specific solution approaches to prognostics. The role of the constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning, will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.

  18. Relative performance of algorithms for autonomous satellite orbit determination

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Peters, J. G.; Schutz, B. E.

    1981-01-01

    Limited word size in contemporary microprocessors causes numerical problems in autonomous satellite navigation applications. Numerical error introduced in navigation computations performed on small-wordlength machines can cause divergence of sequential estimation algorithms. To ensure filter reliability, square root algorithms have been adopted in many applications. The optimal navigation algorithm requires a careful match of the estimation algorithm, dynamic model, and numerical integrator. In this investigation, the relationship of several square root filters and numerical integration methods is evaluated to determine their relative performance for satellite navigation applications. The numerical simulations are conducted using the Phase I GPS constellation to determine the orbit of a LANDSAT-D type satellite. The primary comparison is based on computation time and relative estimation accuracy.

  19. Grover's algorithm and the secant varieties

    NASA Astrophysics Data System (ADS)

    Holweck, Frédéric; Jaffali, Hamza; Nounouh, Ismaël

    2016-09-01

    In this paper we investigate the entanglement nature of quantum states generated by Grover's search algorithm by means of algebraic geometry. More precisely, we establish a link between the entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate our approach with a couple of examples investigated in detail.
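
    To make the states in question concrete, a minimal statevector simulation of Grover iterations (oracle phase flip plus diffusion about the mean) can be written in a few lines; this sketch is generic and not tied to the paper's algebraic-geometric analysis.

```python
# Tiny statevector simulation of Grover's search (n qubits, one marked item),
# recording the intermediate states whose entanglement one might analyze.
import numpy as np

def grover_states(n_qubits, marked, n_iters):
    dim = 2 ** n_qubits
    psi = np.full(dim, 1.0 / np.sqrt(dim))        # uniform superposition |s>
    states = [psi.copy()]
    for _ in range(n_iters):
        psi = psi.copy()
        psi[marked] *= -1.0                        # oracle: flip marked phase
        psi = 2.0 * psi.mean() - psi               # diffusion: 2|s><s| - I
        states.append(psi.copy())
    return states

# The marked amplitude peaks after roughly (pi/4) * sqrt(2**n_qubits) steps.
for s in grover_states(3, marked=5, n_iters=2):
    print(np.round(np.abs(s) ** 2, 3))
```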

  20. Algorithm for in-flight gyroscope calibration

    NASA Technical Reports Server (NTRS)

    Davenport, P. B.; Welter, G. L.

    1988-01-01

    An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.

  1. Supercomputers and biological sequence comparison algorithms.

    PubMed

    Core, N G; Edmiston, E W; Saltz, J H; Smith, R M

    1989-12-01

    Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
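
    A minimal serial version of the dynamic-programming kernel being parallelized is the Needleman-Wunsch global alignment score; the scoring parameters below are illustrative.

```python
# Generic dynamic-programming global alignment (Needleman-Wunsch) score,
# the kind of kernel parallelized in the paper; scoring is illustrative.
import numpy as np

def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    S = np.zeros((m + 1, n + 1), dtype=int)
    S[:, 0] = gap * np.arange(m + 1)               # leading gaps
    S[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = S[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S[i, j] = max(diag, S[i - 1, j] + gap, S[i, j - 1] + gap)
    return S[m, n]

print(nw_score("GATTACA", "GCATGCU"))
```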

  2. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students related to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.

  3. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive over relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, but still enjoy the greater rate of convergence of the SOR method. The program solves a general second order self adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
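
    The two-color special case is easy to sketch for the standard 5-point Poisson stencil, where the Red/Black ordering suffices (the report's quadratic triangular elements need six colors); grid size and relaxation factor below are illustrative.

```python
# Red-black SOR for the 5-point Poisson stencil: two colors suffice here,
# so all points of one color update independently, Jacobi-style.
import numpy as np

n, omega = 64, 1.8
u = np.zeros((n + 2, n + 2))                      # includes boundary
f = np.ones((n, n))
h2 = (1.0 / (n + 1)) ** 2
ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
for sweep in range(500):
    for color in (0, 1):
        mask = ((ii + jj) % 2 == color)           # same-color points are
        gs = 0.25 * (u[ii + 1, jj] + u[ii - 1, jj]          # decoupled
                     + u[ii, jj + 1] + u[ii, jj - 1] + h2 * f[ii - 1, jj - 1])
        u[ii[mask], jj[mask]] += omega * (gs[mask] - u[ii[mask], jj[mask]])
print("u at center:", u[n // 2 + 1, n // 2 + 1])
```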

  4. Evaluating numerical ODE/DAE methods, algorithms and software

    NASA Astrophysics Data System (ADS)

    Soderlind, Gustaf; Wang, Lina

    2006-01-01

    Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without it conclusions are questionable, invalid or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.

  5. Extremal polynomials and methods of optimization of numerical algorithms

    SciTech Connect

    Lebedev, V I

    2004-10-31

    Chebyshev-Markov-Bernstein-Szegő polynomials $C_n(x)$, extremal on $[-1,1]$ with weight functions $w(x) = (1+x)^{\alpha}(1-x)^{\beta}/\sqrt{S_l(x)}$, where $\alpha,\beta = 0, 1/2$ and $S_l(x) = \prod_{k=1}^{m}(1 - c_k T_{l_k}(x)) > 0$, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Radau types are obtained for integrals with weight $p(x) = w^2(x)(1-x^2)^{-1/2}$. The parameters of optimal Chebyshev iterative methods, reducing the error optimally by comparison with the initial error defined in another norm, are determined. For each stage of the Fedorenko-Bakhvalov method, iteration parameters are determined which take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods for the solution of equations containing compact operators are studied.

  6. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: the function value and the gradient value. Both methods possess the following good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
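
    A hedged sketch of a PRP-type method with the β_k ≥ 0 safeguard follows; the Armijo backtracking line search and the steepest-descent restart are common choices assumed here, not necessarily those of the paper.

```python
# Sketch of a PRP nonlinear conjugate gradient method with beta_k >= 0.
import numpy as np

def prp_cg(f, grad, x0, n_iter=200):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP with beta_k >= 0
        d = -g_new + beta * d
        if g_new @ d >= 0.0:                             # safeguard: restart
            d = -g_new                                   # with steepest descent
        x, g = x_new, g_new
    return x

rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
rosen_g = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                              200 * (v[1] - v[0]**2)])
print(prp_cg(rosen, rosen_g, np.array([-1.2, 1.0])))     # approaches (1, 1)
```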

  7. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Some illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  8. Numerical Simulation of a Solar Domestic Hot Water System

    NASA Astrophysics Data System (ADS)

    Mongibello, L.; Bianco, N.; Di Somma, M.; Graditi, G.; Naso, V.

    2014-11-01

    An innovative transient numerical model is presented for the simulation of a solar Domestic Hot Water (DHW) system. The solar collectors have been simulated by using a zero-dimensional analytical model. The temperature distributions in the heat transfer fluid and in the water inside the tank have been evaluated by one-dimensional models. The reversion elimination algorithm has been used to include the effects of natural convection among the water layers at different heights in the tank on the thermal stratification. A finite difference implicit scheme has been implemented to solve the energy conservation equation in the coil heat exchanger, and the energy conservation equation in the tank has been solved by using the finite difference Euler implicit scheme. The energy conservation equations for the solar DHW component models have been coupled by means of a home-made implicit algorithm. Results of the simulation performed using as input data the experimental values of the ambient temperature and solar irradiance on a summer day are presented and discussed.

  9. Clustering of Hadronic Showers with a Structural Algorithm

    SciTech Connect

    Charles, M.J.; /SLAC

    2005-12-13

    The internal structure of hadronic showers can be resolved in a high-granularity calorimeter. This structure is described in terms of simple components and an algorithm for reconstruction of hadronic clusters using these components is presented. Results from applying this algorithm to simulated hadronic Z-pole events in the SiD concept are discussed.

  10. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogenous, nonlinear, first-order, ordinary differential equations. In comparison with previous integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  11. An algorithm based on carrier squeezing interferometry for multi-beam phase extraction in Fizeau interferometer

    NASA Astrophysics Data System (ADS)

    Cheng, Jinlong; Gao, Zhishan; Wang, Kailiang; Yang, Zhongming; Wang, Shuai; Yuan, Qun

    2015-10-01

    Multiple-beam interference exists in the cavity of a Fizeau interferometer due to the high reflectivity of the test optics. Random phase-shift errors are generated by factors such as environmental vibration and air turbulence. Both effects cause phase-retrieval errors. We propose a non-iterative approach called the Carrier Squeezing Multi-beam Interferometry (CSMI) algorithm, based on the carrier squeezing interferometry (CSI) technique, to retrieve the phase distribution from multiple-beam interferograms with random phase-shift errors. The intensity of the multiple-beam interference is decomposed into a fundamental wave and high-order harmonics by using the Fourier series expansion. Multi-beam phase-shifting interferograms with a linear carrier are rearranged by row or column to fuse one frame of spatial-temporal fringes. The lobe of the fundamental component, related to the phase, and the lobes of the high-order harmonics and phase-shift errors are separated in the frequency domain, so the correct phase is extracted by filtering out the fundamental component. Suppression of the influence of high-order harmonic components, as well as of random phase-shift errors, is validated by numerical simulations. Experiments were also carried out using the proposed CSMI algorithm on a mirror with a high reflection coefficient, showing its advantage over normal phase-retrieval algorithms.
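
    The frequency-domain filtering step can be illustrated in one dimension: build a fringe with a linear carrier, keep only the fundamental lobe in the Fourier domain, and recover the phase. All signal parameters below are assumptions for illustration; the CSMI rearrangement of multi-beam interferograms is not reproduced.

```python
# 1D sketch of carrier-fringe demodulation: isolate the fundamental lobe
# in the Fourier domain and recover the (wrapped, then unwrapped) phase.
import numpy as np

n = 1024
x = np.arange(n)
phi = 2.0 * np.sin(2 * np.pi * x / n)              # "unknown" phase to recover
fc = 100                                           # carrier frequency (bins)
fringe = 1.0 + 0.8 * np.cos(2 * np.pi * fc * x / n + phi)

F = np.fft.fft(fringe)
H = np.zeros(n)
H[fc - 30:fc + 30] = 1.0                           # keep the fundamental lobe
analytic = np.fft.ifft(F * H)                      # ~0.4*exp(i(carrier + phi))
rec = np.unwrap(np.angle(analytic) - 2 * np.pi * fc * x / n)
err = np.max(np.abs((rec - rec.mean()) - (phi - phi.mean())))
print("max phase error (rad):", err)               # small, up to edge effects
```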

  12. Linac Alignment Algorithm: Analysis on 1-to-1 Steering

    SciTech Connect

    Sun, Yipeng; Adolphsen, Chris; /SLAC

    2011-08-19

    In a linear accelerator, it is important to achieve good alignment between all of its components (such as quadrupoles, RF cavities, and beam position monitors), in order to better preserve the beam quality during acceleration. After the survey of the main linac components, several beam-based alignment (BBA) techniques can be applied to further optimize the beam trajectory and calculate the corresponding steering magnet strengths. Among these techniques the most simple and straightforward one is the one-to-one (1-to-1) steering technique, which steers the beam from quad center to quad center and removes the betatron oscillation from quad focusing. For a future linear collider such as the International Linear Collider (ILC), the initial beam emittance is very small in the vertical plane (a flat beam with γε_y = 20-40 nm), which means the alignment requirement is very tight. In this note, we evaluate the emittance growth with the one-to-one correction algorithm employed, both analytically and numerically. The ILC main linac is then taken as an example to compare the vertical emittance growth after 1-to-1 steering, both from the analytical formulae and from multi-particle tracking simulation. It is demonstrated that the estimated emittance growth from the derived formulae agrees well with the results from numerical simulation, with and without acceleration, respectively.

  13. Numerical methods for portfolio selection with bounded constraints

    NASA Astrophysics Data System (ADS)

    Yin, G.; Jin, Hanqing; Jin, Zhuo

    2009-11-01

    This work develops an approximation procedure for portfolio selection with bounded constraints. Based on the Markov chain approximation techniques, numerical procedures are constructed for the utility optimization task. Under simple conditions, the convergence of the approximation sequences to the wealth process and the optimal utility function is established. Numerical examples are provided to illustrate the performance of the algorithms.

  14. Numerical simulation of steady supersonic flow. [spatial marching

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Steger, J. L.

    1981-01-01

    A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

  15. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  16. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  17. Mathematical and computer modeling of component surface shaping

    NASA Astrophysics Data System (ADS)

    Lyashkov, A.

    2016-04-01

    The process of shaping technical surfaces is an interaction between a tool (the shaping element) and a component (the formed element, or workpiece) in their relative movements. It was established that the main objects of formation are: (1) the discriminant of a family of surfaces formed by the movement of the shaping element relative to the workpiece; (2) an enveloping model of the real component surface obtained after machining, including transition curves and undercut lines; (3) a model of the cut-off layers obtained in the process of shaping. In modeling shaping objects there remain many insufficiently solved or unsolved issues that make up a single scientific problem: the qualitative shaping of the surface of the tool and then of the component surface produced by this tool. The improvement of known metal-cutting tools and the intensive development of systems for their computer-aided design require further improvement of the methods of shaping the mating surfaces. In this regard, an important role is played by the study of the processes of shaping technical surfaces using the positive aspects of analytical and numerical mathematical methods and the techniques associated with mathematical and computer modeling. The author poses and solves the problem of developing the mathematical, geometric, and algorithmic support for the computer-aided design of cutting tools, based on computer simulation of the surface shaping process.

  18. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.

  19. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.
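
    A toy sketch of the exhaustive approach (the very scaling bottleneck the new algorithms avoid) follows; it abstracts the SD to independent expected behaviors per component, which is a deliberate simplification, since a real SD encodes interconnections as logic.

```python
# Toy model-based diagnosis: test fault sets smallest-first, so the first
# hits are the minimal-cardinality diagnoses. Only viable for tiny systems.
from itertools import combinations

def consistent(faulty, observation, system):
    """system maps component -> expected behavior; healthy ones must match."""
    predicted = {c: (None if c in faulty else system[c]) for c in system}
    return all(observation[c] == predicted[c]
               for c in observation if predicted[c] is not None)

def minimal_diagnoses(system, observation):
    comps, found = list(system), []
    for size in range(len(comps) + 1):
        for cand in combinations(comps, size):
            if any(set(d) <= set(cand) for d in found):
                continue                  # skip supersets of known diagnoses
            if consistent(set(cand), observation, system):
                found.append(cand)
        if found:
            return found                  # all diagnoses of the minimal size
    return found

system = {"A": 1, "B": 0, "C": 1}         # expected outputs (hypothetical)
obs = {"A": 1, "B": 1, "C": 1}            # component B misbehaves
print(minimal_diagnoses(system, obs))     # -> [('B',)]
```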

  20. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.

  1. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  2. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  3. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1989-01-01

    Problems of onboard trajectory optimization and the synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  4. Parallel projected variable metric algorithms for unconstrained optimization

    NASA Technical Reports Server (NTRS)

    Freeman, T. L.

    1989-01-01

    The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and the possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, researchers can generalize Straeter's algorithm to a family of parallel projected variable metric algorithms which do not suffer the above drawbacks and which retain quadratic termination. Finally researchers consider the numerical performance of one member of the family on several standard example problems and illustrate how the choice of the displacement vectors affects the performance of the algorithm.

  5. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
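
    For contrast with the unified strategy, the conventional DE/rand/1/bin scheme that uDE generalizes can be sketched as follows; the population size and the control parameters F and CR are illustrative.

```python
# Classic DE/rand/1/bin sketch for context; the paper's uDE replaces the
# choice among such mutation strategies with one unified equation.
import numpy as np

def de_rand_1_bin(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = lo + rng.random((pop, dim)) * (hi - lo)
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3,
                                 replace=False)
            v = X[a] + F * (X[b] - X[c])                  # mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True                # binomial crossover
            u = np.clip(np.where(mask, v, X[i]), lo, hi)
            fu = f(u)
            if fu <= fx[i]:                               # greedy selection
                X[i], fx[i] = u, fu
    return X[np.argmin(fx)], fx.min()

sphere = lambda x: float(np.sum(x ** 2))
best, val = de_rand_1_bin(sphere, np.array([[-5.0, 5.0]] * 4))
print(best, val)
```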

  6. Analytical and numerical methods; advanced computer concepts

    SciTech Connect

    Lax, P D

    1991-03-01

    This past year, two projects were completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations was parallelized, and numerical studies using two different shared-memory machines were carried out. My current effort is directed towards the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy LeVeque at the University of Washington in Seattle.

  7. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  8. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  9. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  10. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy according to the DNA computing algorithm.

  11. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if they occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any setting where peaks are important for diagnostic purposes.
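
    The core idea, smoothing everywhere except near detected peaks, can be sketched as below; the peak detector, guard band, and window length are illustrative assumptions, not the published PRASMMA parameters.

```python
# Sketch of the core idea: moving-average smoothing everywhere except in a
# guard band around detected peaks, so QRS complexes are not flattened.
import numpy as np

def peak_skipping_moving_average(sig, win=9, guard=12, thresh_k=2.5):
    sig = np.asarray(sig, dtype=float)
    dev = np.abs(sig - np.median(sig))
    peaks = np.flatnonzero(dev > thresh_k * np.median(dev))  # crude QRS marks
    protect = np.zeros(len(sig), dtype=bool)
    for p in peaks:                            # guard band around each peak
        protect[max(0, p - guard):p + guard + 1] = True
    kernel = np.ones(win) / win
    smooth = np.convolve(sig, kernel, mode="same")
    return np.where(protect, sig, smooth)      # keep raw samples near peaks
```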

  12. A new SPECT reconstruction algorithm based on the Novikov explicit inversion formula

    NASA Astrophysics Data System (ADS)

    Kunyansky, Leonid A.

    2001-04-01

    We present a new reconstruction algorithm for single-photon emission computed tomography. The algorithm is based on the Novikov explicit inversion formula for the attenuated Radon transform with non-uniform attenuation. Our reconstruction technique can be viewed as a generalization of both the filtered backprojection algorithm and the Tretiak-Metz algorithm. We test the performance of the present algorithm in a variety of numerical experiments. Our numerical examples show that the algorithm is capable of accurate image reconstruction even in the case of strongly non-uniform attenuation coefficient, similar to that occurring in a human thorax.

  13. Numerical linear algebra in data mining

    NASA Astrophysics Data System (ADS)

    Eldén, Lars

    Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
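
    The rank-reduction step at the heart of these applications is the truncated SVD, which by the Eckart-Young theorem gives the best rank-k approximation in the 2-norm; a minimal sketch:

```python
# Rank-k approximation via the SVD: the core rank-reduction step used in
# text mining (latent semantic indexing) and related tasks.
import numpy as np

def rank_k_approx(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]        # best rank-k fit in 2-norm

A = np.random.default_rng(0).random((8, 5))
for k in (1, 2, 3):
    # the 2-norm error equals the (k+1)-th singular value of A
    print(k, np.linalg.norm(A - rank_k_approx(A, k), 2))
```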

  14. Numerical linear algebra for reconstruction inverse problems

    NASA Astrophysics Data System (ADS)

    Nachaoui, Abdeljalil

    2004-01-01

    Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra which are relevant to the solution of this class of inverse problems. We motivate the use of our reconstruction algorithm, discuss its implementation, and mention the use of preconditioned Krylov methods.

  15. Numerical simulation of in situ bioremediation

    SciTech Connect

    Travis, B.J.

    1998-12-31

    Models that couple subsurface flow and transport with microbial processes are an important tool for assessing the effectiveness of bioremediation in field applications. A numerical algorithm is described that differs from previous in situ bioremediation models in that it includes: both vadose and groundwater zones, unsteady air and water flow, limited nutrients and airborne nutrients, toxicity, cometabolic kinetics, kinetic sorption, subgridscale averaging, pore clogging and protozoan grazing.

  16. Numerical simulation of droplet impact on interfaces

    NASA Astrophysics Data System (ADS)

    Kahouadji, Lyes; Che, Zhizhao; Matar, Omar; Shin, Seungwon; Chergui, Jalel; Juric, Damir

    2015-11-01

    Simulations of three-dimensional droplet impact on interfaces are carried out using BLUE, a massively-parallel code based on a hybrid Front-Tracking/Level-Set algorithm for Lagrangian tracking of arbitrarily deformable phase interfaces. High resolution numerical results show fine details and features of droplet ejection, crown formation and rim instability observed under similar experimental conditions. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  17. Numerical Simulations of Ion Cloud Dynamics

    NASA Astrophysics Data System (ADS)

    Sillitoe, Nicolas; Hilico, Laurent

    We explain how to perform accurate numerical simulations of ion cloud dynamics by discussing the relevant orders of magnitude of the characteristic times and frequencies involved in the problem, and the computational requirements with respect to ion cloud size. We then discuss integration algorithms and the parallelization of the Coulomb force computation. We finally explain how to take into account collisions, cooling-laser interaction and chemical reactions in a Monte Carlo approach, and discuss how to use random number generators to that end.

  18. Numerical Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Oyafuso, Fabiano; Bowen, R. Chris; Boykin, Timothy

    2003-01-01

    Nanoelectronic Modeling 3-D (NEMO 3-D) is a computer program for numerical modeling of the electronic structure properties of a semiconductor device that is embodied in a crystal containing as many as 16 million atoms in an arbitrary configuration and that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. The system of electrons in the device is represented by a sparse Hamiltonian matrix that contains hundreds of millions of terms. NEMO 3-D solves the matrix equation on a Beowulf-class cluster computer, by use of a parallel-processing matrix-vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. In a recent update of NEMO 3-D, a new strain treatment, parameterized for bulk material properties of GaAs and InAs, was developed for two tight-binding submodels. The utility of NEMO 3-D was demonstrated in an atomistic analysis of the effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In0.6Ga0.4As quantum dots.
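
    The eigenvalue machinery can be sketched compactly. Below is a plain Lanczos iteration (no reorthogonalization) that approximates extremal eigenvalues of a sparse symmetric matrix using only matrix-vector products, the same building block NEMO 3-D parallelizes; the random test matrix is illustrative:

        import numpy as np
        import scipy.sparse as sp

        def lanczos(H, k):
            """k-step Lanczos tridiagonalization of a sparse symmetric H.
            Returns Ritz values; the extremal ones approximate eigenvalues of H."""
            n = H.shape[0]
            v = np.random.rand(n); v /= np.linalg.norm(v)
            v_prev, b = np.zeros(n), 0.0
            alpha, beta = [], []
            for _ in range(k):
                w = H @ v - b * v_prev          # the only contact with H
                a = v @ w
                w -= a * v
                b = np.linalg.norm(w)
                alpha.append(a); beta.append(b)
                v_prev, v = v, w / b
            T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
            return np.linalg.eigvalsh(T)

        A = sp.random(2000, 2000, density=1e-3, format='csr')
        H = A + A.T                              # symmetric sparse test matrix
        print(lanczos(H, 50)[-5:])               # largest Ritz values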

  19. OpenAD: algorithm implementation user guide.

    SciTech Connect

    Utke, J.

    2004-05-13

    Research in automatic differentiation has led to a number of tools that implement various approaches and algorithms for the most important programming languages. While all these tools have the same mathematical underpinnings, the actual implementations have little in common and mostly are specialized for a particular programming language, compiler internal representation, or purpose. This specialization does not promote an open test bed for experimentation with new algorithms that arise from exploiting structural properties of numerical codes in a source transformation context. OpenAD is being designed to fill this need by providing a framework that allows for relative ease in the implementation of algorithms that operate on a representation of the numerical kernel of a program. Language independence is achieved by using an intermediate XML format and the abstraction of common compiler analyses in Open-Analysis. The intermediate format is mapped to concrete programming languages via two front/back end combinations. The design allows for reuse and combination of already implemented algorithms. We describe the set of algorithms and basic functionality currently implemented in OpenAD and explain the necessary steps to add a new algorithm to the framework.
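
    OpenAD itself performs source transformation, but the mathematical underpinning shared by all automatic differentiation tools can be sketched with a few lines of operator overloading. The following minimal forward-mode example (our own illustration, not OpenAD code) propagates derivatives through a computation with dual numbers:

        import math

        class Dual:
            """Forward-mode AD: val carries f(x), dot carries f'(x)."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val*o.val, self.dot*o.val + self.val*o.dot)
            __rmul__ = __mul__

        def sin(x):
            return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

        x = Dual(1.0, 1.0)                  # seed dx/dx = 1
        f = x * sin(x) + 2 * x
        print(f.val, f.dot)                 # f(1) and f'(1) = sin(1) + cos(1) + 2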

  20. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With the overlay measurement error budget reduced to the sub-nm range, conventional Total Measurement Uncertainty (TMU) data are no longer sufficient, and there is no adequate criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Quantifying uncertainty in overlay measurement nevertheless remains the most difficult task in overlay metrology. According to the ITRS roadmap, the total overlay budget becomes tighter with each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: scanner overlay performance, wafer process, metrology, and mask registration. All components have supplied sufficient tool performance at each device node, delivering new scanners, new metrology tools, and new mask e-beam writers. In particular, scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, and appears to have reached its limit beyond the 3x node. The wafer process overlay has therefore become a more important contribution to the total wafer overlay; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, we develop an analytical algorithm for overlay accuracy and propose a concept for a non-destructive method. For on-product layers, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error. Furthermore

  1. Algorithms for computing the multivariable stability margin

    NASA Technical Reports Server (NTRS)

    Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.

    1989-01-01

    Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms based on non-differentiable optimization theory. These algorithms have been developed for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.

  2. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-02-28

    We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  3. Data-parallel algorithms for image computing

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.

    1990-11-01

    Data-parallel algorithms for image computing on the Connection Machine are described. After a brief review of some basic programming concepts in *Lisp, a parallel extension of Common Lisp, data-parallel programming paradigms based on a local (diffusion-like) model of computation, the scan model of computation, a general interprocessor communications model, and a region-based model are introduced. Algorithms for connected component labeling, distance transformation, Voronoi diagrams, finding minimum cost paths, local means, shape-from-shading, hidden surface calculations, affine transformation, oblique parallel projection, and spatial operations over regions are presented. A new algorithm for interpolating irregularly spaced data via Voronoi diagrams is also described.
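
    The local (diffusion-like) model of computation is easy to demonstrate: the connected component labeling listed above can be written as an iterative minimum-label propagation in which every pixel simultaneously adopts the smallest label in its 4-neighbourhood. A numpy sketch of the paradigm (our own illustration, not the Connection Machine code):

        import numpy as np

        def label_components(img):
            """Label 4-connected foreground components by min-label diffusion."""
            labels = np.where(img > 0, np.arange(img.size).reshape(img.shape) + 1, 0)
            while True:
                p = np.pad(labels, 1)
                nbrs = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
                nbr_min = np.where(nbrs > 0, nbrs, np.inf).min(axis=0)
                new = np.where((labels > 0) & (nbr_min < labels), nbr_min, labels)
                if np.array_equal(new, labels):
                    return labels.astype(int)
                labels = new

        img = np.array([[1, 1, 0],
                        [0, 0, 1],
                        [1, 0, 1]])
        print(label_components(img))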

  4. Asynchronous Event-Driven Particle Algorithms

    SciTech Connect

    Donev, A

    2007-08-30

    We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.

  5. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  6. Numerical studies of the nonlinear properties of composites

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Stroud, D.

    1994-01-01

    Using both numerical and analytical techniques, we investigate various ways to enhance the cubic nonlinear susceptibility χ_e of a composite material. We start from the exact relation χ_e = Σ_i p_i χ_i ⟨(E·E)²⟩_{i,lin} / E_0⁴, where χ_i and p_i are the cubic nonlinear susceptibility and volume fraction of the ith component, E_0 is the applied electric field, and ⟨(E·E)²⟩_{i,lin} is the expectation value of (E·E)² in the ith component, calculated in the linear limit where χ_i = 0. In our numerical work, we represent the composite by a random resistor or impedance network, calculating the electric-field distributions by a generalized transfer-matrix algorithm. Under certain conditions, we find that χ_e is greatly enhanced near the percolation threshold. We also find a large enhancement for a linear fractal in a nonlinear host. In a random Drude metal-insulator composite, χ_e is hugely enhanced, especially near frequencies which correspond to the surface-plasmon resonance spectrum of the composite. At zero frequency, the random composite results are reasonably well described by a nonlinear effective-medium approximation. The finite-frequency enhancement shows very strong reproducible structure which is nearly undetectable in the linear response of the composite, and which may possibly be described by a generalized nonlinear effective-medium approximation. The fractal results agree qualitatively with a nonlinear differential effective-medium approximation. Finally, we consider a suspension of coated spheres embedded in a host. If the coating is nonlinear, we show that χ_e/χ_coat >> 1 near the surface-plasmon resonance frequency of the core particle.

  7. User's guide for the frequency domain algorithms in the LIFE2 fatigue analysis code

    SciTech Connect

    Sutherland, H.J.; Linker, R.L.

    1993-10-01

    The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. The numerical formulation of the code uses a series of cycle count matrices to describe the cyclic stress states imposed upon the turbine. However, many structural analysis techniques yield frequency-domain stress spectra and a large body of experimental loads (stress) data is reported in the frequency domain. To permit the analysis of this class of data, a Fourier analysis is used to transform a frequency-domain spectrum to an equivalent time series suitable for rainflow counting by other modules in the code. This paper describes the algorithms incorporated into the code and their numerical implementation. Example problems are used to illustrate typical inputs and outputs.
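
    The frequency-to-time transformation can be sketched as follows: assign a random phase to each bin of a one-sided stress power spectral density and inverse-transform, yielding a time series suitable for rainflow counting. This is a generic spectral-representation stand-in for the LIFE2 module, with an illustrative narrow-band PSD (DC and Nyquist handling omitted for brevity):

        import numpy as np

        def psd_to_time_series(psd, df, seed=0):
            """Synthesize a time series whose spectrum matches a one-sided PSD."""
            rng = np.random.default_rng(seed)
            n = len(psd)
            amp = np.sqrt(2.0 * psd * df)              # cosine amplitude per bin
            spec = amp * np.exp(1j * rng.uniform(0, 2*np.pi, n))
            return np.fft.irfft(spec, n=2*(n - 1)) * (n - 1)

        f = np.linspace(0.0, 10.0, 513)                # Hz
        psd = np.exp(-((f - 1.0) / 0.1)**2)            # narrow-band stress PSD
        x = psd_to_time_series(psd, f[1] - f[0])       # ready for rainflow counting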

  8. Generalization of the FDTD algorithm for simulations of hydrodynamic nonlinear Drude model

    SciTech Connect

    Liu Jinjie; Brio, Moysey; Zeng Yong; Zakharian, Armis R.; Hoyer, Walter; Koch, Stephan W.; Moloney, Jerome V.

    2010-08-20

    In this paper we present a numerical method for solving a three-dimensional cold-plasma system that describes electron gas dynamics driven by an external electromagnetic wave excitation. The nonlinear Drude dispersion model is derived from the cold-plasma fluid equations and is coupled to Maxwell's field equations. The Finite-Difference Time-Domain (FDTD) method is applied for solving Maxwell's equations, in conjunction with a time-split semi-implicit numerical method for the nonlinear dispersion and a physics-based treatment of the discontinuity of the electric field component normal to the dielectric-metal interface. The application of the proposed algorithm is illustrated by modeling light pulse propagation and second-harmonic generation (SHG) in metallic metamaterials (MMs), showing good agreement between computed and published experimental results.

  9. Simple algorithm for computing the geometric measure of entanglement

    SciTech Connect

    Streltsov, Alexander; Kampermann, Hermann; Bruss, Dagmar

    2011-08-15

    We present an easily implementable algorithm for approximating the geometric measure of entanglement from above. The algorithm can be applied to any multipartite mixed state. It involves only the solution of an eigenproblem and the computation of a singular value decomposition; no further numerical techniques are needed. To provide examples, the algorithm was applied to the isotropic states of three qubits and to the three-qubit XX model with an external magnetic field.

  10. Guidance algorithms for a free-flying space robot

    NASA Technical Reports Server (NTRS)

    Brindle, A. F.; Viggh, H. E. M.; Albert, J. H.

    1989-01-01

    Robotics is a promising technology for assembly, servicing, and maintenance of platforms in space. Several aspects of planning and guidance for telesupervised and fully autonomous robotic servicers are investigated. Guidance algorithms for proximity operation of a free flyer are described. Numeric trajectory optimization is combined with artificial-intelligence-based obstacle avoidance. An initial algorithm and the results of simulating a platform servicing scenario with it are discussed. A second algorithm experiment is then proposed.

  11. Cumulative Reconstructor: fast wavefront reconstruction algorithm for Extremely Large Telescopes.

    PubMed

    Rosensteiner, Matthias

    2011-10-01

    The Cumulative Reconstructor (CuRe) is a new direct reconstructor for an optical wavefront from Shack-Hartmann wavefront sensor measurements. In this paper, the algorithm is adapted to realistic telescope geometries and the transition from modified Hudgin to Fried geometry is discussed. After a discussion of the noise propagation, we analyze the complexity of the algorithm. Our numerical tests confirm that the algorithm is very fast and accurate and can therefore be used for adaptive optics systems of Extremely Large Telescopes.
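
    The core idea of CuRe is direct integration of the slopes by cumulative sums. A one-dimensional sketch (the real algorithm works on lines of a two-dimensional Shack-Hartmann grid and combines the line reconstructions):

        import numpy as np

        def cure_1d(slopes, d):
            """Integrate slope measurements by a running sum; remove unsensed piston."""
            phase = np.concatenate([[0.0], np.cumsum(slopes) * d])
            return phase - phase.mean()

        x = np.linspace(0.0, 1.0, 65)
        w = np.sin(2*np.pi*x)                        # wavefront to recover
        slopes = np.diff(w) / (x[1] - x[0])          # sensor measures finite differences
        rec = cure_1d(slopes, x[1] - x[0])
        print(np.abs(rec - (w - w.mean())).max())    # ~ machine precision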

  12. Three-Dimensional SIP Imaging of Rock Core Sample: Numerical Examples

    NASA Astrophysics Data System (ADS)

    Son, J.; Kim, J.; Yi, M.

    2007-12-01

    The SIP (spectral IP) method is known as the complex resistivity method because it measures and uses both magnitude and phase. The SIP method had mainly been used in the field of mineral exploration, but it has recently been extended to environmental problems, because the real and imaginary components of the interpreted complex resistivity are related to the hydraulic properties of the subsurface. In this study, we used the SIP method to monitor changes in physical properties during injection of CO2 gas into a rock sample in laboratory experiments. For this purpose, we developed a three-dimensional SIP modeling and inversion algorithm based on complex resistivity. We chose the FEM (finite element method) for the modeling algorithm, and we deformed a rectangular grid into a cylindrical shape to build a cylinder model resembling core samples. To verify the SIP modeling algorithm, we applied it to a simple isolated block in a homogeneous half-space and compared the results with those from a three-dimensional integral equation method; the results from the two methods match quite well. To verify the inversion algorithm, we applied it to the simple isolated-body earth model and compared the inversion result with the true model. The inverted result shows smoother distributions of conductivity and phase than the true model due to the smoothness constraints necessary for the stability of the inversion. Although the conductivity and phase values are somewhat underestimated relative to the true values and their distribution is smoother than in the given model, we can clearly see the location of the conductive anomaly. These results confirm the validity of the developed inversion algorithm. After finishing the verification, we applied the developed algorithm to imaging of a rock core model. The core model has a conductive and reactive anomalous body at its center. We simulate the SIP survey using 16 electrodes on the surface of the model, and then

  13. Numerical and Analytic Studies of Random-Walk Models.

    NASA Astrophysics Data System (ADS)

    Li, Bin

    We begin by recapitulating the universality approach to problems associated with critical systems, and discussing the role that random-walk models play in the study of phase transitions and critical phenomena. As our first numerical simulation project, we perform high-precision Monte Carlo calculations for the exponents of the intersection probability of pairs and triplets of ordinary random walks in 2 dimensions, in order to test the predictions of conformal-invariance theory. Our numerical results strongly support the theory. Our second numerical project aims to test the hyperscaling relation dν = 2Δ₄ − γ for self-avoiding walks in 2 and 3 dimensions. We apply the pivot method to generate pairs of self-avoiding walks, and then for each pair, using the Karp-Luby algorithm, perform an inner-loop Monte Carlo calculation of the number of distinct translates of one walk that make at least one intersection with the other. Applying a least-squares fit to estimate the exponents, we have obtained strong numerical evidence that the hyperscaling relation is true in 3 dimensions. Our large body of data for walks of unprecedented length (up to 80,000 steps) yields an updated value for the end-to-end distance and radius-of-gyration exponent ν = 0.588 ± 0.001 (95% confidence limit), in good agreement with the renormalization-group prediction. In an analytic study of random-walk models, we introduce multi-colored random-walk models and generalize the Symanzik and B.F.S. random-walk representations to the multi-colored case. We prove that the zero-component λφ²ψ² theory can be represented by a two-color mutually-repelling random-walk model, and that it becomes the mutually-avoiding walk model in the limit λ → ∞. However, our main concern and major breakthrough lies in the study of the two-point correlation function for the λφ²ψ² theory with N > 0 components. By representing it as a two-color random-walk expansion

  14. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as recurrence equations and solving them. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  15. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  16. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models

    PubMed Central

    Wise, S.M.; Lowengrub, J.S.; Cristini, V.

    2010-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  17. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models.

    PubMed

    Wise, S M; Lowengrub, J S; Cristini, V

    2011-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  18. Quality control algorithms for rainfall measurements

    NASA Astrophysics Data System (ADS)

    Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs

    2005-09-01

    One of the basic requirements for a scientific use of rain data from raingauges, ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.), if the achievable data quality could be improved. This depends on the available data quality delivered by the measuring devices and the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and literature pool have been produced. The diverse algorithms have been evaluated against VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established, where the software is collected. A large part of the work presented here is implemented in the scope of the EU project VOLTAIRE (Validation of multisensor precipitation fields and numerical modeling in Mediterranean test sites).

  19. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. If p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space, convergence is achieved in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  20. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. This problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The problem is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
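
    To make the approach concrete, here is a minimal permutation-encoded genetic algorithm for the single-vehicle special case (a TSP-like simplification of our own; a full VRP adds capacity constraints and route splitting, which this sketch omits):

        import math, random

        def route_length(route, dist):
            tour = [0] + route + [0]                   # node 0 is the depot
            return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

        def order_crossover(p1, p2):
            """OX: keep a slice of p1, fill the rest in the order of p2."""
            i, j = sorted(random.sample(range(len(p1)), 2))
            child = [None]*len(p1)
            child[i:j] = p1[i:j]
            rest = [g for g in p2 if g not in child]
            return [g if g is not None else rest.pop(0) for g in child]

        def ga_vrp(dist, pop_size=50, gens=200, pm=0.2):
            nodes = list(range(1, len(dist)))
            pop = [random.sample(nodes, len(nodes)) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda r: route_length(r, dist))
                elite = pop[:pop_size//2]              # truncation selection
                children = []
                while len(elite) + len(children) < pop_size:
                    c = order_crossover(*random.sample(elite, 2))
                    if random.random() < pm:           # swap mutation
                        a, b = random.sample(range(len(c)), 2)
                        c[a], c[b] = c[b], c[a]
                    children.append(c)
                pop = elite + children
            return min(pop, key=lambda r: route_length(r, dist))

        random.seed(0)
        pts = [(random.random(), random.random()) for _ in range(12)]
        dist = [[math.dist(a, b) for b in pts] for a in pts]
        print(route_length(ga_vrp(dist), dist))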

  1. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  2. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  3. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
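
    For readers unfamiliar with the baseline, the classical predictor-corrector scheme that the monolithic algorithms improve upon looks roughly like this (a textbook sketch on a convex homotopy, not code from the thesis; the trivial predictor simply reuses the previous solution):

        import numpy as np

        def newton(f, J, x, tol=1e-10, maxit=20):
            """Corrector: plain Newton iteration."""
            for _ in range(maxit):
                dx = np.linalg.solve(J(x), -f(x))
                x = x + dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        def homotopy_solve(F, J_F, x0, steps=20):
            """March lam from 0 to 1 on H(x, lam) = lam*F(x) + (1-lam)*(x - x0)."""
            x = np.array(x0, float)
            for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
                H  = lambda y, l=lam: l*F(y) + (1 - l)*(y - x0)
                JH = lambda y, l=lam: l*J_F(y) + (1 - l)*np.eye(len(y))
                x = newton(H, JH, x)           # corrector from previous solution
            return x

        F = lambda v: np.array([v[0]**2 - 4.0, v[1]**2 - 9.0])
        J = lambda v: np.diag([2*v[0], 2*v[1]])
        print(homotopy_solve(F, J, np.array([10.0, 10.0])))   # -> approx [2, 3]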

  4. Principal component analysis implementation in Java

    NASA Astrophysics Data System (ADS)

    Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz

    2015-09-01

    In this paper we show how the PCA (Principal Component Analysis) method can be implemented using the Java programming language. We consider using the PCA algorithm especially in the analysis of data obtained from Raman spectroscopy measurements, but other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on every platform which is supported by Java.
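
    The pipeline itself is language-agnostic; for brevity we sketch the same steps (center, covariance, eigendecomposition, projection) in numpy rather than Java, with an illustrative stand-in for a set of Raman spectra:

        import numpy as np

        def pca(X, k):
            """Project X (n_samples, n_features) onto its k principal components."""
            Xc = X - X.mean(axis=0)                    # center the data
            C = Xc.T @ Xc / (len(X) - 1)               # sample covariance
            evals, evecs = np.linalg.eigh(C)           # ascending eigenvalues
            comps = evecs[:, ::-1][:, :k]              # top-k directions
            return Xc @ comps, comps

        X = np.random.rand(50, 200)                    # 50 spectra, 200 wavenumbers
        scores, comps = pca(X, 3)
        print(scores.shape, comps.shape)               # (50, 3) (200, 3)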

  5. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  6. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  7. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  8. Numerical simulation of photoexcited polaron states in water

    SciTech Connect

    Zemlyanaya, E. V. Volokhova, A. V.; Amirkhanov, I. V.; Puzynin, I. V.; Puzynina, T. P.; Rikhvitskiy, V. S.; Lakhno, V. D.; Atanasova, P. Kh.

    2015-10-28

    We consider the dynamic polaron model of the hydrated electron state on the basis of a system of three nonlinear partial differential equations with appropriate initial and boundary conditions. A parallel algorithm for the numerical solution of this system has been developed. Its effectiveness has been tested on several multi-processor systems. A numerical simulation of the formation of polaron states in water under the action of ultraviolet laser irradiation has been performed. The numerical results are shown to be in reasonable agreement with experimental data and theoretical predictions.

  9. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    SciTech Connect

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole-like dependence and an inertial component with dipole-like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high-contrast gradients in sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.

  10. Composite Algorithms in the Teaching of Mathematical Methods.

    ERIC Educational Resources Information Center

    Dupee, B.; Martinez, Raquel; Tapia, Santiago

    1999-01-01

    Describes a prototype package that uses algorithms created within the symbolic algebra system Axiom and numerical routines from the NAG Libraries, together with the easy-to-use facilities of modern graphical interfaces. Considers the implementation of different techniques, both symbolic and numeric, for the analysis and calculation of interpolating…

  11. Subsurface biological activity zone detection using genetic search algorithms

    SciTech Connect

    Mahinthakumar, G.; Gwo, J.P.; Moline, G.R.; Webb, O.F.

    1999-12-01

    Use of genetic search algorithms for detection of subsurface biological activity zones (BAZ) is investigated through a series of hypothetical numerical biostimulation experiments. Continuous injection of dissolved oxygen and methane with periodically varying concentration stimulates the cometabolism of indigenous methanotrophic bacteria. The observed breakthroughs of methane are used to deduce possible BAZ in the subsurface. The numerical experiments are implemented in a parallel computing environment to make possible the large number of simultaneous transport simulations required by the algorithm. The results show that genetic algorithms are very efficient in locating multiple activity zones, provided the observed signals adequately sample the BAZ.

  12. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast-moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient because they do not reflect the limitations of the human eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
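
    In outline, the deblurring step solves a problem of the following form. The sketch below applies a projected subgradient method to a generic l1-regularized least-squares objective (an illustrative stand-in: the paper's blur operator, regularizer and projection set are specific to its LCD model):

        import numpy as np

        def l1_ls_subgradient(A, b, lam=0.05, steps=1000, lr=0.01):
            """min_x ||Ax - b||^2 + lam*||x||_1, with projection onto [0, 1]."""
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                g = 2*A.T @ (A @ x - b) + lam*np.sign(x)   # a subgradient
                x = np.clip(x - lr*g, 0.0, 1.0)            # projection step
                lr *= 0.999                                # diminishing step size
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 64)) / np.sqrt(40)
        x_true = np.zeros(64); x_true[[5, 20, 41]] = [1.0, 0.6, 0.8]
        b = A @ x_true + 0.005*rng.standard_normal(40)
        print(np.linalg.norm(l1_ls_subgradient(A, b) - x_true))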

  13. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  14. Numerical predictions in acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1992-01-01

    Computational Aeroacoustics (CAA) involves the calculation of the sound produced by a flow as well as the underlying flowfield itself from first principles. This paper describes the numerical challenges of CAA and recent research efforts to overcome these challenges. In addition, it includes the benefits of CAA in removing restrictions of linearity, single frequency, constant parameters, low Mach numbers, etc. found in standard acoustic analyses as well as means for evaluating the validity of these numerical approaches. Finally, numerous applications of CAA to both classical as well as modern problems of concern to the aerospace industry are presented.

  15. Numerical predictions in acoustics

    NASA Astrophysics Data System (ADS)

    Hardin, Jay C.

    Computational Aeroacoustics (CAA) involves the calculation of the sound produced by a flow as well as the underlying flowfield itself from first principles. This paper describes the numerical challenges of CAA and recent research efforts to overcome these challenges. In addition, it includes the benefits of CAA in removing restrictions of linearity, single frequency, constant parameters, low Mach numbers, etc. found in standard acoustic analyses as well as means for evaluating the validity of these numerical approaches. Finally, numerous applications of CAA to both classical as well as modern problems of concern to the aerospace industry are presented.

  16. Numerical modelling of the nonlinear evolutionary equations on the basis of an inverse scattering method

    NASA Astrophysics Data System (ADS)

    Grigorov, Igor V.

    2009-12-01

    This article considers an algorithm for the numerical modelling of the Korteweg-de Vries equation, which generates a nonlinear algorithm for digital signal processing. To realize the specified algorithm, it is proposed to use the inverse scattering method (ISM). Algorithms for the direct and inverse spectral problems, as well as for the evolution of the spectral data, are considered in detail. Modelling results are presented.

  17. Last-passage Monte Carlo algorithm for mutual capacitance.

    PubMed

    Hwang, Chi-Ok; Given, James A

    2006-08-01

    We develop and test the last-passage diffusion algorithm, a charge-based Monte Carlo algorithm, for the mutual capacitance of a system of conductors. The first-passage algorithm is highly efficient because it is charge based and incorporates importance sampling; it averages over the properties of Brownian paths that initiate outside the conductor and terminate on its surface. However, this algorithm does not seem to generalize to mutual capacitance problems. The last-passage algorithm, in a sense, is the time reversal of the first-passage algorithm; it involves averages over particles that initiate on an absorbing surface, leave that surface, and diffuse away to infinity. To validate this algorithm, we calculate the mutual capacitance matrix of the circular-disk parallel-plate capacitor and compare with the known numerical results. Good agreement is obtained.

  18. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
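
    For contrast with the unified formulation, the conventional baseline it generalizes, DE/rand/1/bin with fixed control parameters F and CR, can be sketched as follows (our own illustration, tested here on a simple sphere function):

        import numpy as np

        def de_rand_1_bin(f, lo, hi, pop_size=40, gens=300, F=0.8, CR=0.9, seed=1):
            rng = np.random.default_rng(seed)
            d = len(lo)
            pop = rng.uniform(lo, hi, (pop_size, d))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    idx = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(idx, 3, replace=False)]
                    mutant = np.clip(a + F*(b - c), lo, hi)     # fixed mutation strategy
                    cross = rng.random(d) < CR
                    cross[rng.integers(d)] = True               # keep >= 1 mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft <= fit[i]:                            # greedy selection
                        pop[i], fit[i] = trial, ft
            return pop[fit.argmin()], fit.min()

        f = lambda x: float(np.sum(x**2))                       # 5-D sphere function
        lo, hi = -5*np.ones(5), 5*np.ones(5)
        print(de_rand_1_bin(f, lo, hi))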

  19. An efficient algorithm for estimating noise covariances in distributed systems

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.; Dalcher, A.

    1985-01-01

    An efficient computational algorithm for estimating the noise covariance matrices of large linear discrete stochastic-dynamic systems is presented. Such systems arise typically by discretizing distributed-parameter systems, and their size renders computational efficiency a major consideration. The proposed adaptive filtering algorithm is based on the ideas of Belanger, and is algebraically equivalent to his algorithm. The earlier algorithm, however, has computational complexity proportional to p^6, where p is the number of observations of the system state, while the new algorithm has complexity proportional to only p^3. Further, the formulation of noise covariance estimation as a secondary filter, analogous to state estimation as a primary filter, suggests several generalizations of the earlier algorithm. The performance of the proposed algorithm is demonstrated for a distributed system arising in numerical weather prediction.

  20. Improved local linearization algorithm for solving the quaternion equations

    NASA Technical Reports Server (NTRS)

    Yen, K.; Cook, G.

    1980-01-01

    The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.

  1. On the numerical integration of FPU-like systems

    NASA Astrophysics Data System (ADS)

    Benettin, G.; Ponno, A.

    2011-03-01

    This paper concerns the numerical integration of systems of harmonic oscillators coupled by nonlinear terms, like the common FPU models. We show that the most used integration algorithm, namely leap-frog, behaves very gently with such models, preserving in a beautiful way some peculiar features which are known to be very important in the dynamics, in particular the “selection rules” which regulate the interaction among normal modes. This explains why leap-frog, in spite of being a low order algorithm, behaves so well, as numerical experimentalists always observed. At the same time, we show how the algorithm can be improved by introducing, at a low cost, a “counterterm” which eliminates the dominant numerical error.
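
    A minimal sketch of the integrator in question, applied to an FPU-alpha chain with fixed ends (parameters illustrative):

        import numpy as np

        def fpu_accel(q, alpha=0.25):
            """Accelerations for an FPU-alpha chain, q[0] = q[-1] = 0 (fixed ends)."""
            dq = np.diff(q)                        # relative displacements
            force = dq + alpha*dq**2               # nonlinear spring force
            a = np.zeros_like(q)
            a[1:-1] = force[1:] - force[:-1]
            return a

        def leapfrog(q, p, dt, nsteps, alpha=0.25):
            """Kick-drift-kick leap-frog; symplectic, hence gentle with FPU dynamics."""
            for _ in range(nsteps):
                p = p + 0.5*dt*fpu_accel(q, alpha)
                q = q + dt*p
                p = p + 0.5*dt*fpu_accel(q, alpha)
            return q, p

        N = 32
        x = np.arange(N + 1)
        q = 0.1*np.sin(np.pi*x/N)                  # energy in the lowest normal mode
        p = np.zeros_like(q)
        q, p = leapfrog(q, p, dt=0.05, nsteps=10000)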

  2. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.

  3. (Integration of numeric and symbolic for scientific computation): Technical progress report, December 1, 1981-November 30, 1982

    SciTech Connect

    Not Available

    1982-01-01

    The emphasis of this research was on (1) the construction of workable numerical algorithms, and (2) algorithms and systems for interfacing numerical and symbolic computing techniques. The specific items investigated were: an asymptotic analysis of basic iterative methods for matrix equations arising from finite element methods; the use of symbolic and numerical techniques on the Whittaker conjecture in function theory; and the integration of symbolic (exact) and numerical (approximate) computing techniques.

  4. A frictional sliding algorithm for liquid droplets

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2016-08-01

    This work presents a new frictional sliding algorithm for liquid menisci in contact with solid substrates. In contrast to solid-solid contact, the liquid-solid contact behavior is governed by the contact line, where a contact angle forms and undergoes hysteresis. The new algorithm admits arbitrary meniscus shapes and arbitrary substrate roughness, heterogeneity and compliance. It is discussed and analyzed in the context of droplet contact, but it also applies to liquid films and solids with surface tension. The droplet is modeled as a stabilized membrane enclosing an incompressible medium. The contact formulation is considered rate-independent such that hydrostatic conditions apply. Three distinct contact algorithms are needed to describe the cases of frictionless surface contact, frictionless line contact and frictional line contact. For the latter, a predictor-corrector algorithm is proposed in order to enforce the contact conditions at the contact line and thus distinguish between the cases of advancing, pinning and receding. The algorithms are discretized within a monolithic finite element formulation. Several numerical examples are presented to illustrate the numerical and physical behavior of sliding droplets.

  5. Carbon export algorithm advancements in models

    NASA Astrophysics Data System (ADS)

    Çağlar Yumruktepe, Veli; Salihoğlu, Barış

    2015-04-01

    The rate at which anthropogenic CO2 is absorbed by the oceans remains a critical question under investigation by climate researchers. Construction of a complete carbon budget requires better understanding of air-sea exchanges and of the processes controlling the vertical and horizontal transport of carbon in the ocean, particularly the biological carbon pump. Improved parameterization of carbon sequestration within ecosystem models is vital to better understand and predict changes in the global carbon cycle. Due to the complexity of the processes controlling particle aggregation, sinking and decomposition, existing ecosystem models necessarily parameterize carbon sequestration using simple algorithms. The development of improved algorithms describing carbon export and sequestration, suitable for inclusion in numerical models, is ongoing work. Unique algorithms used in state-of-the-art ecosystem models, together with new experimental results obtained from mesocosm experiments and open-ocean observations, have been inserted into a common 1D pelagic ecosystem model for testing purposes. The model was implemented at the time-series stations in the North Atlantic (BATS, PAP and ESTOC) and evaluated against carbon export datasets. The algorithms target plankton functional types, grazing and vertical movement of zooplankton, and the remineralization, aggregation and ballasting dynamics of organic matter. Ultimately, it is intended to feed the improved algorithms to the 3D modelling community for inclusion in coupled numerical models.

  6. Numerical simulations of plasmas

    SciTech Connect

    Dnestrovskii, Y.N.; Kostomarov, D.P.

    1986-01-01

    This book presents a modern, consistent, and systematic development of numerical computer simulation of plasmas in controlled thermonuclear fusion. The authors focus on recent Soviet research in mathematical modeling of Tokamak plasmas and present kinetic, hydrodynamic, and transport models.

  7. Rocket engine numerical simulator

    NASA Technical Reports Server (NTRS)

    Davidian, Ken

    1993-01-01

    The topics are presented in viewgraph form and include the following: a rocket engine numerical simulator (RENS) definition; objectives; justification; approach; potential applications; potential users; RENS work flowchart; RENS prototype; and conclusion.

  8. Rocket engine numerical simulation

    NASA Technical Reports Server (NTRS)

    Davidian, Ken

    1993-01-01

    The topics are presented in viewgraph form and include the following: a definition of the rocket engine numerical simulator (RENS); objectives; justification; approach; potential applications; potential users; RENS work flowchart; RENS prototype; and conclusions.

  9. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective (degenerate) systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes clearly illustrate the reliability of this method compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
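
    The report's robust cost evaluation rests on the Padé approximation of the matrix exponential, which, unlike diagonalization, remains well defined for defective systems. As a minimal sketch of the same idea (using Van Loan's augmented-matrix identity rather than the report's exact formulation), the finite-horizon quadratic cost of a linear system can be evaluated entirely through scipy's Padé-based expm:

```python
import numpy as np
from scipy.linalg import expm  # scipy's expm uses a scaled Pade approximation

def finite_horizon_cost(A, Q, x0, T):
    """Evaluate J = int_0^T x(t)' Q x(t) dt for xdot = A x, x(0) = x0,
    via Van Loan's augmented-matrix identity.  Because expm is Pade-based,
    this works even when A is defective (not diagonalizable)."""
    n = A.shape[0]
    M = np.block([[-A.T, Q], [np.zeros((n, n)), A]])
    E = expm(M * T)
    F22 = E[n:, n:]          # equals expm(A T)
    G = E[:n, n:]            # upper-right block of the augmented exponential
    W = F22.T @ G            # equals int_0^T expm(A's) Q expm(As) ds
    return float(x0 @ W @ x0)

# A defective (Jordan-block) system: diagonalization-based methods struggle here.
A = np.array([[-1.0, 1.0], [0.0, -1.0]])
Q = np.eye(2)
x0 = np.array([1.0, 1.0])
print(finite_horizon_cost(A, Q, x0, T=50.0))
```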

  10. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  11. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive, wavelet-based algorithm that can efficiently solve multi-dimensional compressible reactive flow problems. This work demonstrates the method's great potential for performing direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale character, and their solution requires combining advanced numerical techniques with modern computational resources.
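
    The core of any wavelet-adaptive scheme is thresholding detail coefficients to decide where resolution is needed. The sketch below is a one-dimensional Haar illustration of that principle (not the parallel multi-dimensional method developed in the thesis); it flags the regions of a profile containing a sharp front for refinement:

```python
import numpy as np

def haar_details(u):
    """One level of the Haar wavelet transform: return (coarse, detail)."""
    u = u.reshape(-1, 2)
    coarse = (u[:, 0] + u[:, 1]) / np.sqrt(2.0)
    detail = (u[:, 0] - u[:, 1]) / np.sqrt(2.0)
    return coarse, detail

def flag_refinement(u, eps=1e-3, levels=4):
    """Flag grid blocks whose wavelet detail coefficients exceed eps.
    Large details mark sharp features (e.g. flame fronts) that need
    fine resolution; elsewhere the grid can be coarsened."""
    flags = []
    coarse = u
    for _ in range(levels):
        coarse, detail = haar_details(coarse)
        flags.append(np.abs(detail) > eps * np.max(np.abs(u)))
    return flags  # flags[k][i]: refine block i at level k

# A smooth profile with an embedded sharp front
x = np.linspace(0.0, 1.0, 256)
u = np.tanh((x - 0.5) / 0.01) + 0.1 * np.sin(2 * np.pi * x)
for lev, f in enumerate(flag_refinement(u)):
    print(f"level {lev}: {f.sum():3d} of {f.size} blocks flagged")
```

    Only the blocks straddling the front exceed the threshold, which is precisely the memory saving such methods exploit: the smooth background is represented by a handful of coarse coefficients.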

  12. Numerical Techniques in Acoustics

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J. (Compiler)

    1985-01-01

    This is a compilation of abstracts from the Numerical Techniques in Acoustics Forum held at the ASME Winter Annual Meeting. The forum provided informal presentation and exchange of information on ongoing acoustics work in finite elements, finite differences, boundary elements, and other numerical approaches. It was also intended to give participants time to raise questions on unresolved problems and to generate discussion of possible approaches and methods of solution.

  13. Hardware-accelerated Components for Hybrid Computing Systems

    SciTech Connect

    Chavarría-Miranda, Daniel; Nieplocha, Jaroslaw; Gorton, Ian

    2008-10-31

    We present a study on the use of component technology for encapsulating platform-specific hardware-accelerated algorithms on hybrid HPC systems. Our research shows that component technology can yield significant benefits from a software engineering point of view: it increases encapsulation and portability and reduces or eliminates platform dependence for hardware-accelerated algorithms. As a demonstration of this concept, we discuss our experience in designing, implementing and integrating an FPGA-accelerated kernel for Polygraph, an application in computational proteomics.

  14. Batch process monitoring based on multiple-phase online sorting principal component analysis.

    PubMed

    Lv, Zhaomin; Yan, Xuefeng; Jiang, Qingchao

    2016-09-01

    Existing phase-based batch or fed-batch process monitoring strategies generally have two problems: (1) the phase number is difficult to determine, and (2) batch data have uneven lengths. In this study, a multiple-phase online sorting principal component analysis modeling strategy (MPOSPCA) is proposed to monitor multiple-phase batch processes online. Based on all batches of off-line normal data, a new multiple-phase partition algorithm is proposed, in which k-means clustering and a defined average Euclidean radius are employed to determine the multiple-phase data sets and the phase number. Principal component analysis is then applied to build the model in each phase, and all the components are retained. In online monitoring, the Euclidean distance is used to select the monitoring model. All the components undergo online sorting through a parameter defined by Bayesian inference (BI), and the first several components are retained to calculate the T(2) statistics. Finally, the respective probability indices of [Formula: see text] are obtained using BI as the moving-average strategy. The feasibility and effectiveness of MPOSPCA are demonstrated through a simple numerical example and the fed-batch penicillin fermentation process.
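
    The T(2) building block of such PCA monitoring schemes is standard and easy to state in code. The sketch below covers only the single-phase T(2) computation, not the multiple-phase sorting or Bayesian weighting proposed in the paper: it fits a PCA model on normal operating data and scores new samples against it.

```python
import numpy as np

def fit_pca_monitor(X, n_keep):
    """Fit a PCA monitoring model on normal operating data X (samples x vars)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd                            # autoscale the training data
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1][:n_keep]    # retain leading components
    return mu, sd, eigvec[:, order], eigval[order]

def t2_statistic(x, mu, sd, P, lam):
    """Hotelling T^2 of a new sample in the retained principal subspace."""
    t = P.T @ ((x - mu) / sd)                    # scores of retained components
    return float(np.sum(t**2 / lam))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))  # correlated normal data
mu, sd, P, lam = fit_pca_monitor(X, n_keep=3)
x_ok = X[0]
x_fault = x_ok + 5.0                                     # a shifted (faulty) sample
print(t2_statistic(x_ok, mu, sd, P, lam), t2_statistic(x_fault, mu, sd, P, lam))
```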

  15. Fast Fourier Transform algorithm design and tradeoffs

    NASA Technical Reports Server (NTRS)

    Kamin, Ray A., III; Adams, George B., III

    1988-01-01

    The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target of an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The resulting Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
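
    The serial building block underlying all such parallel designs is the radix-2 decimation-in-time recursion, sketched below in Python as an illustration of the algorithm's structure (not the Connection Machine implementation studied here):

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time Cooley-Tukey FFT.
    len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])                 # transform of even-indexed samples
    odd = fft_radix2(x[1::2])                  # transform of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,    # butterfly combine
                           even - twiddle * odd])

x = np.random.default_rng(1).normal(size=16).astype(complex)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

    The butterfly structure is what makes the transform attractive on SIMD machines: each level performs the same combine operation on every pair, so the work maps naturally onto lockstep processing elements, with the inter-processor communication pattern being the main design issue.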

  16. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published, leaving users in a quandary about which one to use. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  17. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because it overcomes the limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
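
    The chemical portion of the algorithm, minimizing Gibbs free energy subject to conservation constraints, can be sketched for a single toy reaction. The example below uses illustrative dimensionless chemical potentials, not real thermochemical data or this algorithm's actual minimizer; it finds the equilibrium of an N2 <-> 2N dissociation with scipy:

```python
import numpy as np
from scipy.optimize import minimize

# Dimensionless standard-state chemical potentials mu_i/RT (illustrative values
# only) for a toy N2 <-> 2N dissociation at fixed temperature and unit pressure.
MU0 = np.array([-20.0, -5.0])   # [N2, N]
A = np.array([[2.0, 1.0]])      # moles of N atoms carried by each species
b = np.array([2.0])             # total N atoms (from 1 mol N2 initially)

def gibbs(n):
    """Mixture Gibbs energy G/RT for an ideal gas at unit pressure."""
    n = np.maximum(n, 1e-12)    # keep the logarithm well defined
    return float(np.sum(n * (MU0 + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.5, 1.0]),
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
               bounds=[(1e-12, None)] * 2, method="SLSQP")
print("equilibrium moles [N2, N]:", res.x)
```

    In the flow solver this minimization runs at every grid point, with the element-balance constraint matrix extended to all atoms and ions present and total enthalpy conserved as well.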

  18. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role in optimizing the considered system, and quite often also in its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order to working precision. Quite often, additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk discusses two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano-optics illustrate these advanced optimization approaches.
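
    The simplest instance of AD is forward mode with dual numbers: every intermediate value carries its derivative, so the chain rule is applied exactly at each elementary operation. The sketch below is a minimal Python illustration of that idea, not a production AD tool:

```python
import math

class Dual:
    """Forward-mode AD with dual numbers: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)   # sum rule
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    """Chain rule for the sine of a dual number."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x * sin(x) + 3x] at x = 2, exact to working precision
x = Dual(2.0, 1.0)        # seed dx/dx = 1
y = x * sin(x) + 3 * x
print(y.val, y.dot)       # value and exact derivative: sin(2) + 2 cos(2) + 3
```

    Because each operation propagates exact derivative values rather than symbolic expressions or finite-difference estimates, the result is accurate to working precision, which is the property the abstract emphasizes.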

  19. Principal Component Geostatistical Approach (PCGA) for Large-Scale and Joint Subsurface Inverse Problems

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Kitanidis, P. K.

    2014-12-01

    The geostatistical approach (GA) to inversion has been applied in many engineering settings to estimate unknown parameter functions and quantify the estimation uncertainty. Thanks to recent advances in sensor technology, large-scale and joint inversions have become more common, and implementing the traditional GA algorithm would require thousands of expensive numerical simulation runs, which is computationally infeasible. To overcome this challenge, we present the Principal Component Geostatistical Approach (PCGA), which uses the leading principal components of the prior information to avoid expensive sensitivity computations and to obtain an approximate GA solution and its uncertainty with a few hundred numerical simulation runs. As we show in this presentation, the PCGA estimate is close to, and often nearly identical to, the estimate obtained from the full-model GA, while reducing the computation time by a factor of 10 or more in most practical cases. Furthermore, our method is "black-box" in the sense that any numerical simulation software can be linked to PCGA to perform the geostatistical inversion. This enables a hassle-free application of GA to multi-physics problems and to joint inversion of different measurement types, such as hydrologic, chemical, and geophysical data, obviating the need to compute measurement sensitivities explicitly through expensive coupled numerical simulations. Lastly, PCGA is easily implemented to run the numerical simulations in parallel, taking advantage of high-performance computing environments. We show the effectiveness and efficiency of our method with several examples, such as 3-D transient hydraulic tomography, joint inversion of head and tracer data, and geochemical heterogeneity identification.
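
    The two ingredients of PCGA, a low-rank basis built from the prior covariance and matrix-free Jacobian products obtained by finite differences of the forward model, can be sketched in a few lines. In the sketch below the forward model is a toy stand-in for an expensive simulator, and all sizes and the covariance kernel are illustrative assumptions:

```python
import numpy as np

def pcga_basis(Q, k):
    """Leading-k principal components of the prior covariance, Q ~ Z Z^T."""
    lam, V = np.linalg.eigh(Q)
    idx = np.argsort(lam)[::-1][:k]
    return V[:, idx] * np.sqrt(lam[idx])   # columns are the scaled components

def jacobian_action(h, s, Z, delta=1e-4):
    """Matrix-free products H @ zeta_i via finite differences of the forward
    model h -- one extra model run per retained component, no full Jacobian."""
    h0 = h(s)
    return np.column_stack([(h(s + delta * z) - h0) / delta for z in Z.T])

# Toy forward model standing in for an expensive simulator (assumption).
def h(s):
    return np.array([s.sum(), (s**2).sum()])

m = 50
x = np.linspace(0.0, 1.0, m)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance
Z = pcga_basis(Q, k=5)
HZ = jacobian_action(h, np.zeros(m), Z)
print(HZ.shape)   # (2 observations, 5 retained components)
```

    Because only matrix-vector products with the forward model are needed, the simulator stays a black box, and the k model runs inside jacobian_action are independent and trivially parallelizable.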

  20. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
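
    The elementary step shared by these algorithms is the reversible 3-bit compression, which boosts the polarization of one spin at the expense of two others that are then refreshed by contact with the heat bath. A minimal sketch of the resulting recursion follows (the basic step only, not the SOPAC cycle structure analyzed in the paper):

```python
def compress3(eps):
    """Polarization of the target spin after the basic 3-bit reversible
    compression acting on three spins of equal polarization eps; for small
    eps this boosts polarization by roughly a factor of 3/2."""
    return (3 * eps - eps**3) / 2

# Recursive boosting: each round refreshes two helper spins via the heat
# bath and compresses; polarization grows until it saturates near 1.
eps = 0.01
for level in range(10):
    print(f"level {level}: polarization = {eps:.4f}")
    eps = compress3(eps)
```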