Science.gov

Sample records for algorithm components numerical

  1. Optimizing connected component labeling algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2005-04-01

    This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among the neighbors to reduce the number examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer-based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of pointer-based rooted trees speeds up the connected component labeling algorithms by a factor of 5 to 100 in our tests on random binary images.
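
    As an informal illustration of both strategies, the following minimal Python sketch (my own reconstruction, not the authors' code) labels 8-connected components in two raster passes, storing label equivalences in a flat integer array instead of pointer-based trees; for clarity it examines all four previously scanned neighbors, where the paper's first strategy would skip most of them via the dependency analysis.

      import numpy as np

      def label_components(img):
          """Two-pass 8-connected labeling of a binary image.

          Equivalences live in a flat array (parent), not pointer-based trees,
          and the second pass emits consecutive final labels.
          """
          h, w = img.shape
          labels = np.zeros((h, w), dtype=np.int32)
          parent = [0]                      # parent[l]: representative of label l
          next_label = 1

          def find(l):
              while parent[l] != l:
                  l = parent[l]
              return l

          def union(a, b):
              ra, rb = find(a), find(b)
              if ra != rb:
                  parent[max(ra, rb)] = min(ra, rb)

          for y in range(h):                # first pass: provisional labels
              for x in range(w):
                  if not img[y, x]:
                      continue
                  neigh = []                # previously scanned neighbors: W, NW, N, NE
                  if x > 0 and labels[y, x - 1]:
                      neigh.append(labels[y, x - 1])
                  if y > 0:
                      for dx in (-1, 0, 1):
                          if 0 <= x + dx < w and labels[y - 1, x + dx]:
                              neigh.append(labels[y - 1, x + dx])
                  if not neigh:
                      parent.append(next_label)
                      labels[y, x] = next_label
                      next_label += 1
                  else:
                      labels[y, x] = min(neigh)
                      for n in neigh:
                          union(labels[y, x], n)

          final = {}                        # second pass: consecutive final labels
          for y in range(h):
              for x in range(w):
                  if labels[y, x]:
                      r = find(labels[y, x])
                      final.setdefault(r, len(final) + 1)
                      labels[y, x] = final[r]
          return labels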

  2. Numerical Algorithms and Parallel Tasking.

    DTIC Science & Technology

    1984-07-01

    Principal Investigator: Virginia Klema; Research Staff: George Cybenko and Elizabeth Ducot. During the period May 15, 1983 through May 14, 1984...Virginia Klema and Elizabeth Ducot have been supported for four months, and George Cybenko has been supported for one month. During this time system...algorithms or applications is the responsibility of the user. Virginia Klema and Elizabeth Ducot presented a description of the concurrent computing

  3. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  4. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  5. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form ẋ(t) = F(x(t)), x(0) = p ∈ G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if they start on the group, they remain on the group. In addition, if G is the abelian group R^N, they reduce to the classical Runge-Kutta algorithms. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for an algorithm to yield an rth-order numerical integrator, and to analyze the resulting algorithms.
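
    The abstract gives no formulas beyond the setup; as a hedged illustration of the group-preservation property (the simplest "Lie-Euler" integrator on the matrix group SO(3), my choice of example rather than the authors' Runge-Kutta-like construction), the Python sketch below shows that the step x <- expm(hA)x stays on the group while classical forward Euler drifts off it.

      import numpy as np
      from scipy.linalg import expm

      A = np.array([[0., -1., 0.],   # a fixed element of the Lie algebra so(3)
                    [1.,  0., 0.],
                    [0.,  0., 0.]])
      h, steps = 0.1, 200
      x_lie = np.eye(3)              # start at the identity, an element of SO(3)
      x_euler = np.eye(3)
      step = expm(h * A)             # exact exponential of the algebra element
      for _ in range(steps):
          x_lie = step @ x_lie                 # Lie-Euler: stays on the group
          x_euler = x_euler + h * (A @ x_euler)  # classical Euler: does not

      def off_group(x):              # distance from orthogonality, ||x^T x - I||
          return np.linalg.norm(x.T @ x - np.eye(3))

      print(off_group(x_lie))        # ~1e-15: remains on SO(3) to roundoff
      print(off_group(x_euler))      # grows steadily: Euler leaves the group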

  6. Numerical linear algebra algorithms and software

    NASA Astrophysics Data System (ADS)

    Dongarra, Jack J.; Eijkhout, Victor

    2000-11-01

    The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra - in particular, the solution of linear systems of equations - lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in linear algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense matrices and those for sparse matrices.
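
    As a hedged, present-day illustration of that dense/sparse split (SciPy's LAPACK-backed dense solver versus its sparse direct solver; my example, not code from the paper):

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000
      b = np.random.rand(n)

      # Dense: LAPACK LU factorization; all n^2 entries are stored and factored.
      A_dense = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, well conditioned
      x_dense = np.linalg.solve(A_dense, b)

      # Sparse: a tridiagonal system in CSC format; only ~3n nonzeros are
      # stored, and the sparse factorization exploits that structure.
      main = 2.0 * np.ones(n)
      off = -1.0 * np.ones(n - 1)
      A_sparse = sp.diags([off, main, off], [-1, 0, 1], format='csc')
      x_sparse = spla.spsolve(A_sparse, b)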

  7. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  8. Multiresolution representation and numerical algorithms: A brief review

    NASA Technical Reports Server (NTRS)

    Harten, Amiram

    1994-01-01

    In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
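
    A hedged, minimal illustration of the compression idea (a one-level Haar transform rather than Harten's general multiresolution framework): eliminating sufficiently small scale coefficients sparsifies the representation while the reconstruction error stays below the threshold.

      import numpy as np

      def haar_step(u):
          """One level of the Haar transform: coarse averages and scale coefficients."""
          avg = 0.5 * (u[0::2] + u[1::2])
          det = 0.5 * (u[0::2] - u[1::2])
          return avg, det

      def haar_inverse(avg, det):
          u = np.empty(2 * avg.size)
          u[0::2] = avg + det
          u[1::2] = avg - det
          return u

      u = np.sin(np.linspace(0, 2 * np.pi, 256))   # smooth data -> small scale coefficients
      avg, det = haar_step(u)
      det[np.abs(det) < 1e-2] = 0.0                # eliminate sufficiently small coefficients
      u_approx = haar_inverse(avg, det)
      print(np.max(np.abs(u - u_approx)))          # error stays below the 1e-2 threshold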

  9. Fast deterministic algorithm for EEE components classification

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L. A.; Antamoshkin, A. N.; Masich, I. S.

    2015-10-01

    The authors consider the problem of automatic classification of electronic, electrical and electromechanical (EEE) components based on results of the test control. Electronic components of the same type used in a high-quality unit must be produced as a single production batch from a single batch of raw materials. Data from the test control are used for splitting a shipped lot of components into several classes representing the production batches. Methods such as k-means++ clustering or evolutionary algorithms combine local search and random search heuristics. The proposed fast algorithm returns a unique result for each data set. The result is comparatively precise. If the data processing is performed by the customer of the EEE components, this feature of the algorithm allows easy checking of the results by a producer or supplier.

  10. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented, experience-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  11. A New Component Labelling And Merging Algorithm

    NASA Astrophysics Data System (ADS)

    Lochovsky, Amelia F.

    1987-10-01

    Component labelling is an important part of region analysis in image processing. Component labelling consists of assigning labels to pixels in the image such that adjacent pixels are given the same labels. There are various approaches to component labelling. Some require random access to the processed image; some assume a special structure of the image, such as a quad tree. Algorithms based on a sequential scan of the image are attractive for hardware implementation. One method of labelling is based on a fixed-size local window which includes the previous line. Due to the fixed-size window and the sequential fashion of the labelling process, different branches of the same object may be given different labels and later found to be connected to each other. These labels are considered to be equivalent and must later be collected to correctly represent one single object. This approach can be found in [F,FE,R]. Assume an input binary image of size NxM. Using these labelling algorithms, the number of equivalent pairs generated is bounded by O(N*M). The number of distinct labels is also bounded by O(N*M). There is no known algorithm that merges the equivalent label pairs in time linear in the number of pairs, that is, in time bounded by O(N*M). We propose a new labelling algorithm which interleaves the labelling with the merging process. The labelling and the merging are combined in one algorithm. Merged label information is kept in an equivalence table which is used to guide the labelling. In general, the algorithm produces fewer equivalent label pairs. The combined labelling and merging algorithm is O(N*M), where NxM is the size of the image. Section II describes the algorithm. Section III gives some examples. We discuss implementation issues in Section IV, and further discussion and conclusions are given in Section V.

  12. Component Labeling Algorithm For Video Rate Processing

    NASA Astrophysics Data System (ADS)

    Gotoh, Toshiyuki; Ohta, Yoshiyuki; Yoshida, Masumi; Shirai, Yoshio

    1987-10-01

    In this paper, we propose a raster scanning algorithm for component labeling, which enables processing under a pipeline architecture. In the raster scanning algorithm, labels are provisionally assigned to each pixel of components and, at the same time, the connectivities of labels are detected during the first scan. Those labels are classified into groups based on the connectivities. Finally, provisional labels are updated using the result of classification and a unique label is assigned to each pixel of components. However, in the conventional algorithm, the classification process needs a vast number of operations, which prevents pipeline processing from being realized. We have developed a method of preprocessing to reduce the number of provisional labels, which limits the number of label connectivities. We have also developed a new classification method whose operation count is proportional only to the number of label connectivities. We have run computer simulation experiments to verify this algorithm. The experimental results show that we can process 512 x 512 x 8 bit images at video rate (1/30 sec per image) when this algorithm is implemented in hardware.

  13. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS because such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.

  14. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  15. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1982-01-01

    Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second-order lambda matrices.

  16. Numerical Algorithm for Delta of Asian Option.

    PubMed

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the geometric Asian option and use this analytical form as a control to numerically calculate the Δ of the arithmetic Asian option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with those of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options.
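
    A hedged Python sketch of the control-variate pattern, applied to the option price for brevity (the paper applies the same idea to Δ): the geometric Asian call has a closed form because a geometric average of lognormals is lognormal, and it serves as the control for the arithmetic Asian estimate. All parameter values below are arbitrary illustrations.

      import numpy as np
      from scipy.stats import norm

      S0, K, r, sigma, T, m = 100.0, 100.0, 0.05, 0.2, 1.0, 50
      dt = T / m
      rng = np.random.default_rng(0)

      def geometric_asian_call():
          # ln(geometric average) is normal with these moments under GBM.
          mu = np.log(S0) + (r - 0.5 * sigma**2) * dt * (m + 1) / 2
          s2 = sigma**2 * dt * (m + 1) * (2 * m + 1) / (6 * m)
          s = np.sqrt(s2)
          d2 = (mu - np.log(K)) / s
          d1 = d2 + s
          return np.exp(-r * T) * (np.exp(mu + 0.5 * s2) * norm.cdf(d1) - K * norm.cdf(d2))

      n_paths = 100_000
      z = rng.standard_normal((n_paths, m))
      log_s = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
      S = np.exp(log_s)

      disc = np.exp(-r * T)
      Y = disc * np.maximum(S.mean(axis=1) - K, 0.0)              # arithmetic Asian payoff
      C = disc * np.maximum(np.exp(log_s.mean(axis=1)) - K, 0.0)  # geometric Asian payoff (control)
      b = np.cov(Y, C)[0, 1] / np.var(C)                          # optimal control coefficient
      Y_cv = Y - b * (C - geometric_asian_call())

      print(Y.mean(), Y.std() / np.sqrt(n_paths))        # plain Monte Carlo estimate
      print(Y_cv.mean(), Y_cv.std() / np.sqrt(n_paths))  # control variate: far smaller error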

  17. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
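
    The abstract contains no code; as a quick demonstration of what an integer relation algorithm recovers, mpmath ships a PSLQ implementation. Since the golden ratio satisfies φ² = φ + 1, the vector (1, φ, φ²) has the relation 1·1 + 1·φ - 1·φ² = 0:

      from mpmath import mp, mpf, sqrt, pslq

      mp.dps = 50                        # integer relation detection needs high precision
      phi = (1 + sqrt(5)) / 2            # golden ratio
      print(pslq([mpf(1), phi, phi**2])) # -> [1, 1, -1] (up to an overall sign)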

  18. A component-labeling algorithm based on contour tracing

    NASA Astrophysics Data System (ADS)

    Qiu, Liudong; Li, Zushu

    2007-12-01

    A new method for finding connected components from binary images is presented in this paper. The main step of this method is to use a contour tracing technique to detect component contours, and use the information of contour to fill in interior areas. All the component points are traced by this algorithm in a single pass and are assigned either a new label or the same label of the contour pixels. Comparative experiment results show that Our algorithm, moreover, is a fast method that not only labels components but also extracts component contours at the same time, which proves to be more useful than those algorithms that only label components.

  19. Numerical comparison of Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1977-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
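
    The U-D factorization itself is involved; as a hedged miniature of the underlying numerical issue (standard textbook material, not the paper's code), the sketch below contrasts the conventional covariance update with the Joseph-form update in single precision. The conventional form loses positive definiteness in exactly the way the abstract describes; factorized (U-D) filters achieve the same robustness as the Joseph form with less arithmetic.

      import numpy as np

      def kalman_update(P, H, R, joseph):
          S = H @ P @ H.T + R
          K = (P @ H.T) @ np.linalg.inv(S)
          I = np.eye(P.shape[0], dtype=P.dtype)
          if joseph:
              A = I - K @ H
              return A @ P @ A.T + K @ R @ K.T   # Joseph form: sum of PSD terms
          return (I - K @ H) @ P                 # conventional form: cancellation-prone

      # Stress case: one nearly known state component plus a nearly exact
      # measurement, computed in single precision.
      P = np.diag([1e-8, 1.0]).astype(np.float32)
      H = np.array([[1.0, 1.0]], dtype=np.float32)
      R = np.array([[1e-8]], dtype=np.float32)

      for joseph in (False, True):
          Pn = kalman_update(P, H, R, joseph)
          print(joseph, float(np.linalg.eigvalsh(Pn.astype(np.float64)).min()))
      # The conventional update returns an indefinite covariance (negative
      # eigenvalue); the Joseph form stays positive definite.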

  20. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    SciTech Connect

    E.A. Belli; G.W. Hammett

    2004-12-17

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.

  1. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

    NASA Astrophysics Data System (ADS)

    Choptuik, M. W.

    An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.

  2. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed and the identification of linear systems by the quadrature method is investigated.

  3. An efficient algorithm for numerical airfoil optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1979-01-01

    A new optimization algorithm is presented. The method is based on sequential application of a second-order Taylor's series approximation to the airfoil characteristics. Compared to previous methods, design efficiency improvements of more than a factor of 2 are demonstrated. If multiple optimizations are performed, the efficiency improvements are more dramatic due to the ability of the technique to utilize existing data. The method is demonstrated by application to subsonic and transonic airfoil design but is a general optimization technique and is not limited to a particular application or aerodynamic analysis.

  4. A connected component labeling algorithm for wheat root thinned image

    NASA Astrophysics Data System (ADS)

    Mu, ShaoMin; Zha, XuHeng; Du, HaiYang; Hao, QingBo; Chang, TengTeng

    Measuring wheat root length manually with a ruler wastes time and energy and has low precision. Aiming at this problem, a connected component labeling algorithm for thinned wheat root images is presented in this paper. The algorithm is realized on the basis of region-growing ideas using a dynamic queue, and needs only one scan to finish the labeling process. For labeling thinned wheat root images, the algorithm was compared with three other algorithms; the experimental results show that its performance is good and that it is well suited to connected component labeling of thinned wheat root images.
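
    A hedged Python sketch of the queue-driven, single-scan region-growing idea (my reconstruction; the paper gives no code):

      import numpy as np
      from collections import deque

      def label_by_region_growing(img):
          """Label 8-connected foreground components of a binary image in one scan.

          Each unlabeled foreground pixel seeds a breadth-first flood fill driven
          by a dynamic queue, so every pixel is visited once and no equivalence
          table is needed.
          """
          h, w = img.shape
          labels = np.zeros((h, w), dtype=np.int32)
          current = 0
          for y in range(h):
              for x in range(w):
                  if img[y, x] and not labels[y, x]:
                      current += 1
                      labels[y, x] = current
                      queue = deque([(y, x)])
                      while queue:
                          cy, cx = queue.popleft()
                          for dy in (-1, 0, 1):
                              for dx in (-1, 0, 1):
                                  ny, nx = cy + dy, cx + dx
                                  if (0 <= ny < h and 0 <= nx < w
                                          and img[ny, nx] and not labels[ny, nx]):
                                      labels[ny, nx] = current
                                      queue.append((ny, nx))
          return labels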

  5. Concurrent Computing: Numerical Algorithms and Some Applications.

    DTIC Science & Technology

    1986-07-15

    determinant of the harmonic frequencies. This result was obtained via a combination of relationships using classical trigonometric moment theory and...component, the output management subsystem, is the most problem dependent. Current plans call for the design of basic tools for displaying results which...will be augmented as particular applications are tried. During this same time period, we plan to establish a network linking the project's concurrent

  6. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for numerical high-dimensional integration over tera-scale data points. The implemented algorithm uses Sobol quasi-sequences to generate the samples. The Sobol sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI/OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
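
    A hedged sketch of the approach, with SciPy's Sobol generator standing in for the authors' implementation: low-discrepancy points cover the unit cube evenly, and the integral estimate is simply the sample mean of the integrand.

      import numpy as np
      from scipy.stats import qmc

      d = 6                                                # integration domain [0,1]^d
      f = lambda x: np.prod(1 + 0.1 * (x - 0.5), axis=1)   # smooth test integrand; exact integral = 1

      sampler = qmc.Sobol(d=d, scramble=True, seed=0)
      x = sampler.random_base2(m=16)   # 2^16 Sobol points (powers of 2 preserve balance)
      print(f(x).mean())               # quasi-Monte Carlo estimate, close to 1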

  7. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Gatski, Thomas B.

    1997-01-01

    A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.

  8. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms, alongside other search algorithms in the area of optimization such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of ABC's local search process and its bee movement (solution improvement) equation still has some weaknesses. ABC is good at avoiding traps at local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
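
    The solution-improvement equation mentioned above is easy to sketch; the hedged Python fragment below implements the standard ABC neighbor update v_ij = x_ij + phi*(x_ij - x_kj) with greedy selection (the generic kernel from Karaboga's ABC, not the authors' HPABC, which replaces this movement with a PSO-style one).

      import numpy as np

      rng = np.random.default_rng(0)

      def abc_candidate(foods, i):
          """Neighbor of food source i: v_ij = x_ij + phi * (x_ij - x_kj)."""
          n, dim = foods.shape
          k = rng.choice([s for s in range(n) if s != i])  # a random different solution
          j = rng.integers(dim)                            # one random coordinate
          phi = rng.uniform(-1.0, 1.0)
          v = foods[i].copy()
          v[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
          return v

      # Greedy selection on a sphere objective: keep the neighbor only if better.
      sphere = lambda x: np.sum(x**2)
      foods = rng.uniform(-5, 5, size=(20, 4))
      for _ in range(2000):
          i = rng.integers(foods.shape[0])
          v = abc_candidate(foods, i)
          if sphere(v) < sphere(foods[i]):
              foods[i] = v
      print(min(sphere(f) for f in foods))   # decreases markedly toward 0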

  9. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step-size adjustment is introduced and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in its convergence characteristics while preserving the fascinating features of the original method.
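
    For context, standard cuckoo search draws its steps from a Lévy distribution; below is a hedged sketch using Mantegna's algorithm, with a simple shrinking scale standing in for the paper's adaptive adjustment (the schedule is my assumption, not the authors' rule).

      import numpy as np
      from math import gamma, sin, pi

      rng = np.random.default_rng(1)

      def levy_step(dim, beta=1.5):
          """Mantegna's algorithm for a Levy-stable step of index beta."""
          num = gamma(1 + beta) * sin(pi * beta / 2)
          den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
          sigma_u = (num / den) ** (1 / beta)
          u = rng.normal(0, sigma_u, dim)
          v = rng.normal(0, 1, dim)
          return u / np.abs(v) ** (1 / beta)

      sphere = lambda x: np.sum(x**2)
      best = rng.uniform(-5, 5, 4)
      for t in range(1, 5001):
          alpha = 0.5 / np.sqrt(t)     # adaptive scale: large early, small late
          trial = best + alpha * levy_step(best.size)
          if sphere(trial) < sphere(best):
              best = trial
      print(sphere(best))              # decreases toward 0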

  10. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algotithms and a quadratic speed increase incomparison to classical Monte Carlo methods.

  11. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO), together with two extensions for improving its performance, is presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust: they produce excellent results and outperform the other algorithms investigated in this comparison.

  12. Numerical algorithms for the atomistic dopant profiling of semiconductor materials

    NASA Astrophysics Data System (ADS)

    Aghaei Anvigh, Samira

    In this dissertation, we investigate the possibility of using scanning microscopy techniques such as scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM) for the "atomistic" dopant profiling of semiconductor materials. For this purpose, we first analyze the discrete effects of random dopant fluctuations (RDF) on SCM and SSRM measurements with nanoscale probes and show that RDF significantly affects the differential capacitance and spreading resistance of the SCM and SSRM measurements if the dimension of the probe is below 50 nm. Then, we develop a mathematical algorithm to compute the spatial coordinates of the ionized impurities in the depletion region using a set of scanning microscopy measurements. The proposed numerical algorithm is then applied to extract the (x, y, z) coordinates of ionized impurities in the depletion region for a few semiconductor materials with different doping configurations. The numerical algorithm developed to solve the above inverse problem is based on the evaluation of doping sensitivity functions of the differential capacitance, which show how sensitive the differential capacitance is to doping variations at different locations. To develop the numerical algorithm we first express the doping sensitivity functions in terms of the Gâteaux derivative of the differential capacitance, use the Riesz representation theorem, and then apply a gradient optimization approach to compute the locations of the dopants. The algorithm is verified numerically using 2-D simulations, in which the C-V curves are measured at 3 different locations on the surface of the semiconductor. Although the cases studied in this dissertation are much idealized and, in reality, the C-V measurements are subject to noise and other experimental errors, it is shown that if the differential capacitance is measured precisely, SCM measurements can potentially be used for the "atomistic" profiling of ionized impurities in doped semiconductors.

  13. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-09

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided.
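
    Their tool injects ulp-scale noise into compiled code; a hedged, much simpler imitation of the same diagnostic (perturb the inputs at machine precision and examine the spread of repeated results) fits in a few lines of Python. All names here are mine, not theirs.

      import numpy as np

      def stability_probe(f, x, trials=100):
          """Mean and std of f under relative input noise of ~1 ulp."""
          rng = np.random.default_rng(0)
          eps = np.finfo(x.dtype).eps
          results = [f(x * (1 + eps * rng.uniform(-1, 1, x.shape))) for _ in range(trials)]
          return np.mean(results), np.std(results)

      # A numerically unstable and a stable way to compute the same sample variance.
      x = np.float64(1e8) + np.random.default_rng(1).standard_normal(1000)
      unstable = lambda v: v.dot(v) / v.size - v.mean() ** 2    # catastrophic cancellation
      stable = lambda v: ((v - v.mean()) ** 2).mean()

      for f in (unstable, stable):
          mean, std = stability_probe(f, x)
          print(f"{mean:.6f} +/- {std:.2e}")   # the unstable variant shows a far larger spread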

  14. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.

  15. Two Strategies to Speed up Connected Component Labeling Algorithms

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji

    2005-11-13

    This paper presents two new strategies to speed up connected component labeling algorithms. The first strategy employs a decision tree to minimize the work performed in the scanning phase of connected component labeling algorithms. The second strategy uses a simplified union-find data structure to represent the equivalence information among the labels. For 8-connected components in a two-dimensional (2D) image, the first strategy reduces the number of neighboring pixels visited from 4 to 7/3 on average. In various tests, using a decision tree decreases the scanning time by a factor of about 2. The second strategy uses a compact representation of the union-find data structure. This strategy significantly speeds up the labeling algorithms. We prove analytically that a labeling algorithm with our simplified union-find structure has the same optimal theoretical time complexity as do the best labeling algorithms. By extensive experimental measurements, we confirm the expected performance characteristics of the new labeling algorithms and demonstrate that they are faster than other optimal labeling algorithms.

  16. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

    NASA Technical Reports Server (NTRS)

    Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

    2003-01-01

    Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.

  17. Canonical algorithms for numerical integration of charged particle motion equations

    NASA Astrophysics Data System (ADS)

    Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

    2017-02-01

    A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain the minimum possible amount of arithmetic and can be used to design accelerators and devices of electron and ion optics.
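
    The abstract describes canonical-transformation-based integration without giving formulas; as a hedged stand-in, the classic Boris scheme below illustrates the payoff of structure-preserving charged-particle integration (it is a standard method in this area, not necessarily the authors' algorithm): in a pure magnetic field the velocity update is an exact rotation, so kinetic energy never drifts, however long the run.

      import numpy as np

      def boris_push(x, v, B, qm, dt, steps):
          """Advance position x and velocity v in a static magnetic field B."""
          for _ in range(steps):
              t = 0.5 * qm * dt * B              # half-step rotation vector
              s = 2 * t / (1 + t.dot(t))
              v_prime = v + np.cross(v, t)
              v = v + np.cross(v_prime, s)       # pure rotation: |v| unchanged
              x = x + dt * v
          return x, v

      x, v = np.zeros(3), np.array([1.0, 0.0, 0.1])
      B = np.array([0.0, 0.0, 1.0])
      x, v = boris_push(x, v, B, qm=1.0, dt=0.1, steps=100_000)
      print(np.linalg.norm(v))   # still ~1.004987: kinetic energy preserved to roundoff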

  18. The development and evaluation of numerical algorithms for MIMD computers

    NASA Technical Reports Server (NTRS)

    Voigt, Robert G.

    1990-01-01

    Two activities were pursued under this grant. The first was a visitor program to conduct research on numerical algorithms for MIMD computers. The program is summarized in the following attachments. Attachment A - List of Researchers Supported; Attachment B - List of Reports Completed; and Attachment C - Reports. The second activity was a workshop on the Control of Fluid Dynamic Systems held on March 28-29, 1989. The workshop is summarized in the attachments. Attachment D - Workshop Summary; and Attachment E - List of Workshop Participants.

  19. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development [1-3] has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo-heritage lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  20. A numerical algorithm for endochronic plasticity and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Valanis, K. C.; Fan, J.

    1985-01-01

    A numerical algorithm based on the finite element method of analysis of the boundary value problem in a continuum is presented, in the case where the plastic response of the material is given in the context of endochronic plasticity. The relevant constitutive equation is expressed in incremental form and plastic effects are accounted for by the method of an induced pseudo-force in the matrix equations. The results of the analysis are compared with observed values in the case of a plate with two symmetric notches and loaded longitudinally in its own plane. The agreement between theory and experiment is excellent.

  1. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Turmon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
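
    A hedged, minimal sketch of the generic ABFT pattern the abstract describes (checksummed matrix multiplication; the library's middleware and normalization methods are more elaborate than this):

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 4))
      B = rng.standard_normal((4, 4))

      A_ext = np.vstack([A, A.sum(axis=0)])                 # append a checksum row to A
      B_ext = np.hstack([B, B.sum(axis=1, keepdims=True)])  # append a checksum column to B
      C_ext = A_ext @ B_ext                                 # checksums propagate through the product

      C_ext[1, 2] += 1e-3                                   # simulate a single-event upset

      col_check = np.abs(C_ext[:-1, :-1].sum(axis=0) - C_ext[-1, :-1])  # mismatch per column
      row_check = np.abs(C_ext[:-1, :-1].sum(axis=1) - C_ext[:-1, -1])  # mismatch per row
      tol = 1e-8 * np.abs(C_ext).max()   # threshold scaled to the data (cf. normalization above)
      print("fault at row", int(np.argmax(row_check > tol)),
            "col", int(np.argmax(col_check > tol)))         # -> row 1, col 2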

  2. CxCxC: compressed connected components labeling algorithm

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Dwivedi, Shekhar

    2007-03-01

    We propose Compressed Connected Components (CxCxC), a new fast algorithm for labeling connected components in binary images making use of compression. We break the given 3D image into non-overlapping 2x2x2 cubes of voxels (2x2 squares of pixels for 2D) and encode these binary values as the bits of a single decimal integer. We perform the connected component labeling on the resulting compressed data set. A recursive labeling approach using smart masks on the encoded decimal values is performed. The output is finally decompressed back to the original size by decimal-to-binary conversion of the cubes to retrieve the connected components in a lossless fashion. We demonstrate the efficacy of such encoding and labeling for large data sets (up to 1392 x 1040 for 2D and 512 x 512 x 336 for 3D). CxCxC reports a speed gain of 4x for 2D and 12x for 3D with memory savings of 75% for 2D and 88% for 3D over the conventional (recursive growing of component labels) connected components algorithm. We also compare our method with those of VTK and ITK and find that we outperform both, with speed gains of 3x and 6x for 3D. These features make CxCxC highly suitable for medical imaging and multimedia applications where the size of data sets and the number of connected components can be very large.
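
    A hedged sketch of the 2D encoding step (my reconstruction of the idea, not the authors' code): each non-overlapping 2x2 block of binary pixels is packed into one 4-bit code, shrinking the array the labeling pass must visit, and the decompression is lossless as the abstract requires.

      import numpy as np

      def compress_2x2(img):
          """Pack a binary image (even dimensions) into one integer per 2x2 block."""
          return (img[0::2, 0::2] << 3 | img[0::2, 1::2] << 2 |
                  img[1::2, 0::2] << 1 | img[1::2, 1::2]).astype(np.uint8)

      def decompress_2x2(packed):
          """Lossless inverse: expand each 4-bit code back to its 2x2 block."""
          h, w = packed.shape
          img = np.zeros((2 * h, 2 * w), dtype=np.uint8)
          img[0::2, 0::2] = (packed >> 3) & 1
          img[0::2, 1::2] = (packed >> 2) & 1
          img[1::2, 0::2] = (packed >> 1) & 1
          img[1::2, 1::2] = packed & 1
          return img

      img = (np.random.default_rng(0).random((8, 8)) > 0.5).astype(np.uint8)
      assert np.array_equal(img, decompress_2x2(compress_2x2(img)))  # round trip is exact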

  3. Direct Numerical Simulation of Combustion Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Owoyele, Opeoluwa; Echekki, Tarek

    2016-11-01

    We investigate the potential of accelerating chemistry integration during the direct numerical simulation (DNS) of complex fuels based on the transport equations of representative scalars that span the desired composition space using principal component analysis (PCA). The transported principal components (PCs) offer significant potential to reduce the computational cost of DNS through a reduction in the number of transported scalars, as well as the spatial and temporal resolution requirements. The strategy is demonstrated using DNS of a premixed methane-air flame in a 2D vortical flow and is extended to the 3D geometry to further demonstrate the computational efficiency of PC transport. The PCs are derived from a priori PCA of a subset of the full thermo-chemical scalars' vector. The PCs' chemical source terms and transport properties are constructed and tabulated in terms of the PCs using artificial neural networks (ANN). Comparison of DNS based on the full thermo-chemical state with DNS based on transport of 6 PCs shows excellent agreement, even for species that are not included in the PCA reduction. The transported PCs reproduce some of the salient features of strongly curved and strongly strained flames. The 2D DNS results also show a significant reduction of two orders of magnitude in the computational cost of the simulations, which enables an extension of the PCA approach to 3D DNS under similar computational requirements. This work was supported by the National Science Foundation Grant DMS-1217200.
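
    A hedged, generic sketch of the a priori PCA step (plain SVD-based PCA on synthetic stand-in data; not the authors' DNS code or ANN tabulation):

      import numpy as np

      rng = np.random.default_rng(0)
      # Stand-in data: 10,000 samples of a 30-dimensional state that actually
      # varies in only a few directions, plus small noise.
      latent = rng.standard_normal((10_000, 4))
      mixing = rng.standard_normal((4, 30))
      states = latent @ mixing + 0.01 * rng.standard_normal((10_000, 30))

      mean = states.mean(axis=0)
      centered = states - mean
      _, s, vt = np.linalg.svd(centered, full_matrices=False)

      n_pc = 6                        # the study transports 6 PCs
      basis = vt[:n_pc].T             # leading principal directions
      pcs = centered @ basis          # reduced representation (the transported scalars)
      reconstructed = pcs @ basis.T + mean
      rel_err = np.linalg.norm(states - reconstructed) / np.linalg.norm(states)
      print(f"retained variance: {np.sum(s[:n_pc]**2) / np.sum(s**2):.6f}, rel. error: {rel_err:.2e}")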

  4. A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries

    SciTech Connect

    Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P

    2003-12-15

    We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.

  5. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  6. Numerical algorithms for steady and unsteady incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hafez, Mohammed; Dacles, Jennifer

    1989-01-01

    The numerical analysis of the incompressible Navier-Stokes equations is becoming an important tool in the understanding of some fluid flow problems which are encountered in research as well as in industry. With the advent of supercomputers, more realistic problems can be studied with a wider choice of numerical algorithms. An alternative formulation is presented for viscous incompressible flows. The incompressible Navier-Stokes equations are cast in a velocity/vorticity formulation. This formulation consists of solving the Poisson equations for the velocity components and the vorticity transport equation. Two numerical algorithms for steady two-dimensional laminar flows are presented. The first method is based on the actual partial differential equations. This uses a finite-difference approximation of the governing equations on a staggered grid. The second method uses a finite element discretization, with the vorticity transport equation approximated using a Galerkin approximation and the Poisson equations obtained using a least squares method. The equations are solved efficiently using Newton's method and a banded direct matrix solver (LINPACK). The method is extended to steady three-dimensional laminar flows and applied to a cubic driven cavity using finite difference schemes and a staggered grid arrangement on a Cartesian mesh. The equations are solved iteratively using a plane zebra relaxation scheme. Currently, a two-dimensional, unsteady algorithm is being developed using a generalized coordinate system. The equations are discretized using a finite-volume approach. This work will then be extended to three-dimensional flows.

  7. Numerical Analysis Of Three Component Induction Logging In Geothermal Reservoirs

    SciTech Connect

    Dr. David L. Alumbaugh

    2002-01-09

    This project supports the development of the ''Geo-Bilt'' geothermal electromagnetic-induction logging tool being built by ElectroMagnetic Instruments, Inc. The tool consists of three mutually orthogonal magnetic field antennas and three-component magnetic field receivers located at different distances from the source. In its current configuration, the source with its moment aligned along the borehole axis consists of a 1 m long solenoid, while the two trans-axial sources consist of 1 m by 8 cm loops of wire. The receivers are located 2 m and 5 m away from the center of the sources, and five frequencies from 2 kHz to 40 kHz are employed. This study numerically investigates (1) the effect of the borehole on the measurements, and (2) the sensitivity of the tool to fracture-zone geometries that might be encountered in a geothermal field. The results will lead to a better understanding of the data that the tool produces during its testing phase and an idea of the tool's limitations.

  8. The association between symbolic and nonsymbolic numerical magnitude processing and mental versus algorithmic subtraction in adults.

    PubMed

    Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert

    2016-03-01

    There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by including relevant arithmetical subskills, i.e., arithmetic fact knowledge, which is needed for complex calculation and is also known to depend on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge.

  9. The algorithm of measuring parameters of separate oil streams components

    NASA Astrophysics Data System (ADS)

    Kopteva, A. V.; Voytyuk, I. N.

    2017-02-01

    This paper describes a development in the area of non-contact measurement of moving flows, including mass flow, the number of components, and their mass ratios in a multicomponent flow, based on algorithms and functionals developed for various industries and production processes. The paper demonstrates that at the core of the proposed systems is the physical information field created in the cross-section of the moving flow by hard electromagnetic radiation. The substantiation and measurement of the information parameters are performed by the hardware and software of the automatic measuring system. A new statistical pulsation measurement method based on the radioisotope technique is described; it is an alternative to existing stream control methods and improves measurement accuracy. The basic formula underlying the method of calibration characteristic correction is shown.

  10. Algorithm Development and Application of High Order Numerical Methods for Shocked and Rapid Changing Solutions

    DTIC Science & Technology

    2007-12-06

    problems studied in this project involve numerically solving partial differential equations with either discontinuous or rapidly changing solutions ...discontinuous Galerkin finite element methods, for solving partial differential equations with discontinuous or rapidly changing solutions. Algorithm

  11. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M (with M not ≫ 1) firms and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^{M(M-1)/2}) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.
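
    The speed-up from the Toeplitz structure comes from a standard trick: a Toeplitz matrix can be embedded in a circulant matrix and applied to a vector with the FFT in O(n log n) rather than O(n²) operations. A minimal NumPy sketch of that generic building block follows (illustrative only; it is not Brumen's specific discretization):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Compute T @ x in O(n log n), where T is the Toeplitz matrix with
    first column c and first row r (c[0] == r[0]), by embedding T in a
    2n x 2n circulant matrix and multiplying via the FFT."""
    n = len(x)
    # First column of the circulant embedding: c, one zero, then r reversed.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, n=len(col)))
    return y[:n].real

# Quick check against the dense product.
c = np.array([2.0, -1.0, 0.5])
r = np.array([2.0, 1.0, 3.0])
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(c, r, x))
```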

  12. A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results

    NASA Astrophysics Data System (ADS)

    Carrano, Charles S.; Rino, Charles L.

    2016-06-01

    We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.

  13. Component-based Hydrologic and Landscape Evolution Models: Interoperability, Standards, and New Algorithms

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2010-12-01

The Community Surface Dynamics Modeling System (CSDMS) has an ever-growing collection of reusable, plug-and-play components for earth surface process modeling and this includes numerous components for spatial hydrologic and landscape evolution modeling. While components may represent any level of granularity from a simple function to a complete hydrologic model, the optimum level appears to be that of a particular physical process, such as infiltration, evaporation or snowmelt. It is at this level of complexity that researchers are most often interested in "swapping out" one method of modeling a process for another that differs in terms of required input, complexity, accuracy, or computational efficiency. CSDMS model components are designed for maximum reusability, and strict adherence to this simple-sounding goal has proven to be a powerful decider when choosing among a number of different design options. For example, it determines key aspects of a component's interface, and the need for each component to have or manage its own state variables, input files, output files and help files. As a result, each component can be used either as a stand-alone "submodel" or as a component in some larger model. Components do not, however, need to be written in the same language because the CSDMS project employs a powerful language-interoperability tool called Babel. The purpose of this talk is to share a few lessons learned from the CSDMS project, to provide an overview of the many components that are currently available, and to briefly present performance results from a new fluvial landscape evolution algorithm.

  14. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    NASA Astrophysics Data System (ADS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-11-01

Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process, and we modify the algorithm to enable the labelling of gray-scale images. Because of the data dependencies in the algorithm, we used a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 on images of 256 × 256 pixels using 4 processing threads.
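
    For readers unfamiliar with the sequential baseline being parallelized, a classic two-scan connected component labeling with union-find over label equivalences looks roughly like the Python sketch below (binary, 4-connected case; the pipelined parallelization and the gray-scale extension that are the paper's contribution are not shown):

```python
import numpy as np

def label_components(img):
    """Two-pass, 4-connected component labeling of a binary image with
    union-find over label equivalences (sequential baseline only)."""
    img = np.asarray(img)
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    for i in range(h):                      # first scan: provisional labels
        for j in range(w):
            if not img[i, j]:
                continue
            up = labels[i - 1, j] if i > 0 and img[i - 1, j] else 0
            left = labels[i, j - 1] if j > 0 and img[i, j - 1] else 0
            if up and left:
                labels[i, j] = min(up, left)
                union(up, left)             # record the equivalence
            elif up or left:
                labels[i, j] = up or left
            else:
                parent[next_label] = next_label
                labels[i, j] = next_label
                next_label += 1
    for i in range(h):                      # second scan: final labels
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels
```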

  15. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
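
    The U-D factorization at the heart of the Bierman-Thornton filter writes a covariance matrix as P = U D Uᵀ, with U unit upper triangular and D diagonal, and then propagates U and D instead of P itself. A minimal NumPy sketch of the factorization step alone (the measurement and time-update recursions of the full filter are omitted):

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite matrix as P = U @ diag(d) @ U.T,
    with U unit upper triangular (the U-D decomposition used by the
    Bierman-Thornton filter)."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):          # work from the last column back
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Remove this column's contribution from the leading submatrix.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

P = np.array([[4.0, 2.0], [2.0, 3.0]])
U, d = ud_factorize(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)
```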

  16. An Adaptive Cauchy Differential Evolution Algorithm for Global Numerical Optimization

    PubMed Central

    Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

    2013-01-01

Appropriately adapting control parameters such as the scaling factor (F), the crossover rate (CR), and the population size (NP) is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, properly adapting them to a given problem remains a challenging task. In this paper, we present an adaptive parameter control DE algorithm in which each individual has its own control parameters. The control parameters of each individual are adapted, using the Cauchy distribution, around the average parameter value of successfully evolved individuals. In this way, each individual's control parameters are assigned either near the average parameter value or far from it, the latter possibly being a better parameter value for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems. PMID:23935445

  17. An adaptive Cauchy differential evolution algorithm for global numerical optimization.

    PubMed

    Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

    2013-01-01

Appropriately adapting control parameters such as the scaling factor (F), the crossover rate (CR), and the population size (NP) is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, properly adapting them to a given problem remains a challenging task. In this paper, we present an adaptive parameter control DE algorithm in which each individual has its own control parameters. The control parameters of each individual are adapted, using the Cauchy distribution, around the average parameter value of successfully evolved individuals. In this way, each individual's control parameters are assigned either near the average parameter value or far from it, the latter possibly being a better parameter value for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems.
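
    As a rough illustration of the scheme described in both records above, the sketch below runs DE/rand/1/bin with each individual's F and CR resampled from Cauchy distributions centered on the mean of the parameter values that produced successful offspring in the previous generation. The population size, the 0.1 Cauchy scale, and the clipping ranges are assumptions made for the sketch, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_de(f, bounds, pop_size=30, gens=200):
    """DE/rand/1/bin with per-individual F and CR adapted via Cauchy
    sampling around the mean of recently successful parameter values."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    for _ in range(gens):
        ok_F, ok_CR = [], []
        for i in range(pop_size):
            idx = rng.choice([k for k in range(pop_size) if k != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F[i] * (b - c), lo, hi)
            cross = rng.random(dim) < CR[i]
            cross[rng.integers(dim)] = True   # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:              # selection: keep improvements
                pop[i], fit[i] = trial, f_trial
                ok_F.append(F[i]); ok_CR.append(CR[i])
        # Adapt: Cauchy samples around the mean successful parameter values.
        mF = np.mean(ok_F) if ok_F else 0.5
        mCR = np.mean(ok_CR) if ok_CR else 0.9
        F = np.clip(mF + 0.1 * rng.standard_cauchy(pop_size), 0.1, 1.0)
        CR = np.clip(mCR + 0.1 * rng.standard_cauchy(pop_size), 0.0, 1.0)
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: minimize the sphere function in 5 dimensions.
x_best, f_best = cauchy_de(lambda x: float(np.sum(x * x)), [(-5, 5)] * 5)
```

    The heavy tails of the Cauchy distribution are the point of the design: most individuals receive parameters near the currently successful average, while occasional far-off samples keep exploring alternative settings.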

  18. Numerical Algorithms and Mathematical Software for Linear Control and Estimation Theory.

    DTIC Science & Technology

    1985-05-30

Final report for ARO Grant DAAG29-82-K-0028, "Numerical Algorithms and Mathematical Software for Linear Control and Estimation Theory," Massachusetts Institute of Technology, Cambridge, covering the period December 14, 1981 through December 13, 1984.

  19. Analysis of the distribution of pitch angles in model galactic disks - Numerical methods and algorithms

    NASA Technical Reports Server (NTRS)

    Russell, William S.; Roberts, William W., Jr.

    1993-01-01

    An automated mathematical method capable of successfully isolating the many different features in prototype and observed spiral galaxies and of accurately measuring the pitch angles and lengths of these individual features is developed. The method is applied to analyze the evolution of specific features in a prototype galaxy exhibiting flocculent spiral structure. The mathematical-computational method was separated into two components. Initially, the galaxy was partitioned into dense regions constituting features using two different methods. The results obtained using these two partitioning algorithms were very similar, from which it is inferred that no numerical biasing was evident and that capturing of the features was consistent. Standard least-squares methods underestimated the true slope of the cloud distribution and were incapable of approximating an orientation of 45 deg. The problems were overcome by introducing a superior fit least-squares method, developed with the intention of calculating true orientation rather than a regression line.
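
    The 45-degree failure of ordinary least squares arises because regressing y on x minimizes only vertical residuals, which biases the slope low and cannot represent steep or diagonal features. A "superior fit" in the sense described can be obtained from total least squares, i.e., the principal axis of the point cloud; the authors' exact estimator may differ from this sketch:

```python
import numpy as np

def orientation_deg(x, y):
    """Orientation of a 2-D point cloud by total least squares: the
    principal axis of the centered points, from the SVD. Unlike the
    ordinary regression of y on x, it is unbiased at 45 degrees and
    handles near-vertical features."""
    pts = np.column_stack([x - np.mean(x), y - np.mean(y)])
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    vx, vy = vt[0]                  # direction of maximum variance
    return np.degrees(np.arctan2(vy, vx)) % 180.0

# A cloud scattered around a 45-degree line is recovered correctly.
t = np.linspace(0, 1, 200)
print(orientation_deg(t + 0.01 * np.random.randn(200),
                      t + 0.01 * np.random.randn(200)))  # ~45.0
```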

  20. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  1. An application of fast algorithms to numerical electromagnetic modeling

    SciTech Connect

    Bezvoda, V.; Segeth, K.

    1987-03-01

    Numerical electromagnetic modeling by the finite-difference or finite-element methods leads to a large sparse system of linear algebraic equations. Fast direct methods, requiring an order of at most q log q arithmetic operations to solve a system of q equations, cannot easily be applied to such a system. This paper describes the iterative application of a fast method, namely cyclic reduction, to the numerical solution of the Helmholtz equation with a piecewise constant imaginary coefficient of the absolute term in a plane domain. By means of numerical tests the advantages and limitations of the method compared with classical direct methods are discussed. The iterative application of the cyclic reduction method is very efficient if one can exploit a known solution of a similar (e.g., simpler) problem as the initial approximation. This makes cyclic reduction a powerful tool in solving the inverse problem by trial-and-error.

  2. An efficient numerical algorithm for transverse impact problems

    NASA Technical Reports Server (NTRS)

    Sankar, B. V.; Sun, C. T.

    1985-01-01

Transverse impact problems in which elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which in practice is usually solved by an iterative scheme with small time increments. In this paper, a numerical method is proposed in which the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedure simpler and more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.

  3. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  4. Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Thornton, C. L.

    1976-01-01

    Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  5. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are also presented, along with examples of results obtained with the most recent algorithm developments.

  6. A numeric comparison of variable selection algorithms for supervised learning

    NASA Astrophysics Data System (ADS)

    Palombo, G.; Narsky, I.

    2009-12-01

Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents the information in the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, for instance the imaging gamma-ray Cherenkov telescope (MAGIC) data found in the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community ( http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by the various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ("Add N Remove R") implemented in SPR.
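
    A stripped-down wrapper selection in the spirit of "Add N Remove R" (here with N = 1 and R = 0, i.e., plain sequential forward selection) might look as follows; the scikit-learn classifier and 3-fold cross-validation are illustrative assumptions, not SPR's implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, n_keep=5):
    """Greedy sequential forward selection: repeatedly add the variable
    that most improves the cross-validated accuracy of a fixed classifier."""
    chosen = []
    remaining = list(range(X.shape[1]))
    while remaining and len(chosen) < n_keep:
        scores = {
            j: cross_val_score(LogisticRegression(max_iter=1000),
                               X[:, chosen + [j]], y, cv=3).mean()
            for j in remaining
        }
        best = max(scores, key=scores.get)  # variable with best CV score
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

    The wrapper character (retraining the classifier for every candidate subset) is what makes this family of methods accurate but slow, as the abstract reports.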

  7. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Voigt, Robert G.; Romine, Charles H.

    1988-01-01

    This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.

  8. A bibliography on parallel and vector numerical algorithms

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1987-01-01

    This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.

  9. Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.

    ERIC Educational Resources Information Center

    Jacquot, Raymond G.; And Others

    1985-01-01

    Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
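
    The Gaver-Stehfest method approximates f(t) from real-axis samples of the transform, f(t) ≈ (ln 2 / t) Σₖ Vₖ F(k ln 2 / t), with weights Vₖ whose magnitudes grow rapidly with the (even) number of terms N; that growth is precisely why finite computer word length caps the usable N, as the article discusses. A compact Python sketch:

```python
import math

def gaver_stehfest(F, t, N=12):
    """Invert a Laplace transform F(s) at time t via Gaver-Stehfest.
    N must be even; the weights V_k alternate in sign and grow with N,
    so double precision limits N to roughly 12-16."""
    M = N // 2
    ln2 = math.log(2.0)
    result = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            Vk += (j ** M * math.factorial(2 * j)) / (
                math.factorial(M - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        result += (-1) ** (k + M) * Vk * F(k * ln2 / t)
    return result * ln2 / t

# Example: F(s) = 1/(s + 1) inverts to f(t) = exp(-t); at t = 1, ~0.3679.
print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), 1.0))
```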

  10. A simple and efficient algorithm for connected component labeling in color images

    NASA Astrophysics Data System (ADS)

    Celebi, M. Emre

    2012-03-01

    Connected component labeling is a fundamental operation in binary image processing. A plethora of algorithms have been proposed for this low-level operation with the early ones dating back to the 1960s. However, very few of these algorithms were designed to handle color images. In this paper, we present a simple algorithm for labeling connected components in color images using an approximately linear-time seed fill algorithm. Experiments on a large set of photographic and synthetic images demonstrate that the proposed algorithm provides fast and accurate labeling without requiring excessive stack space.
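
    A seed-fill labeler for color images can be sketched in a few lines: take each unlabeled pixel as a seed and flood its 4-connected, identically colored neighbors. The version below uses an explicit queue, sidestepping the recursion-depth and stack-space concerns the abstract mentions; it is a generic illustration rather than the authors' optimized approximately linear-time variant:

```python
from collections import deque
import numpy as np

def label_color_regions(img):
    """Label 4-connected regions of identical color using an iterative
    seed fill; works for 2-D gray images and H x W x 3 color images."""
    img = np.asarray(img)
    h, w = img.shape[:2]
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for si in range(h):
        for sj in range(w):
            if labels[si, sj]:
                continue                      # already labeled
            current += 1
            color = img[si, sj]
            labels[si, sj] = current
            queue = deque([(si, sj)])
            while queue:                      # flood the region from the seed
                i, j = queue.popleft()
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if (0 <= ni < h and 0 <= nj < w and not labels[ni, nj]
                            and np.array_equal(img[ni, nj], color)):
                        labels[ni, nj] = current
                        queue.append((ni, nj))
    return labels
```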

  11. Fourier analysis of numerical algorithms for the Maxwell equations

    NASA Technical Reports Server (NTRS)

    Liu, Yen

    1993-01-01

    The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, gridspacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce less errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.

  12. A New Efficient Algorithm for the All Sorting Reversals Problem with No Bad Components.

    PubMed

    Wang, Biing-Feng

    2016-01-01

The problem of finding all reversals that take a permutation one step closer to a target permutation is called the all sorting reversals problem (the ASR problem). For this problem, Siepel had an O(n³)-time algorithm. Most complications of his algorithm stem from some peculiar structures called bad components. Since bad components are very rare in both real and simulated data, it is practical to study the ASR problem with no bad components. For the ASR problem with no bad components, Swenson et al. gave an O(n²)-time algorithm. Very recently, Swenson found that their algorithm does not always work. In this paper, a new algorithm is presented for the ASR problem with no bad components. The time complexity is O(n²) in the worst case and is linear in the size of the input and output in practice.

  13. A very fast algorithm for simultaneously performing connected-component labeling and euler number computing.

    PubMed

    He, Lifeng; Chao, Yuyan

    2015-09-01

Labeling connected components and calculating the Euler number in a binary image are two fundamental processes for computer vision and pattern recognition. This paper presents an ingenious method for identifying a hole in a binary image in the first scan of connected-component labeling. Our algorithm can perform connected-component labeling and Euler number computation simultaneously, and it can also calculate the number of connected components (objects) and the number of holes efficiently. The additional cost for calculating the number of holes is only O(H), where H is the number of holes in the image. Our algorithm can be implemented almost in the same way as a conventional equivalent-label-set-based connected-component labeling algorithm. We prove the correctness of our algorithm and use experimental results on various kinds of images to demonstrate its power.
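
    The object/hole bookkeeping can be sanity-checked against a naive two-pass reference: label the foreground for the object count, label the padded background for the hole count, and use Euler number = objects minus holes. A SciPy-based sketch of that reference computation (the paper's point is that both counts can instead be obtained in a single labeling scan):

```python
import numpy as np
from scipy import ndimage

def euler_number(img):
    """Euler number = number of objects minus number of holes for a
    binary image (8-connected objects, 4-connected holes), computed
    naively with two labeling passes."""
    img = np.asarray(img, dtype=int)
    _, n_obj = ndimage.label(img, structure=np.ones((3, 3)))
    # Pad with background so all outside background merges into one region.
    padded = np.pad(1 - img, 1, constant_values=1)
    four_conn = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
    _, n_bg = ndimage.label(padded, structure=four_conn)
    n_holes = n_bg - 1          # every background region but the outer one
    return n_obj - n_holes
```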

  14. An efficient run-based connected-component labeling algorithm for three-dimensional binary images

    NASA Astrophysics Data System (ADS)

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji; Tang, Wei; Shi, Zhenghao; Nakamura, Tsuyoshi

    2010-08-01

This paper presents an efficient run-based, label-equivalence-based connected-component labeling algorithm for three-dimensional binary images. Instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we also use the run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our algorithm is much more efficient than conventional three-dimensional labeling algorithms.

  15. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

    NASA Astrophysics Data System (ADS)

    Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

    2008-07-01

The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of the shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full, or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium-sized aneurysms.

  16. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.

  17. A direct numerical reconstruction algorithm for the 3D Calderón problem

    NASA Astrophysics Data System (ADS)

    Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

    2011-04-01

In three dimensions, Calderón's problem was addressed and solved in theory in the 1980s in a series of papers, but the numerical implementation of the algorithm was initiated only recently. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation and a (non-physical) scattering transform. The resulting reconstruction algorithm is in principle direct and addresses the full non-linear problem immediately. In this paper we outline the theoretical reconstruction method and describe how it can be implemented numerically. We give three different implementations and compare their performance on a numerical phantom.

  18. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compare its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and

  19. A stable and efficient numerical algorithm for unconfined aquifer analysis

    SciTech Connect

    Keating, Elizabeth; Zyvoloski, George

    2008-01-01

The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well.

  20. A stable and efficient numerical algorithm for unconfined aquifer analysis.

    PubMed

    Keating, Elizabeth; Zyvoloski, George

    2009-01-01

The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well.

  1. A novel wavefront-based algorithm for numerical simulation of quasi-optical systems

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoling; Lou, Zheng; Hu, Jie; Zhou, Kangmin; Zuo, Yingxi; Shi, Shengcai

    2016-11-01

A novel wavefront-based algorithm for the beam simulation of both reflective and refractive optics in a complicated quasi-optical system is proposed. The algorithm can be regarded as an extension of the conventional Physical Optics algorithm to handle dielectrics. Internal reflections are modeled in an accurate fashion, and coatings and lossy materials can be treated in a straightforward manner. A parallel implementation of the algorithm has been developed, and numerical examples show that the algorithm yields sufficient accuracy compared with experimental results, while its computational complexity is much lower than that of full-wave methods. The algorithm offers an alternative to Geometrical Optics modeling and full-wave methods for the modeling of quasi-optical systems.

  2. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    NASA Astrophysics Data System (ADS)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object and tolerate noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address the above problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise, and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.

  3. Parallel algorithms for geometric connected component labeling on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Belkhale, K. P.; Banerjee, P.

    1992-01-01

Different algorithms for the geometric connected component labeling (GCCL) problem are defined, each of which involves d stages of message passing for a d-dimensional hypercube. The major idea is that in each stage a hypercube multiprocessor increases its knowledge of the domain. The algorithms under consideration include the QUAD algorithm for a small number of processors and the Overlap Quad algorithm for a large number of processors, subject to the locality of the connected sets. These algorithms differ in their run time, memory requirements, and message complexity. They were implemented on an Intel iPSC2/D4/MX hypercube.

  4. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of

  5. A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses orthogonal transformation techniques; measurement and time updating of the U-D factors involve separate applications of Gentleman's fast square-root-free Givens rotations. The numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square-root filtering with the efficiency of the original Kalman algorithm.

  6. Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method

    NASA Astrophysics Data System (ADS)

    Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian

    2015-06-01

We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave-boson approach, applicable to arbitrary boundary constraints on a high-dimensional objective function, by combining several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave-boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of our algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital-selective Mott phase, and magnetism. These results demonstrate the rapid convergence and robust stability of our algorithm in searching for optimized solutions of strongly correlated electron systems.

  7. Hardware acceleration based connected component labeling algorithm in real-time ATR system

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Zhi-yong

    2013-03-01

To meet the real-time processing requirements of a Real-Time Automatic Target Recognition (RTATR) system, this paper presents a hardware-accelerated two-scan connected-component labeling algorithm that combines the merits of conventional pixel-based and run-based algorithms. In the first scan, pixels are the scan unit while lines are the label unit: label equivalences are recorded while scanning the image pixel by pixel, and lines with provisional labels are output as the connected-component labeling result. The union-find algorithm is then used to resolve label equivalences and find the representative label for each provisional label after the first scan. The labels are replaced in the second scan to complete the connected-component labeling. Experiments on the RTATR platform demonstrate that the hardware-accelerated implementation achieves high performance and efficiency while consuming few resources, meets the demands of real-time processing, and offers good practicability.

  8. Multislice algorithms revisited: solving the Schrödinger equation numerically for imaging with electrons.

    PubMed

    Wacker, C; Schröder, R R

    2015-04-01

    For a long time, the high-energy approximation was sufficient for any image simulation in electron microscopy. This changed with the advent of aberration correctors that allow high-resolution imaging at low electron energies. To deal with this fact, we present a numerical solution of the exact Schrödinger equation that is novel in the field of electron microscopy. Furthermore, we investigate systematically the advantages and problems of several multislice algorithms, especially the real-space algorithms.

  9. Numerical study of variational data assimilation algorithms based on decomposition methods in atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Antokhin, Pavel

    2016-11-01

The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically for the case where emission inventories are missing but additional in situ indirect concentration measurements are available. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design avoids iterative processes and allows operation in real time. In numerical experiments we study the sensitivity of the data assimilation to the quantity and quality of the measurement data.

  10. Fast algorithms for numerical, conservative, and entropy approximations of the Fokker-Planck-Landau equation

    SciTech Connect

    Buet, C.; Cordier; Degond, P.; Lemou, M.

    1997-05-15

    We present fast numerical algorithms to solve the nonlinear Fokker-Planck-Landau equation in 3D velocity space. The discretization of the collision operator preserves the properties required by the physical nature of the Fokker-Planck-Landau equation, such as the conservation of mass, momentum, and energy, the decay of the entropy, and the fact that the steady states are Maxwellians. At the end of this paper, we give numerical results illustrating the efficiency of these fast algorithms in terms of accuracy and CPU time. 20 refs., 7 figs.

  11. Efficient multi-value connected component labeling algorithm and its ASIC design

    NASA Astrophysics Data System (ADS)

    Sang, Hongshi; Zhang, Jing; Zhang, Tianxu

    2007-12-01

An efficient connected component labeling algorithm for multi-value images is proposed in this paper. The algorithm is simple and regular, making it suitable for hardware design. A one-dimensional array is used to store equivalence pairs. The record organization of the equivalence table makes it easy to find the minimum equivalent label and reduces the time spent processing the equivalence table. A pipelined architecture of the algorithm is described to enhance system performance.

  12. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  13. An improved algorithm for labeling connected components in a binary image

    NASA Astrophysics Data System (ADS)

    Yang, Xue D.

    1989-03-01

In this note, we present an improvement of Schwartz, Sharir and Siegel's algorithm for labeling the connected components of a binary image. Our algorithm uses the same bracket-marking mechanism as the original algorithm to associate equivalent groups. The main improvement is that it reduces the three scans per line required by the original algorithm's first pass to a single scan, using a recursive group-boundary dynamic tracking technique, while keeping the per-pixel computation during the scan constant time. This algorithm is fast enough to handle images in real time and simple enough to allow an easy and very economical hardware implementation.

  14. A Fast Multi-Object Extraction Algorithm Based on Cell-Based Connected Components Labeling

    NASA Astrophysics Data System (ADS)

    Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku

We describe a cell-based connected component labeling algorithm that calculates the 0th and 1st moment features as attributes of the labeled regions; these indicate region sizes and positions for multi-object extraction. Based on the additivity of moment features, the cell-based labeling algorithm labels divided cells of a certain size by scanning the image only once, obtaining the moment features of the labeled regions with remarkably reduced computational complexity and memory consumption. Our algorithm is a simple one-time-scan cell-based labeling algorithm, which is suitable for hardware and parallel implementation. We also compared it with conventional labeling algorithms. The experimental results showed that our algorithm is faster than conventional raster-scan labeling algorithms.

  15. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration, most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to its forward and backward recurrences. To utilize processors during this time, we propose to use them either for non-local, data-independent computations, solving lines in the next spatial direction, or for local, data-dependent computations by the Runge-Kutta method. To achieve this goal, processor communication and computations are controlled by a static schedule. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
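
    For reference, the serial Thomas algorithm that creates the pipeline bottleneck is just two data-dependent sweeps, a forward elimination recurrence followed by back substitution; every step depends on the previous one, which is why naive pipelining leaves processors idle. A NumPy sketch:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the diagonal, c the super-diagonal (c[-1] unused), d the RHS.
    Forward elimination then back substitution, O(n) total."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward recurrence
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # backward recurrence
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```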

  16. Numerical nonwavefront-guided algorithm for expansion or recentration of the optical zone

    NASA Astrophysics Data System (ADS)

    Arba Mosquera, Samuel; Verma, Shwetabh

    2014-08-01

    Complications may arise due to the decentered ablations during refractive surgery, resulting from human or mechanical errors. Decentration may cause over-/under-corrections, with patients complaining about seeing glares and halos after the procedure. Customized wavefront-guided treatments are often used to design retreatment procedures. However, due to the limitations of wavefront sensors in precisely measuring very large aberrations, some extreme cases may suffer when retreated with wavefront-guided treatments. We propose a simple and inexpensive numerical (nonwavefront-guided) algorithm to recenter the optical zone (OZ) and to correct the refractive error with minimal tissue removal. Due to its tissue-saving capabilities, this method can benefit patients with critical residual corneal thickness. Based on the reconstruction of ablation achieved in the first surgical procedure, we calculate a target ablation (by manipulating the achieved OZ) with adequate centration and an OZ sufficient enough to envelope the achieved ablation. The net ablation map for the retreatment procedure is calculated from the achieved and target ablations and is suitable to expand, recenter, and modulate the lower-order refractive components in a retreatment procedure. The results of our simulations suggest minimal tissue removal with OZ centration and expansion. Enlarging the OZ implies correcting spherical aberrations, whereas inducing centration implies correcting coma. This method shows the potential to improve visual outcomes in extreme cases of retreatment, possibly serving as an uncomplicated and inexpensive alternative to wavefront-guided retreatments.

Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the preciseness of the result decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.

  18. Numerical convergence and interpretation of the fuzzy c-shells clustering algorithm.

    PubMed

    Bezdek, J C; Hathaway, R J

    1992-01-01

R. N. Dave's (1990) version of fuzzy c-shells is an iterative clustering algorithm which requires the application of Newton's method or a similar general optimization technique at each half step in any sequence of iterates for minimizing the associated objective function. An important computational question concerns the accuracy of the solution required at each half step within the overall iteration. The general convergence theory for grouped coordinate minimization is applied to this question to show that numerically exact solution of the half-step subproblems in Dave's algorithm is not necessary: one iteration of Newton's method in each coordinate minimization half step yields a sequence with convergence properties equivalent to those of a sequence obtained using numerically exact coordinate minimization at each half step. It is also shown that fuzzy c-shells generates hyperspherical prototypes for the clusters it finds in certain special cases of the measure of dissimilarity used.

  19. Optimization of an algorithm for measurements of velocity vector components using a three-wire sensor.

    PubMed

    Ligeza, P; Socha, K

    2007-10-01

    Hot-wire measurements of velocity vector components use a sensor with three orthogonal wires, taking advantage of an anisotropic effect of wire sensitivity. The sensor is connected to a three-channel anemometric circuit and a data acquisition and processing system. Velocity vector components are obtained from measurement signals, using a modified algorithm for measuring velocity vector components enabling the minimization of measurement errors described in this paper. The standard deviation of the relative error was significantly reduced in comparison with the classical algorithm.

  20. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.

  1. Variational Bayesian approximation with scale mixture prior for inverse problems: A numerical comparison between three algorithms

    NASA Astrophysics Data System (ADS)

    Gharsalli, Leila; Mohammad-Djafari, Ali; Fraysse, Aurélia; Rodet, Thomas

    2013-08-01

Our aim is to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA). We take sparsity into account via a scale mixture prior, more precisely a Student-t model. The joint posterior of the unknowns and the hidden variables of the mixture is approximated via the VBA. Classically, this approximation is computed with the alternating algorithm, but this method is not the most efficient. Recently, other optimization algorithms have been proposed; in particular, classical iterative optimization algorithms such as the steepest descent method and the conjugate gradient have been studied in the space of the probability densities involved in the Bayesian methodology to treat this problem. The main object of this work is to present these three algorithms and a numerical comparison of their performances.

  2. Numerical investigation of acoustic field in enclosures: Evaluation of active and reactive components of sound intensity

    NASA Astrophysics Data System (ADS)

    Meissner, Mirosław

    2015-03-01

    The paper focuses on a theoretical description and numerical evaluation of active and reactive components of sound intensity in enclosed spaces. As the study was dedicated to low-frequency room responses, a modal expansion of the sound pressure was used. Numerical simulations have shown that the presence of energy vortices whose size and distribution depend on the character of the room response is a distinctive feature of the active intensity field. When several modes with frequencies close to a source frequency are excited, the vortices within the room are positioned irregularly. However, if the response is determined by one or two dominant modes, a regular distribution of vortices in the room can be observed. The irrotational component of the active intensity was found using the Helmholtz decomposition theorem. As was evidenced by numerical simulations, the suppression of the vortical flow of sound energy in the nearfield permits obtaining a clear image of the sound source.

  3. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
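
    For orientation, the constant-conditions building block that PolyPole-1 starts from is the classic eigenfunction-series solution of the diffusion equation in a spherical grain. A minimal sketch with illustrative parameter values follows (this is not the authors' code, which adds polynomial corrective terms for time-varying conditions):

    ```python
    import numpy as np

    def fractional_release(D, a, t, n_modes=200):
        """Analytic modal (eigenfunction-series) solution for diffusion out of
        a spherical grain of radius a under constant conditions:
            f(t) = 1 - (6/pi**2) * sum_n exp(-n**2 pi**2 D t / a**2) / n**2
        This is the constant-conditions kernel; PolyPole-1 adds polynomial
        corrective terms to handle time-varying conditions (not shown)."""
        n = np.arange(1, n_modes + 1)
        tau = D * t / a**2                 # dimensionless time
        return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-(n * np.pi)**2 * tau) / n**2)

    # Illustrative values: D in m^2/s, grain radius in m, time in s.
    print(fractional_release(D=1e-19, a=5e-6, t=3.15e7))
    ```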

  4. Stochastic models and numerical algorithms for a class of regulatory gene networks.

    PubMed

    Fournier, Thomas; Gabriel, Jean-Pierre; Pasquier, Jerôme; Mazza, Christian; Galbete, José; Mermod, Nicolas

    2009-08-01

    Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
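
    The specific network studied is not spelled out in the abstract, but the Gillespie algorithm underlying the analysis is standard. Below is a minimal, hedged sketch of the stochastic simulation of a two-state (ON/OFF) gene with protein production and degradation; all rate names and values are illustrative, and a truly self-regulated gene would additionally make the switching rates depend on the protein copy number:

    ```python
    import numpy as np

    def gillespie_gene(k_on, k_off, k_prod, k_deg, t_end, seed=0):
        """Minimal Gillespie SSA for a two-state gene: OFF/ON switching,
        protein production while ON, first-order protein degradation.
        A self-regulated gene would make k_on/k_off functions of the
        protein copy number; that feedback is omitted for brevity."""
        rng = np.random.default_rng(seed)
        t, gene_on, protein = 0.0, 0, 0
        times, counts = [0.0], [0]
        while t < t_end:
            rates = np.array([
                k_on * (1 - gene_on),    # gene activation
                k_off * gene_on,         # gene deactivation
                k_prod * gene_on,        # protein production
                k_deg * protein,         # protein degradation
            ])
            total = rates.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)     # time to next reaction
            r = rng.choice(4, p=rates / total)    # which reaction fires
            if r == 0:
                gene_on = 1
            elif r == 1:
                gene_on = 0
            elif r == 2:
                protein += 1
            else:
                protein -= 1
            times.append(t)
            counts.append(protein)
        return np.array(times), np.array(counts)

    t, p = gillespie_gene(k_on=0.5, k_off=0.5, k_prod=10.0, k_deg=0.1, t_end=100.0)
    print(p.mean())   # long-run mean near k_prod * P(on) / k_deg = 50
    ```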

  5. A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients

    SciTech Connect

    Alex, Arne; Delft, Jan von; Kalus, Matthias; Huckleberry, Alan

    2011-02-15

    We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
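
    The authors' algorithm handles general SU(N) via Gelfand-Tsetlin patterns, with their code given in the paper's appendix. For the SU(2) special case that the exposition presumes, results can be cross-checked against SymPy's symbolic Clebsch-Gordan evaluator:

    ```python
    # SymPy evaluates SU(2) Clebsch-Gordan coefficients <j1 m1; j2 m2 | j3 m3>
    # symbolically; useful as a cross-check of the SU(2) special case of a
    # general SU(N) code.
    from sympy import S
    from sympy.physics.quantum.cg import CG

    # Couple two spin-1/2 states to total spin 1, m = 0:
    # <1/2 1/2; 1/2 -1/2 | 1 0> = 1/sqrt(2)
    c = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0)
    print(c.doit())   # sqrt(2)/2
    ```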

  6. Color interpolation algorithm of CCD based on green components and signal correlation

    NASA Astrophysics Data System (ADS)

    Liang, Xiaofen; Qiao, Weidong; Yang, Jianfeng; Xue, Bin; Qin, Jia

    2013-09-01

    Single-sensor CCD/CMOS cameras capture image information by covering the sensor surface with a color filter array (CFA). For each pixel, only one of the three primary colors (red, green, or blue) passes through the CFA; the other two missing color components must be estimated from the values of the surrounding pixels. In the Bayer array, green samples make up half of the total pixels, while red and blue each make up a quarter, so the green channel carries more information and can serve as a reference for interpolating the red and blue channels. Based on this principle, this paper proposes a simple and effective color interpolation algorithm for Bayer pattern images based on green components and signal correlation. The first step interpolates the R, G and B components by bilinear interpolation. The second step revises the bilinear results by adding green-component correction terms, whose values account for the correlation between the three channels. The paper makes two main contributions: the G component is demosaicked more precisely, and the spectral-spatial correlations between the three color channels are taken into consideration. Finally, MATLAB simulation experiments produced test images and quantitative Peak Signal-to-Noise Ratio (PSNR) data for performance evaluation. The simulation results show that, compared with other color interpolation algorithms, the proposed algorithm performs well in both visual perception and PSNR measurement, and it does so without increasing computational complexity, preserving the real-time capability of the system. Theory and experiments show the method is reasonable and of practical engineering significance.
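
    The first step of the proposed method is plain bilinear demosaicing; a compact sketch of that step for an assumed 'RGGB' Bayer layout follows (the green-based correction step, the paper's actual contribution, is not reproduced here, and all names are illustrative):

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(raw):
        """Bilinear interpolation of an 'RGGB' Bayer mosaic (first step of
        the paper's method only). raw: 2D array of sensor values."""
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask

        # At a sampled pixel these kernels return the value itself; at a
        # missing pixel they average the 2 or 4 nearest samples.
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        g = convolve(raw * g_mask, k_g, mode='mirror')
        r = convolve(raw * r_mask, k_rb, mode='mirror')
        b = convolve(raw * b_mask, k_rb, mode='mirror')
        return np.stack([r, g, b], axis=-1)

    rgb = bilinear_demosaic(np.random.rand(8, 8))
    print(rgb.shape)   # (8, 8, 3)
    ```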

  7. Recent numerical and algorithmic advances within the volume tracking framework for modeling interfacial flows

    DOE PAGES

    François, Marianne M.

    2015-05-28

    A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and the modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagrams, curvature estimation via heights and mean values, and the balanced-force algorithm for surface tension are highlighted.

  8. Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature

    SciTech Connect

    Bramble, J.H. (Dept. of Mathematics); Goldstein, C.I.; Pasciak, J.E. (Applied Mathematics Dept.)

    1994-05-01

    The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity-free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing step per grid. These results hold even for applications without full elliptic regularity, e.g., a domain in R² with a crack.
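
    To make the V-cycle structure concrete, here is a self-contained sketch for the 1D Poisson problem with one damped-Jacobi smoothing sweep per grid, full-weighting restriction and linear prolongation (a generic textbook cycle, not the quadrature-based forms analyzed in the paper):

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=1, omega=2.0 / 3.0):
        """Damped Jacobi sweeps for the 1D Poisson problem -u'' = f."""
        for _ in range(sweeps):
            u[1:-1] += omega * ((u[:-2] + u[2:] + h * h * f[1:-1]) / 2.0 - u[1:-1])
        return u

    def v_cycle(u, f, h, n_smooth=1):
        """One V-cycle with n_smooth pre-/post-smoothing sweeps per grid."""
        if len(u) <= 3:
            u[1] = (u[0] + u[2] + h * h * f[1]) / 2.0   # exact coarsest solve
            return u
        u = smooth(u, f, h, n_smooth)
        r = np.zeros_like(u)                            # residual f - A u
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        r_c = np.zeros((len(u) + 1) // 2)               # full-weighting restriction
        r_c[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
        e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h, n_smooth)
        e = np.zeros_like(u)                            # linear prolongation
        e[::2] = e_c
        e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
        u += e
        return smooth(u, f, h, n_smooth)

    n = 129                                             # 2**7 + 1 grid points
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)                    # exact solution sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))        # ~ discretization error
    ```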

  9. Particle-In-Cell Multi-Algorithm Numerical Test-Bed

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Yu, P.; Tableman, A.; Decyk, V. K.; Mori, W. B.

    2015-11-01

    We describe a numerical test-bed that allows for the direct comparison of different numerical simulation schemes using only a single code. It is built from the UPIC Framework, which is a set of codes and modules for constructing parallel PIC codes. In this test-bed code, Maxwell's equations are solved in Fourier space in two dimensions. One can readily examine the numerical properties of a real space finite difference scheme by including its operators' Fourier space representations in the Maxwell solver. The fields can be defined at the same location in a simulation cell or can be offset appropriately by half-cells, as in the Yee finite difference time domain scheme. This allows for the accurate comparison of numerical properties (dispersion relations, numerical stability, etc.) across finite difference schemes, or against the original spectral scheme. We have also included different options for the charge and current deposits, including a strict charge conserving current deposit. The test-bed also includes options for studying the analytic time domain scheme, which eliminates numerical dispersion errors in vacuum. We will show examples from the test-bed that illustrate how the properties of some numerical instabilities vary between different PIC algorithms. Work supported by the NSF grant ACI 1339893 and DOE grant DE-SC0008491.
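
    As a flavor of the dispersion comparisons such a test-bed enables, the sketch below evaluates the numerical dispersion relation of the standard 1D Yee scheme against the exact vacuum relation ω = ck (a generic illustration, not UPIC test-bed code):

    ```python
    import numpy as np

    # Numerical dispersion of the 1D Yee (FDTD) scheme versus the exact
    # vacuum relation omega = c*k. In 1D the Yee scheme satisfies
    #   sin(omega*dt/2)/(c*dt) = sin(k*dx/2)/dx.
    c = 1.0
    dx = 1.0
    dt = 0.5 * dx / c                          # Courant number 0.5
    k = np.linspace(1e-6, np.pi / dx, 200)     # resolvable wavenumbers

    rhs = c * dt * np.sin(k * dx / 2.0) / dx
    omega_num = 2.0 / dt * np.arcsin(rhs)      # numerical frequency
    phase_err = omega_num / (c * k)            # < 1: grid waves lag light
    print(phase_err[0], phase_err[-1])         # ~1 at long waves, ~0.67 at Nyquist
    ```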

  10. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications, where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example, the presence of low volatility, high volatility, and stock market prices at a discount/premium to net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error, and the maximum improvement factor to quantify the success of the optimization process as a function of the number of initial parameter vectors.

  11. A universal framework for non-deteriorating time-domain numerical algorithms in Maxwell's electrodynamics

    NASA Astrophysics Data System (ADS)

    Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.

    2016-10-01

    We present the implementation of the lacunae method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing the Huygens' principle and the lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition; it eliminates the long-time deterioration regardless of its origin and of how it manifests itself. The lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed the development of an innovative numerical methodology for high-fidelity, error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetics codes.

  12. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process occurring in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained, for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. In order to speed up the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  13. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-size spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-size aberration terms, the precise phase aberration map is obtained and can be compensated by multiplying by its conjugate. Because the phase aberration is estimated from the limited but more relevant raw data, the compensation precision is improved and, at the same time, the computation time is significantly reduced. Experimental results demonstrate that the proposed technique achieves both high compensation accuracy and robustness compared with other developed compensation methods.
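
    A hedged sketch of the pipeline described (Fourier-domain masking, then extraction of the first principal component; all sizes and names are illustrative assumptions, not the authors' code) might look like:

    ```python
    import numpy as np

    def estimate_aberration(field, keep=32):
        """Illustrative sketch of PCA-based phase aberration estimation on a
        reduced spectrum. field: complex hologram reconstruction containing
        a smooth aberration phase."""
        # Fourier-domain mask: keep only a central low-frequency block,
        # where the slowly varying aberration resides.
        spec = np.fft.fftshift(np.fft.fft2(field))
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        small = spec[cy - keep // 2:cy + keep // 2, cx - keep // 2:cx + keep // 2]
        reduced = np.fft.ifft2(np.fft.ifftshift(small))

        # A separable phase phi(x, y) = f(y) + g(x) makes exp(i*phi) a rank-1
        # matrix, so the leading SVD pair (first principal component) carries
        # the aberration terms.
        U, s, Vt = np.linalg.svd(reduced)
        phase_y = np.unwrap(np.angle(U[:, 0]))
        phase_x = np.unwrap(np.angle(Vt[0, :]))
        # Reduced-size aberration map; the paper oversamples this back to the
        # full grid and compensates by multiplying with its conjugate.
        return phase_y[:, None] + phase_x[None, :]

    y, x = np.mgrid[-1:1:128j, -1:1:128j]
    aber = estimate_aberration(np.exp(1j * 3.0 * (x**2 + y**2)))
    print(aber.shape)   # (32, 32)
    ```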

  14. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming to address the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, requires no precision-losing transformation or approximation of the system model, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
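
    The offline block-matrix derivation itself is not given in the abstract, but the underlying idea, skipping the many zero blocks of a navigation filter's state-transition matrix during the covariance time update, can be illustrated generically (the state dimension, sparsity pattern, and names below are illustrative assumptions):

    ```python
    import numpy as np
    from scipy import sparse

    n = 60                                    # illustrative state dimension
    rng = np.random.default_rng(1)
    # An integrated-navigation transition matrix is mostly zeros; mimic a
    # block-sparse F: identity plus a small dense coupling block.
    F = np.eye(n)
    F[:15, :15] += 0.01 * rng.random((15, 15))
    P = np.eye(n)                             # covariance
    Q = 0.01 * np.eye(n)                      # process noise

    Fs = sparse.csr_matrix(F)                 # sparsity analyzed once, "offline"
    tmp = Fs @ P                              # F P   (sparse-dense product)
    P_fast = (Fs @ tmp.T).T + Q               # F P F^T + Q, skipping zero blocks
    P_ref = F @ P @ F.T + Q                   # classical dense time update
    print(np.allclose(P_fast, P_ref))         # same result, far fewer flops
    ```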

  15. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming to address the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, requires no precision-losing transformation or approximation of the system model, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  16. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    The genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands reaches the optimal solution more rapidly, effectively improves measurement accuracy, and reduces the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to the nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established in order to assess the feasibility of optimizing wave bands with a genetic algorithm, and model robustness was evaluated. The genetic algorithm selected 12 characteristic bands. With the reflectance values of the corn seedling component information at the spectral wavelengths corresponding to these 12 bands as variables, a PLS model for the SPAD values of the corn leaves was established, with r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing a corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
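
    The paper's GA settings are not reported, but the GA-plus-PLS band-selection loop can be sketched generically; everything below (synthetic spectra, population size, mutation rate, variable names) is an illustrative assumption:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_bands = 80, 120
    X = rng.random((n_samples, n_bands))                 # stand-in spectra
    y = X[:, 10] + 0.5 * X[:, 55] + 0.1 * rng.standard_normal(n_samples)

    def fitness(mask):
        """Cross-validated R^2 of a PLS model restricted to the selected bands."""
        if mask.sum() < 4:
            return -np.inf                               # too few bands for PLS
        pls = PLSRegression(n_components=3)
        return cross_val_score(pls, X[:, mask], y, cv=4, scoring="r2").mean()

    # Tiny GA over boolean band masks: truncation selection, uniform
    # crossover, bit-flip mutation.
    pop = [rng.random(n_bands) < 0.1 for _ in range(20)]
    for gen in range(15):
        pop.sort(key=fitness, reverse=True)
        parents, children = pop[:10], []
        for _ in range(10):
            i, j = rng.choice(10, size=2, replace=False)
            cross = rng.random(n_bands) < 0.5            # uniform crossover
            child = np.where(cross, parents[i], parents[j])
            children.append(child ^ (rng.random(n_bands) < 0.02))  # mutation
        pop = parents + children
    best = max(pop, key=fitness)
    print(np.flatnonzero(best), round(fitness(best), 3))
    ```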

  17. Parametric effects of CFL number and artificial smoothing on numerical solutions using implicit approximate factorization algorithm

    NASA Technical Reports Server (NTRS)

    Daso, E. O.

    1986-01-01

    An implicit approximate factorization algorithm is employed to quantify the parametric effects of Courant number and artificial smoothing on numerical solutions of the unsteady 3-D Euler equations for a windmilling propeller (low speed) flow field. The results show that propeller global or performance characteristics vary strongly with the Courant number and the artificial dissipation parameters, though the variation is much less severe at high Courant numbers. Candidate sets of Courant number and dissipation parameters could result in parameter-dependent solutions. Parameter-independent numerical solutions can be obtained if low values of the ratio of dissipation parameter to time step are used in the computations. Furthermore, it is found that too much artificial damping can degrade numerical stability. Finally, it is demonstrated that highly resolved meshes may, in some cases, delay convergence, thereby suggesting some optimum cell size for a given flow solution. It is suspected that improper boundary treatment may account for the cell size constraint.

  18. Coordinate Systems, Numerical Objects and Algorithmic Operations of Computational Experiment in Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily

    2016-02-01

    The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the developed algorithms by numerical procedures with natural parallelism. The paper examines the main objects and operations that allow one to manage computational experiments and monitor the status of the computation process. Special attention is given to a) the realization of tensor representations of numerical schemes for direct simulation; b) the representation of the motion of large particles of a continuous medium in two coordinate systems (global and mobile); c) computing operations in the projections of the coordinate systems, and the direct and inverse transformations between these systems. Particular attention is paid to the use of the hardware and software of modern computer systems.

  19. [Fetal electrocardiogram extraction based on independent component analysis and quantum particle swarm optimizer algorithm].

    PubMed

    Du, Yanqin; Huang, Hua

    2011-10-01

    Fetal electrocardiogram (FECG) is an objective index of fetal cardiac electrophysiological activity. The acquired FECG is contaminated by the maternal electrocardiogram (MECG), so how to extract the fetal ECG quickly and effectively has become an important research topic. Among non-invasive FECG extraction algorithms, independent component analysis (ICA) is considered the best method, but the convergence properties of the existing algorithms for obtaining the demixing matrix are unsatisfactory. Quantum particle swarm optimization (QPSO) is an intelligent optimization algorithm with global convergence. In order to extract the FECG signal effectively and quickly, we propose a method combining ICA and QPSO. The results show that this approach can extract the useful signal more clearly and accurately than other non-invasive methods.
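
    The ICA half of the pipeline is standard and easy to demonstrate; the sketch below separates two synthetic "maternal"/"fetal" square-wave sources with scikit-learn's FastICA, standing in for the paper's QPSO-driven ICA (all signals and parameters are illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Illustrative stand-in for abdominal recordings: a fast low-amplitude
    # "fetal" source mixed with a slower, stronger "maternal" source.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 4000)
    maternal = np.sign(np.sin(2 * np.pi * 1.2 * t))        # ~72 bpm
    fetal = 0.3 * np.sign(np.sin(2 * np.pi * 2.3 * t))     # ~138 bpm
    S = np.c_[maternal, fetal]
    A = np.array([[1.0, 0.6], [0.8, 1.0]])                 # mixing matrix
    X = S @ A.T + 0.02 * rng.standard_normal((len(t), 2))

    # Classical ICA separation; the paper replaces the internal optimization
    # with QPSO, which is not reproduced here.
    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(X)
    print(recovered.shape)                                  # (4000, 2)
    ```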

  20. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms--Subspace Learning Algorithm (SLA). Modification of the algorithm is based on Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On a faster time scale PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons will compete for fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it will be briefly analyzed how (or why) time-oriented hierarchical method can be used for transformation of any of the existing neural network PSA method, into PCA method.
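
    The baseline SLA that the paper modifies is Oja's symmetric subspace rule; a minimal sketch (illustrative learning rate and data, and without the TOHM two-time-scale rotation toward individual eigenvectors) is:

    ```python
    import numpy as np

    def subspace_learning(X, n_components, eta=0.01, epochs=50, seed=0):
        """Oja's Subspace Learning Algorithm (SLA): the symmetric update
        dW = eta * (x y^T - W y y^T), with y = W^T x, converges to an
        orthonormal basis of the principal subspace, not to the individual
        eigenvectors; the paper's TOHM modification adds a slower rotation
        toward the eigenvectors themselves."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        W = 0.1 * rng.standard_normal((n_features, n_components))
        for _ in range(epochs):
            for x in X:
                y = W.T @ x
                W += eta * (np.outer(x, y) - W @ np.outer(y, y))
        return W

    # Toy data with a dominant 2D subspace.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 5)) * np.array([3.0, 2.0, 0.3, 0.2, 0.1])
    W = subspace_learning(X, n_components=2)
    print(np.round(W.T @ W, 2))     # ~ identity: orthonormal subspace basis
    ```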

  1. Numerical algorithms for computations of feedback laws arising in control of flexible systems

    NASA Technical Reports Server (NTRS)

    Lasiecka, Irena

    1989-01-01

    Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks) with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions), with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of the appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on a basis of a priority analysis of the properties of the original, continuous (infinite diversional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.

  2. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for Blind Source Separation (BSS). BSS has become prevalent in several areas of work, including array processing, wireless and other communications, medical signal processing, speech processing, audio and acoustics, and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals, in software and in a hardware implementation on a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), FastICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm was required to have the least complexity and use the fewest resources while effectively separating the mixed sources; this was the EASI algorithm. The EASI ICA was then implemented in hardware on an FPGA to perform and analyze its performance in real time.
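
    For reference, the EASI update of Cardoso and Laheld is compact enough to sketch in a few lines; the learning rate, nonlinearity, initialization, and test signals below are illustrative assumptions, not the paper's FPGA design:

    ```python
    import numpy as np

    def easi(X, n_sources, lam=0.002, seed=0):
        """Sketch of the EASI update: per sample x,
            y = W x
            W <- W - lam * (y y^T - I + g(y) y^T - y g(y)^T) W
        with an elementwise nonlinearity g (cubic here)."""
        rng = np.random.default_rng(seed)
        W = np.eye(n_sources) + 0.1 * rng.standard_normal((n_sources, n_sources))
        I = np.eye(n_sources)
        for x in X:
            y = W @ x
            g = y**3
            W -= lam * (np.outer(y, y) - I + np.outer(g, y) - np.outer(y, g)) @ W
        return W

    # Two sub-Gaussian test sources, instantaneously mixed.
    rng = np.random.default_rng(1)
    n = 20000
    S = np.c_[np.sign(np.sin(0.05 * np.arange(n))), rng.uniform(-1, 1, n)]
    X = S @ np.array([[1.0, 0.5], [0.4, 1.0]]).T
    W = easi(X, n_sources=2)
    Y = X @ W.T          # recovered sources, up to order and scale
    ```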

  3. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, and the like, except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  4. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of, and not easily fall back into, local optima and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution.

  5. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of, and not easily fall back into, local optima and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  6. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

    PubMed

    Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

    2016-12-01

    Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.

  7. Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD

    SciTech Connect

    Luz, Fernando H. P.; Mendes, Tereza

    2010-11-12

    Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.

  8. A semi-numerical algorithm for instability of compressible multilayered structures

    NASA Astrophysics Data System (ADS)

    Tang, Shan; Yang, Yang; Peng, Xiang He; Liu, Wing Kam; Huang, Xiao Xu; Elkhodary, Khalil

    2015-07-01

    A computational method is proposed for the analysis and prediction of instability (wrinkling or necking) of multilayered compressible plates and sheets made of metals or polymers under plane strain conditions. In previous works, a basic assumption (or physical argument) frequently made is that the materials are incompressible, in order to simplify the mathematical derivations. To account for the compressibility of metals and polymers (a lower Poisson's ratio corresponds to a more compressible material), we propose a combined semi-numerical algorithm and finite element method for instability analysis. The proposed algorithm is verified by comparing its predictions with published results in the literature for thin films on polymer/metal substrates and for polymer/metal systems. The new combined method is then used to predict the effects of compressibility on instability behavior. The results suggest the potential utility of compressibility in the design of multilayered structures.

  9. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    SciTech Connect

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  10. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    PubMed

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  11. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  12. New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms

    PubMed Central

    Retsky, Michael

    2009-01-01

    A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287

  13. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    A detailed description of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC-88) on the use of the geostationary orbit (GEO) and the planning of space services utilizing it. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

  14. Numerical simulation of three-dimensional unsteady vortex flow using a compact vorticity-velocity algorithm

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Grosch, C. E.; Rose, M. E.; Spall, R. E.

    1987-01-01

    A numerical algorithm is presented which is used to solve the unsteady, fully three-dimensional, incompressible Navier-Stokes equations in vorticity-velocity variables. A discussion of the discrete approximation scheme is presented as well as the solution method used to solve the resulting algebraic set of difference equations. Second order spatial and temporal accuracy is verified through solution comparisons with exact results obtained for steady three-dimensional stagnation point flow and unsteady axisymmetric vortex spin-up. In addition, results are presented for the problem of unsteady bubble-type vortex breakdown with emphasis on internal bubble dynamics and structure.

  15. Model of stacked long Josephson junctions: Parallel algorithm and numerical results in case of weak coupling

    NASA Astrophysics Data System (ADS)

    Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.

    2016-10-01

    We consider a model of a system of long Josephson junctions (LJJs) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of the standard three-point finite-difference approximation in the spatial coordinate, utilizing the Runge-Kutta method for the solution of the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of the MPI (Message Passing Interface) technology. The effect of the coupling between the junctions on the properties of the LJJ system is demonstrated. Numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.

  16. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  17. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

    NASA Astrophysics Data System (ADS)

    Rajashekararadhya, S. V.; Ranjan, P. Vanaja

    India is a multi-lingual, multi-script country, with eighteen officially accepted scripts and over a hundred regional languages. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of South Indian scripts. The character centroid is computed and the image (character/numeral) is divided into n equal zones. The average distance and the average angle from the character centroid to the pixels present in each zone are computed (two features); the same two features are computed with respect to the zone centroid (two more features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image; if a particular zone is empty, its entries in the feature vector are zero. In total, 4*n such features are extracted, as sketched below. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained recognition rates of 97.55%, 94%, 92.5% and 95.2% for Kannada, Telugu, Tamil and Malayalam numerals, respectively.
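
    A hedged reimplementation of the described feature extractor (the zone grid size, the demo image, and all names are illustrative assumptions, not the authors' code):

    ```python
    import numpy as np

    def zone_features(img, n_zones=4):
        """Zone-based centroid features: split the binary numeral image into
        an n_zones x n_zones grid and, per zone, compute the average distance
        and average angle from the character centroid to the foreground
        pixels, plus the same pair relative to the zone centroid, giving
        four features per zone (empty zones contribute zeros)."""
        ys, xs = np.nonzero(img)
        cy, cx = ys.mean(), xs.mean()                  # character centroid
        h, w = img.shape
        feats = []
        for i in range(n_zones):
            for j in range(n_zones):
                zy = slice(i * h // n_zones, (i + 1) * h // n_zones)
                zx = slice(j * w // n_zones, (j + 1) * w // n_zones)
                py, px = np.nonzero(img[zy, zx])
                if len(py) == 0:                       # empty zone -> zeros
                    feats += [0.0, 0.0, 0.0, 0.0]
                    continue
                py = py + zy.start
                px = px + zx.start
                zcy, zcx = py.mean(), px.mean()        # zone centroid
                for oy, ox in [(cy, cx), (zcy, zcx)]:
                    d = np.hypot(py - oy, px - ox)
                    a = np.arctan2(py - oy, px - ox)
                    feats += [d.mean(), a.mean()]
        return np.array(feats)

    demo = np.zeros((32, 32))
    demo[8:24, 14:18] = 1                               # a crude "1"
    print(zone_features(demo).shape)                    # (64,) for a 4x4 grid
    ```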

  18. Plaque components affect wall stress in stented human carotid artery: A numerical study

    NASA Astrophysics Data System (ADS)

    Fan, Zhen-Min; Liu, Xiao; Du, Cheng-Fei; Sun, An-Qiang; Zhang, Nan; Fan, Zhan-Ming; Fan, Yu-Bo; Deng, Xiao-Yan

    2016-12-01

    Carotid artery stenting presents challenges of in-stent restenosis and late thrombosis, which are caused primarily by alterations in the mechanical environment of the artery after stent implantation. The present study constructed patient-specific carotid arterial bifurcation models with lipid pools and calcified components based on magnetic resonance imaging. We numerically analyzed the effects of multicomponent plaques on the distributions of von Mises stresses (VMSs) in the patient-specific models after stenting. The results showed that when a stent was deployed, the large soft lipid pool in atherosclerotic plaques cushioned the host artery and reduced the stress within the arterial wall; however, this resulted in a sharp increase of VMS in the fibrous cap. When compared with the lipid pool, the presence of the calcified components led to slightly increased stresses on the luminal surface. However, when a calcification was located close to the luminal surface of the host artery and the stenosis, the local VMS was elevated. Overall, compared with calcified components, large lipid pools severely damaged the host artery after stenting. Furthermore, damage due to the calcified component may depend on location.

  19. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate values of the Lyapunov exponents are obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computer arithmetic and other reasons, the results can overflow, become unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if the error message of NaN or Inf does not appear, then with increasing iterations all computed Lyapunov exponents approach the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, via QR orthogonal decomposition and SVD orthogonal decomposition, that solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
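
    The QR-based remedy is standard enough to sketch: re-orthogonalize the propagated tangent vectors at every step and accumulate log|diag(R)|, so no product of Jacobians is ever formed explicitly. The Henon map and all names below are an illustrative choice, not necessarily the paper's examples:

    ```python
    import numpy as np

    def lyapunov_qr(step, jac, x0, dim, n_iter=20000, n_skip=100):
        """Lyapunov spectrum of a discrete-time map via QR
        re-orthogonalization: accumulate log|diag(R)| of the QR factors at
        every step instead of forming the product of Jacobians, avoiding the
        overflow and degeneracy problems of a naive eigenvalue approach at
        large iteration counts."""
        x = np.array(x0, dtype=float)
        Q = np.eye(dim)
        sums = np.zeros(dim)
        for _ in range(n_skip):                 # discard the transient
            x = step(x)
        for _ in range(n_iter):
            Q, R = np.linalg.qr(jac(x) @ Q)
            sums += np.log(np.abs(np.diag(R)))
            x = step(x)
        return sums / n_iter

    # Henon map, a = 1.4, b = 0.3: exponents approx (+0.42, -1.62).
    a, b = 1.4, 0.3
    step = lambda v: np.array([1.0 - a * v[0]**2 + v[1], b * v[0]])
    jac = lambda v: np.array([[-2.0 * a * v[0], 1.0], [b, 0.0]])
    print(lyapunov_qr(step, jac, x0=[0.1, 0.1], dim=2))
    ```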

  20. A numerical algorithm with preference statements to evaluate the performance of scientists.

    PubMed

    Ricker, Martin

    Academic evaluation committees have become increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, yet neither arbitrarily nor superficially, to quantify the academic performance of scientists every few years. Data for 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignment and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.

  1. Conservative numerical simulation of multi-component transport in two-dimensional unsteady shallow water flow

    NASA Astrophysics Data System (ADS)

    Murillo, J.; García-Navarro, P.; Burguete, J.

    2009-08-01

    An explicit finite volume model to simulate two-dimensional shallow water flow with multi-component transport is presented. The governing system of coupled conservation laws demands numerical techniques that avoid unrealistic values of the transported scalars, which cannot be prevented merely by decreasing the size of the time step. The presence of non-conservative products such as bed slope and friction terms, and of other source terms like diffusion and reaction, can make a reduction of the time step below that given by the Courant number necessary. A suitable flux difference redistribution that prevents instability and ensures conservation at all times is used to deal with the non-conservative terms, and becomes necessary in cases of transient boundaries over a dry bed. The resulting method belongs to the category of well-balanced Roe schemes and is able to handle steady cases with the flow in motion. Test cases with exact solutions, including transient boundaries, bed slope, friction, and reaction terms, are used to validate the numerical scheme. Laboratory experiments are used to validate the techniques when dealing with complex systems such as the κ-ɛ turbulence model. The results of the proposed numerical schemes are compared with those obtained using uncoupled formulations.

  2. Numerical analysis of the flexible roll forming of an automotive component from high strength steel

    NASA Astrophysics Data System (ADS)

    Abeyrathna, B.; Abvabi, A.; Rolfe, B.; Taube, R.; Weiss, M.

    2016-11-01

    Conventional roll forming is limited to components with uniform cross-section; the recently developed flexible roll forming (FRF) process can be used to form components which vary in both width and depth. It has been suggested that this process can be used to manufacture automotive components from Ultra High Strength Steel (UHSS) which has limited tensile elongation. In the flexible roll forming process, the pre-cut blank is fed through a set of rolls; some rolls are computer-numerically controlled (CNC) to follow the 3D contours of the part and hence parts with a variable cross-section can be produced. This paper introduces a new flexible roll forming technique which can be used to form a complex shape with the minimum tooling requirements. In this method, the pre-cut blank is held between two dies and the whole system moves back and forth past CNC forming rolls. The forming roll changes its angle and position in each pass to incrementally form the part. In this work, the process is simulated using the commercial software package Copra FEA. The distribution of total strain and final part quality are investigated as well as related shape defects observed in the process. Different tooling concepts are used to improve the strain distribution and hence the part quality.

  3. Investigating groundwater flow components in an Alpine relict rock glacier (Austria) using a numerical model

    NASA Astrophysics Data System (ADS)

    Pauritsch, Marcus; Wagner, Thomas; Winkler, Gerfried; Birk, Steffen

    2017-03-01

    Relict rock glaciers are complex hydrogeological systems that might act as relevant groundwater storages; therefore, the discharge behavior of these alpine landforms needs to be better understood. Hydrogeological and geophysical investigations at a relict rock glacier in the Niedere Tauern Range (Austria) reveal a slow and fast flow component that appear to be related to the heterogeneous structure of the aquifer. A numerical groundwater flow model was used to indicate the influence of important internal structures such as layering, preferential flow paths and aquifer-base topography. Discharge dynamics can be reproduced reasonably by both introducing layers of strongly different hydraulic conductivities or by a network of highly conductive channels within a low-conductivity zone. Moreover, the topography of the aquifer base influences the discharge dynamics, which can be observed particularly in simply structured aquifers. Hydraulic conductivity differences of three orders of magnitude are required to account for the observed discharge behavior: a highly conductive layer and/or channel network controlling the fast and flashy spring responses to recharge events, as opposed to less conductive sediment accumulations sustaining the long-term base flow. The results show that the hydraulic behavior of this relict rock glacier and likely that of others can be adequately represented by two aquifer components. However, the attempt to characterize the two components by inverse modeling results in ambiguity of internal structures when solely discharge data are available.

  5. Numerical simulation of alteration of sodium bentonite by diffusion of ionic groundwater components

    SciTech Connect

    Jacobsen, J.S.; Carnahan, C.L.

    1987-12-01

    Experiments measuring the movement of trace amounts of radionuclides through compacted bentonite have typically used unaltered bentonite. Models based on such experiments may not lead to accurate predictions of the migration of radionuclides that undergo ion exchange through altered or partially altered bentonite. To address this problem, we have modified an existing transport code to include ion exchange and aqueous complexation reactions. The code is thus able to simulate the diffusion of major ionic groundwater components through bentonite and the reactions between the bentonite and groundwater. Numerical simulations have been made to investigate the conversion of sodium bentonite to calcium bentonite for a reference groundwater characteristic of deep granitic formations. 20 refs., 2 figs., 2 tabs.

  6. Comparing numerical and analytical approaches to strongly interacting two-component mixtures in one dimensional traps

    NASA Astrophysics Data System (ADS)

    Bellotti, Filipe F.; Dehkharghani, Amin S.; Zinner, Nikolaj T.

    2017-02-01

    We investigate one-dimensional harmonically trapped two-component systems for repulsive interaction strengths ranging from the non-interacting to the strongly interacting regime for Fermi-Fermi mixtures. A new and powerful mapping between the interaction strength parameters of a continuous Hamiltonian and a discrete lattice Hamiltonian is derived. As an example, we show that this mapping depends neither on the state of the system nor on the number of particles. Energies, density profiles, and correlation functions are obtained both numerically (density matrix renormalization group (DMRG) and exact diagonalization) and analytically. Since DMRG results do not converge as the interaction strength is increased, analytical solutions are used as a benchmark to identify the point where these calculations become unstable. We use the proposed mapping to set a quantitative limit on the interaction parameter of a discrete lattice Hamiltonian above which DMRG gives unrealistic results.

  7. Accelerated convergence of neural network system identification algorithms via principal component analysis

    NASA Astrophysics Data System (ADS)

    Hyland, David C.; Davis, Lawrence D.; Denoyer, Keith K.

    1998-12-01

    While significant theoretical and experimental progress has been made in the development of neural network-based systems for the autonomous identification and control of space platforms, there remain important unresolved issues associated with the reliable prediction of convergence speed and the avoidance of inordinately slow convergence. To speed convergence of neural identifiers, we introduce the preprocessing of identifier inputs using Principal Component Analysis (PCA) algorithms, which automatically transform the neural identifier's external inputs so as to make their correlation matrix the identity, resulting in enormous improvements in the convergence speed of the neural identifier. From a study of several such algorithms, we developed a new PCA approach which exhibits excellent convergence properties, insensitivity to noise, and reliable accuracy.
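
    A minimal sketch of the preprocessing idea in its simplest batch form: PCA whitening that transforms inputs so their correlation matrix becomes the identity. The paper's algorithms are adaptive, so this offline version, and the synthetic mixing matrix, are illustrative assumptions only.

```python
import numpy as np

def pca_whiten(X, eps=1e-10):
    """Whiten data (rows = samples) so the transformed features are
    uncorrelated with unit variance, i.e. identity correlation matrix."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)          # sample covariance
    eigval, eigvec = np.linalg.eigh(cov)     # PCA of the covariance
    W = eigvec / np.sqrt(eigval + eps)       # rescale each principal axis
    return Xc @ W                            # whitened inputs

# Correlated synthetic inputs via an arbitrary (illustrative) mixing matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3)) @ np.array([[2.0, 0.5, 0.0],
                                           [0.0, 1.0, 0.3],
                                           [0.0, 0.0, 0.2]])
Z = pca_whiten(X)
print(np.round(np.cov(Z.T), 3))   # ~ identity matrix
```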

  8. Bearing fault component identification using information gain and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Vinay, Vakharia; Kumar, Gupta Vijay; Kumar, Kankar Pavan

    2015-04-01

    In the present study an attempt has been made to identify various bearing faults using machine learning algorithms. Vibration signals obtained from faults in the inner race, outer race, rolling elements, and combined faults are considered. Raw vibration signals cannot be used directly, since they are masked by noise. To overcome this difficulty, a combined time-frequency domain method, the wavelet transform, is used. A wavelet selection criterion based on minimum permutation entropy is employed to select the most appropriate base wavelet. Statistical features of the selected wavelet coefficients are calculated to form a feature vector. To reduce the size of the feature vector, the information gain attribute selection method is employed. The modified feature set is fed into machine learning algorithms, such as random forest and self-organizing map, to maximize fault identification efficiency. The results revealed that the attribute selection method improves the fault identification accuracy for bearing components.
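
    A minimal sketch of the information gain criterion used for attribute selection, scored here on a synthetic two-class problem with single-threshold splits; the features and thresholds are illustrative, not the study's wavelet-based statistics.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, threshold):
    """Entropy reduction from splitting one continuous feature at a threshold."""
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w = len(left) / len(labels)
    return entropy(labels) - (w * entropy(left) + (1 - w) * entropy(right))

# Toy example: feature 0 separates the two "fault classes", feature 1 is noise.
rng = np.random.default_rng(1)
y = np.array([0] * 50 + [1] * 50)
X = np.column_stack([y + 0.1 * rng.normal(size=100),   # informative feature
                     rng.normal(size=100)])            # uninformative feature
for j in range(X.shape[1]):
    gains = [information_gain(X[:, j], y, t)
             for t in np.quantile(X[:, j], [0.25, 0.5, 0.75])]
    print(f"feature {j}: best gain = {max(gains):.3f}")
```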

  9. Numerical linked-cluster algorithms. I. Spin systems on square, triangular, and kagomé lattices.

    PubMed

    Rigol, Marcos; Bryant, Tyler; Singh, Rajiv R P

    2007-06-01

    We discuss recently introduced numerical linked-cluster (NLC) algorithms that allow one to obtain temperature-dependent properties of quantum lattice models, in the thermodynamic limit, from exact diagonalization of finite clusters. We present studies of thermodynamic observables for spin models on square, triangular, and kagomé lattices. Results for several choices of clusters, and for extrapolation methods that accelerate the convergence of NLCs, are presented. We also include a comparison of NLC results with those obtained from exact analytical expressions (where available), high-temperature expansions (HTE), exact diagonalization (ED) of finite periodic systems, and quantum Monte Carlo simulations. For many models and properties NLC results are substantially more accurate than HTE and ED.

  10. Numerical algorithms for highly oscillatory dynamic system based on commutator-free method

    NASA Astrophysics Data System (ADS)

    Li, Wencheng; Deng, Zichen; Zhang, Suying

    2007-04-01

    In the present paper, an efficient improved modified Magnus integrator algorithm based on a commutator-free method is proposed for second-order dynamic systems with time-dependent high frequencies. Firstly, the second-order dynamic systems are transferred to a new frame of reference by introducing a new variable, so that the highly oscillatory behaviour is inherited by the matrix entries. Then a modified Magnus integrator method based on local linearization is designed for solving the resulting form, and some optimized strategies for reducing the number of function evaluations and matrix operations are suggested. Finally, several numerical examples of highly oscillatory dynamic systems, such as the Airy equation, Bessel equation, and Mathieu equation, are presented to demonstrate the validity and effectiveness of the proposed method.
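
    For flavor, the sketch below implements the simplest Magnus-type integrator, the exponential midpoint rule, on an Airy-type oscillator; the authors' modified commutator-free scheme is more elaborate, so this is an illustrative baseline only.

```python
import numpy as np
from scipy.linalg import expm

def magnus_midpoint(A, y0, t0, t1, n):
    """One-term Magnus integrator (exponential midpoint rule) for y' = A(t) y.
    It is exact for constant A and tracks oscillations well for slowly
    varying A(t), even with moderately large steps."""
    y = np.asarray(y0, dtype=float)
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        y = expm(h * A(t + 0.5 * h)) @ y   # midpoint evaluation of A
        t += h
    return y

# Airy-type oscillator y'' + t*y = 0, written as a first-order system.
A = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])
y = magnus_midpoint(A, [1.0, 0.0], 1.0, 50.0, 2000)
print(y)   # amplitude decays ~ t**(-1/4); the phase is tracked accurately
```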

  11. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute the feedback gains directly rather than via solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of the ideas is presented.
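
    A minimal sketch of the Newton-Kleinman iteration in isolation, which replaces the Riccati equation by a sequence of Lyapunov solves; the Chandrasekhar system and Smith-scheme acceleration of the hybrid method are omitted, and the test matrices and initial stabilizing gain are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, Q, R, K0, iters=20):
    """Newton-Kleinman iteration: computes the LQR gain by solving one
    Lyapunov equation per step instead of the Riccati equation directly."""
    K = K0   # must stabilize A - B @ K0
    for _ in range(iters):
        Acl = A - B @ K
        # Solve (A - BK)' P + P (A - BK) = -(Q + K' R K).
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)
    return K, P

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[5.0, 3.0]])          # a stabilizing initial gain
K, P = newton_kleinman(A, B, Q, R, K0)
print(K)   # agrees with the Riccati-based LQR gain
```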

  12. International Symposium on Computational Electronics—Physical Modeling, Mathematical Theory, and Numerical Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yiming

    2007-12-01

    This symposium is an open forum for discussion of current trends and future directions in physical modeling, mathematical theory, and numerical algorithms in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers in academia and industry. All papers to be presented in this symposium have been carefully reviewed and selected. They cover semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation systems, and wireless communication. Welcome to this interdisciplinary symposium at the International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). We look forward to seeing you in Corfu, Greece!

  13. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    NASA Astrophysics Data System (ADS)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is larger than the viscous relaxation time. The small time step required for stability (< 2 kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate

  14. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  15. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  16. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  17. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  18. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  19. Deconvolution of complex spectra into components by the bee swarm algorithm

    NASA Astrophysics Data System (ADS)

    Yagfarov, R. R.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.; Salakhov, M. Kh

    2016-05-01

    The bee swarm algorithm is adapted to the problem of deconvolving complex spectral contours into components. A comparison is made between biological concepts relating to the behaviour of bees in a colony and mathematical concepts relating to the quality of the obtained solutions (mean square error, random solutions in each iteration). Model experiments, realized on the example of a signal representing a sum of three Lorentzian contours of various intensities and half-widths, confirm the efficiency of the proposed approach.
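
    A loose sketch of a bee-colony-style search fitting a sum of three Lorentzian contours by minimizing the mean square error; the elite/worker/scout scheme and all parameter values here are simplified assumptions, not the authors' exact algorithm.

```python
import numpy as np

def lorentzians(x, p):
    """Sum of Lorentzian contours; p holds (amplitude, center, half-width) triples."""
    y = np.zeros_like(x)
    for a, c, w in p.reshape(-1, 3):
        y += a * w ** 2 / ((x - c) ** 2 + w ** 2)
    return y

def bee_fit(x, target, n_peaks=3, n_bees=40, n_elite=5, iters=500, seed=0):
    """Bee-colony-style search: elite sites are refined by 'worker' bees in a
    shrinking neighbourhood while 'scout' bees keep sampling at random.
    Fitness is the mean square error against the target spectrum."""
    rng = np.random.default_rng(seed)
    lo = np.tile([0.1, x.min(), 0.05], n_peaks)
    hi = np.tile([2.0, x.max(), 2.0], n_peaks)
    mse = lambda p: np.mean((lorentzians(x, p) - target) ** 2)
    swarm = rng.uniform(lo, hi, size=(n_bees, 3 * n_peaks))
    per_elite = (n_bees - 2 * n_elite) // n_elite
    for it in range(iters):
        swarm = swarm[np.argsort([mse(p) for p in swarm])]
        elite = swarm[:n_elite]
        scale = 0.1 * (hi - lo) * (1.0 - it / iters)     # shrinking search radius
        workers = np.repeat(elite, per_elite, axis=0)
        workers = workers + rng.normal(size=workers.shape) * scale
        scouts = rng.uniform(lo, hi, size=(n_bees - len(elite) - len(workers), 3 * n_peaks))
        swarm = np.clip(np.vstack([elite, workers, scouts]), lo, hi)
    return min(swarm, key=mse)

x = np.linspace(-10.0, 10.0, 400)
true = np.array([1.0, -3.0, 0.8,   0.6, 0.5, 1.5,   1.4, 4.0, 0.4])
best = bee_fit(x, lorentzians(x, true))
print(np.round(best.reshape(-1, 3), 2))   # approximate (amplitude, center, half-width) triples
```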

  20. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    SciTech Connect

    Becker, R; Stolken, J; Jannetti, C; Bassani, J

    2003-10-16

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.

  1. Numerical predictions and measurements in the lubrication of aeronautical engine and transmission components

    NASA Astrophysics Data System (ADS)

    Moraru, Laurentiu Eugen

    2005-11-01

    This dissertation treats a variety of aspects of the lubrication of mechanical components encountered in aeronautical engines and transmissions. The study covers dual clearance squeeze film dampers, mixed elastohydrodynamic lubrication (EHL) cases, and thermal elastohydrodynamic contacts. The dual clearance squeeze film damper (SFD) invented by Fleming is investigated both theoretically and experimentally, for cases when the sleeve that separates the two oil films is free to float and for cases when the separating sleeve is supported by a squirrel cage. The Reynolds equation is developed to handle each of these cases and is solved analytically for short bearings. A rotordynamic model of a test rig is developed for both the single and dual SFD cases. A computer code is written to calculate the motion of the test rig rotor. Experiments are performed in order to validate the theoretical results. Rotordynamics computations are found to agree favorably with measured data. A probabilistic model for mixed EHL is developed and implemented. The surface roughness of gears is measured and processed. The mixed EHL model incorporates the average flow model of Patir and Cheng and the elasto-plastic contact mechanics model of Chang, Etsion, and Bogy. The current algorithm allows for the computation of the load supported by an oil film and of the load supported by the elasto-plastically deformed asperities. This work also presents a way to incorporate the effect of the fluid-induced roughness deformation by utilizing the "amplitude reduction" results provided by the deterministic analyses. The Lobatto point Gaussian integration algorithm of Elrod and Brewe was extended to thermal lubrication problems involving compressible lubricants and was implemented in thermal elastohydrodynamic cases. The unknown variables across the film are written in series of Legendre polynomials. The thermal Reynolds equation is obtained in terms of the series coefficients and it is proven that it can

  2. Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    NASA Astrophysics Data System (ADS)

    Kitaura, F. S.; Enßlin, T. A.

    2008-09-01

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière, and Hestenes-Stiefel conjugate gradients. The structures of the best-performing algorithms to date are presented, based on an operator scheme which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing, and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field, and the power spectrum can be jointly investigated by a Gibbs-sampling process. Such a method can be applied for the redshift-distortion correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
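
    A minimal sketch of the core building block, a Wiener filter applied diagonally in Fourier space for a signal-plus-noise data model; the power spectra and sizes are illustrative assumptions, and none of the paper's Krylov machinery is included.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
k = np.fft.fftfreq(n) * n
Ps = 1.0 / (1.0 + (k / 20.0) ** 2)    # assumed signal power spectrum
Pn = 0.05 * np.ones(n)                # white noise power spectrum

# Draw a Gaussian random signal with spectrum Ps and add white noise.
s = np.fft.ifft(np.sqrt(Ps) * np.fft.fft(rng.normal(size=n))).real
d = s + np.sqrt(Pn[0]) * rng.normal(size=n)

# Wiener filter: the posterior mean of a linear Gaussian model is
# s_hat = S (S + N)^{-1} d, which is diagonal in Fourier space.
s_hat = np.fft.ifft(Ps / (Ps + Pn) * np.fft.fft(d)).real

print("residual rms, raw data     :", np.sqrt(np.mean((d - s) ** 2)))
print("residual rms, Wiener filter:", np.sqrt(np.mean((s_hat - s) ** 2)))
```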

  3. Numerical study of a finite volume scheme for incompressible Navier-Stokes equations based on SIMPLE-family algorithms

    NASA Astrophysics Data System (ADS)

    Alahyane, M.; Hakim, A.; Raghay, S.

    2017-01-01

    In this work, we present a numerical study of a finite volume scheme based on the SIMPLE algorithm for the incompressible Navier-Stokes problem. However, this algorithm is still not applicable to a large category of problems; this can be understood from its stability and convergence, which depend strongly on the relaxation parameter, so that in some cases the algorithm can behave unexpectedly. Therefore, in our work we focus on this particular point, in order to overcome the delicate choice of the relaxation parameter and to find a sufficient condition for the convergence of the algorithm in general cases. This is followed by numerical applications to a variety of fluid flow problems described by the incompressible Navier-Stokes equations, including applications in image processing.
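
    A minimal scalar illustration of why the relaxation parameter is critical: an under-relaxed fixed-point iteration of the same form as SIMPLE's corrections converges or diverges depending on the relaxation factor. The map g and the parameter values are arbitrary assumptions for demonstration.

```python
import numpy as np

def relaxed_iteration(g, x0, alpha, tol=1e-10, max_iter=500):
    """Under-relaxed fixed-point iteration x <- (1 - alpha)*x + alpha*g(x),
    the same update form SIMPLE applies to its pressure correction."""
    x = x0
    for k in range(max_iter):
        x_new = (1.0 - alpha) * x + alpha * g(x)
        if not np.isfinite(x_new):
            return x_new, k            # diverged
        if abs(x_new - x) < tol:
            return x_new, k            # converged
        x = x_new
    return x, max_iter

g = lambda x: -1.5 * x + 5.0   # |g'| > 1: the bare iteration (alpha = 1) diverges
for alpha in (1.0, 0.9, 0.7, 0.4):
    x, k = relaxed_iteration(g, 0.0, alpha)
    print(f"alpha = {alpha}: x = {x:.4g} after {k} iterations")
# Converges to the fixed point x = 2 only when |1 - 2.5*alpha| < 1.
```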

  4. A robust principal component analysis algorithm for EEG-based vigilance estimation.

    PubMed

    Shi, Li-Chen; Duan, Ruo-Nan; Lu, Bao-Liang

    2013-01-01

    Feature dimensionality reduction methods with robustness are of great significance for making better use of EEG data, since EEG features are usually high-dimensional and contain a lot of noise. In this paper, a robust principal component analysis (PCA) algorithm is introduced to reduce the dimension of EEG features for vigilance estimation. Its performance is compared with that of standard PCA, L1-norm PCA, sparse PCA, and robust PCA in feature dimension reduction on an EEG data set of twenty-three subjects. To evaluate the performance of these algorithms, smoothed differential entropy features are used as the vigilance-related EEG features. Experimental results demonstrate that the robustness and performance of robust PCA are better than those of the other algorithms for both off-line and on-line vigilance estimation. The average RMSE (root mean square error) of vigilance estimation was 0.158 when robust PCA was applied to reduce the dimensionality of features, while the average RMSE was 0.172 when standard PCA was used in the same task.
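
    A minimal sketch of one widely used robust PCA formulation, principal component pursuit solved by an inexact augmented Lagrangian method; the paper's variant may differ, and the low-rank-plus-sparse toy data are illustrative.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, iters=200, tol=1e-7):
    """Robust PCA by principal component pursuit: decompose M = L + S
    with L low-rank and S sparse, via an inexact augmented Lagrangian."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt       # singular value shrinkage
        S = shrink(M - L + Y / mu, lam / mu)       # entrywise soft threshold
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy demo: rank-2 "clean features" plus sparse, large-amplitude artifacts.
rng = np.random.default_rng(0)
L0 = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 60))
S0 = 10.0 * rng.binomial(1, 0.05, size=L0.shape) * rng.normal(size=L0.shape)
L, S = rpca(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```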

  5. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.

  6. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable, and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.

  7. Towards the optimal design of an uncemented acetabular component using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay

    2015-12-01

    Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.

  8. A new blind fault component separation algorithm for a single-channel mechanical signal mixture

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2012-10-01

    A vibration signal collected from a complex machine consists of multiple vibration components, which are system responses excited by several sources. This paper reports a new blind component separation (BCS) method for extracting different mechanical fault features. By applying the proposed method, a single-channel mixed signal can be decomposed into two parts: the periodic and transient subsets. The periodic subset is related to the imbalance, misalignment and eccentricity of a machine. The transient subset refers to abnormal impulsive phenomena, such as those caused by localized bearing faults. The proposed method includes two individual strategies to deal with these different characteristics. The first extracts the sub-Gaussian periodic signal by minimizing the kurtosis of the equalized signals. The second detects the super-Gaussian transient signal by minimizing the smoothness index of the equalized signals. Here, the equalized signals are derived by an eigenvector algorithm that is a successful solution to the blind equalization problem. To reduce the computing time needed to select the equalizer length, a simple optimization method is introduced to minimize the kurtosis and smoothness index, respectively. Finally, simulated multiple-fault signals and a real multiple-fault signal collected from an industrial machine are used to validate the proposed method. The results show that the proposed method is able to effectively decompose the multiple-fault vibration mixture into periodic components and random non-stationary transient components. In addition, the equalizer length can be intelligently determined using the proposed method.

  9. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
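
    A minimal software sketch of the classical two-pass labeling scheme such hardware designs build on, with a union-find equivalence table and consecutive final labels; all streaming and FPGA-specific aspects are omitted, and 4-connectivity is assumed for brevity.

```python
import numpy as np

def find(parent, x):
    """Find the root label, with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def two_pass_ccl(img):
    """Two-pass connected component labeling (4-connectivity) producing a
    label mask; label equivalences are stored in a union-find array."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                        # parent[i] == i means i is a root
    next_label = 1
    for y in range(h):                  # first pass: provisional labels
        for x in range(w):
            if not img[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            elif up and left:
                ru, rl = find(parent, up), find(parent, left)
                labels[y, x] = min(ru, rl)
                parent[max(ru, rl)] = min(ru, rl)   # record equivalence
            else:
                labels[y, x] = up or left
    final = {}                          # second pass: consecutive final labels
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                r = find(parent, labels[y, x])
                labels[y, x] = final.setdefault(r, len(final) + 1)
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [1, 0, 0, 0, 0],
                [1, 1, 0, 1, 1]], dtype=bool)
print(two_pass_ccl(img))   # four 4-connected components
```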

  10. Numerical algorithms based on Galerkin methods for the modeling of reactive interfaces in photoelectrochemical (PEC) solar cells

    NASA Astrophysics Data System (ADS)

    Harmon, Michael; Gamba, Irene M.; Ren, Kui

    2016-12-01

    This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.

  11. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of the coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize the constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and the constraints on the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions of the multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can easily be implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed model and numerical algorithm in the reaction transport simulator

  12. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every constitutive part of the plane. The operation is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Due to these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used for comparing possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.
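
    A minimal sketch of a real-coded genetic algorithm of the general kind used for such design searches, with a penalty term standing in for the failure-risk evaluation; the operators, bounds, and toy cost function are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def genetic_optimize(cost, bounds, pop_size=60, gens=100, pm=0.1, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation within box bounds, and elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([cost(ind) for ind in pop])
        # Tournament selection: the better of two random individuals survives.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] < fit[j], i, j)]
        # Uniform crossover between consecutive parent pairs.
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the design bounds.
        mutate = rng.random(pop.shape) < pm
        children = np.clip(children + mutate * rng.normal(0.0, 0.1 * (hi - lo), pop.shape), lo, hi)
        children[0] = pop[np.argmin(fit)]          # elitism: keep the best design
        pop = children
    fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

# Toy "design" problem: two parameters, penalized when a constraint is violated.
def cost(xv):
    x, y = xv
    penalty = 1e3 * max(0.0, 1.0 - (x + y))        # infeasible if x + y < 1
    return (x - 0.2) ** 2 + (y - 1.5) ** 2 + penalty

best, f = genetic_optimize(cost, np.array([[0.0, 2.0], [0.0, 2.0]]))
print(best, f)   # near (0.2, 1.5), which already satisfies the constraint
```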

  13. Essential Oil of Artemisia annua L.: An Extraordinary Component with Numerous Antimicrobial Properties

    PubMed Central

    Bilia, Anna Rita; Sacco, Cristiana; Bergonzi, Maria Camilla; Donato, Rosa

    2014-01-01

    Artemisia annua L. (Asteraceae) is native to China, now naturalised in many other countries, and well known as the source of the unique sesquiterpene endoperoxide lactone artemisinin, used in the treatment of chloroquine-resistant and cerebral malaria. The essential oil is rich in mono- and sesquiterpenes and represents a by-product with medicinal properties. Although significant variations in its percentage and composition have been reported (major constituents can be camphor (up to 48%), germacrene D (up to 18.9%), artemisia ketone (up to 68%), and 1,8-cineole (up to 51.5%)), the oil has been the subject of numerous studies supporting exciting antibacterial and antifungal activities. Gram-positive bacteria (Enterococcus, Streptococcus, Staphylococcus, Bacillus, and Listeria spp.), gram-negative bacteria (Escherichia, Shigella, Salmonella, Haemophilus, Klebsiella, and Pseudomonas spp.), and other microorganisms (Candida, Saccharomyces, and Aspergillus spp.) have been investigated. However, the experimental studies performed to date used different methods and diverse microorganisms; as a consequence, a comparative analysis on a quantitative basis is very difficult. The aim of this review is to sum up data on the antimicrobial activity of A. annua essential oil and its major components to facilitate future microbiological studies in this field. PMID:24799936

  14. A flexible numerical component to simulate surface runoff transport and biogeochemical processes through dense vegetation

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Perez-Ovilla, O.

    2012-12-01

    Methods to estimate surface runoff pollutant removal using dense vegetation buffers (i.e. vegetative filter strips) usually consider a limited number of factors (e.g. filter length, slope) and are in general based on empirical relationships. When an empirical approach is used, the application of the model is limited to the conditions of the data used for the regression equations. The objective of this work is to provide a flexible, mechanistic numerical tool to simulate the dynamics of a wide range of surface runoff pollutants through dense vegetation and their physical, chemical, and biological interactions, based on equations defined by the user as part of the model inputs. A flexible water quality model based on the Reaction Simulation Engine (RSE) modeling component is coupled to a transport module based on the traditional Bubnov-Galerkin finite element method to solve the advection-dispersion-reaction equation using the alternating split-operator technique. This coupled transport-reaction model is linked to the VFSMOD-W (http://abe.ufl.edu/carpena/vfsmod) program to mechanistically simulate mobile and stable pollutants through dense vegetation based on user-defined conceptual models (differential equations written in XML as input files). The key factors to consider in the creation of a conceptual model are the components in the buffer (i.e. vegetation, soil, sediments) and how the pollutant interacts with them. The biogeochemical reaction component was tested successfully with laboratory and field scale experiments. One of the major advantages of this tool is that pollutant transport and removal through dense vegetation are related to the physical and biogeochemical processes occurring within the filter. This mechanistic approach extends the range of use of the model to a wide range of pollutants and conditions without modification of the core model. The strength of the model relies on the mechanistic approach used for simulating the removal of

  15. CCARES: A computer algorithm for the reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1993-01-01

    Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.

  16. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  18. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

    2017-01-01

    In this study, the finite-difference time-domain (FDTD) algorithm has been used to solve the cell light scattering problem. Before making simulation comparisons, it is necessary to find out the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. The preparation for simulation consists of building a simple cell model, comprising organelles, a nucleus, and cytoplasm, and choosing a suitable mesh precision. Meanwhile, setting up the total-field/scattered-field source as the excitation source and a far-field projection analysis group is also important. Every step needs to be justified by mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition, and near-to-far-field extrapolation. The simulation results indicated that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak value of the scattering intensity may result from changes in the size of the cytoplasm. The study may help us find regularities in the simulation results, which can be meaningful for the early diagnosis of cancers.
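
    A minimal 1D sketch of the FDTD (Yee) update at the core of such simulations, with a dielectric slab standing in for the cell; the total-field/scattered-field source, perfectly matched layers, and far-field projection are omitted, and all sizes and material values are illustrative.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update: E and H are staggered in space and time.
# Normalized units (c = 1, eps0 = mu0 = 1); boundaries act as perfect conductors.
n_cells, n_steps = 400, 1000
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                     # Courant factor 0.5 for stability
eps = np.ones(n_cells)
eps[200:260] = 2.0                    # a "cell-like" dielectric region

E = np.zeros(n_cells)
H = np.zeros(n_cells - 1)
for t in range(n_steps):
    H += dt / dx * (E[1:] - E[:-1])                      # update H from curl of E
    E[1:-1] += dt / (dx * eps[1:-1]) * (H[1:] - H[:-1])  # update E from curl of H
    E[50] += np.exp(-((t - 60) / 20.0) ** 2)             # soft Gaussian pulse source
print("peak field in scatterer:", np.abs(E[200:260]).max())
```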

  19. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    ERIC Educational Resources Information Center

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  20. Diagnosis of atherosclerosis in human carotid artery by FT-Raman spectroscopy: Principal Components Analysis algorithm

    NASA Astrophysics Data System (ADS)

    Nogueira, Grazielle V.; Silveira, Landulfo, Jr.; Martin, Airton A.; Zangaro, Renato A.; Pacheco, Marcos T.; Chavantes, Maria C.; Zampieri, Marcelo; Pasqualucci, Carlos A. G.

    2004-07-01

    FT-Raman spectroscopy (FT-Raman) could allow identification and evaluation of human atherosclerotic lesions. A Raman spectrum can provide biochemical information on arteries, which can help identify disease status and evolution. In this study, we show the results of FT-Raman for the identification of human carotid arteries in vitro. Fragments of human carotid arteries were analyzed using an FT-Raman spectrometer with a Nd:YAG laser at 1064 nm operating at an excitation power of 300 mW. Spectra were obtained with 250 scans and a spectral resolution of 4 cm-1. Each collection time was approximately 8 min. A total of 75 carotid fragments were spectroscopically scanned, and the FT-Raman results were compared with histopathology. Principal Components Analysis (PCA) was used to model an algorithm for tissue classification into three categories: normal, atherosclerotic plaque without calcification, and atherosclerotic plaque with calcification. Non-atherosclerotic (normal) artery, atherosclerotic plaque, and calcified plaque exhibit different spectral signatures related to the biochemicals present in each tissue type, such as bands of collagen and elastin (proteins), cholesterol and its esters, and calcium hydroxyapatite and carbonate apatite, respectively. Results show a 96% match between classifications based on the PCA algorithm and histopathology. The diagnostics applied over all 75 samples had a sensitivity and specificity of about 89% and 100%, respectively, for atherosclerotic plaque, and 100% and 98% for calcified plaque.

  1. Study of Groundwater Resources Components in the North China Plain based on Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Shao, J.

    2015-12-01

    Over-exploitation of groundwater and the induced environmental problems in the North China Plain (NCP) have drawn increasing concern. Here, we chose three typical hydrogeological units in the NCP: the Hutuo River alluvial fan (HR), the Tianjin Plain in the central alluvial fan (TJ), and the Yellow River aquifer system (YR). Relying on numerical groundwater models built with MODFLOW, the water balances were calculated and analyzed, especially for quantifying the individual recharge and discharge contributing terms. Specifically, (1) in the HR, both a natural steady-state flow model and a transient flow model under human activities were implemented; results indicated that the groundwater level decreased by around 40 m under extensive exploitation, and the total recharge rate, discharge rate, and over-exploitation rate were calculated. (2) In the TJ, a coupled groundwater and land subsidence model was established, from which the maximum subsidence rate and the decrease of the groundwater level were estimated. (3) In the YR, the exploitation rate of the groundwater and the recharge rate of the aquifer by the Yellow River were calculated. We found that there are big differences among the components of groundwater recharge of the three typical hydrogeological units. Human activities have a clear effect on the recharge and discharge processes; thus, rational development and protection policies should be issued. In the piedmont alluvial fan, the groundwater has been severely over-exploited; therefore, reduction of groundwater exploitation and artificial groundwater recharge are needed to bring recharge and discharge into balance. In the middle alluvial fan of the NCP, the confined aquifer has been over-exploited and has caused regional land subsidence, which suggests that withdrawal from the confined aquifer should be strictly limited, especially where alternative water resources are accessible. In the hydrogeological unit of the YR, the groundwater storage potentially available for exploitation is large.

  2. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

    NASA Technical Reports Server (NTRS)

    Spratlin, Kenneth Milton

    1987-01-01

    An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired, so as to reduce the need for algorithm redesign with each new vehicle; adaptability is desired to minimize mission-specific analysis and planning. The motivation for and design of the guidance algorithm are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented, along with the achievable operational footprint for expected worst-case dispersions. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
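
    A minimal sketch of the numeric predictor-corrector idea: the predictor integrates simplified planar entry dynamics to forecast downrange, and the corrector adjusts a constant bank angle by bisection until the prediction hits a target. All vehicle and atmosphere constants are illustrative assumptions, not ERV values:

```python
# Hedged sketch of numeric predictor-corrector entry guidance.
import numpy as np

g, Re = 9.81, 6.371e6                    # gravity [m/s^2], Earth radius [m]
rho0, H = 1.225, 7200.0                  # sea-level density, scale height [m]
m, S, CD, LoD = 1000.0, 10.0, 1.2, 1.1   # toy mass, ref. area, drag coeff, L/D

def predict_range(bank, v=7000.0, gamma=np.radians(-5.0), h=60e3, dt=0.5):
    """Predictor: integrate planar entry dynamics down to parachute altitude."""
    s = 0.0
    while h > 10e3 and v > 100.0:
        rho = rho0 * np.exp(-h / H)
        D = 0.5 * rho * v**2 * S * CD / m            # drag acceleration
        L = D * LoD                                  # lift acceleration
        v += (-D - g * np.sin(gamma)) * dt
        gamma += (L * np.cos(bank) / v
                  - (g / v - v / (Re + h)) * np.cos(gamma)) * dt
        h += v * np.sin(gamma) * dt
        s += v * np.cos(gamma) * dt
    return s

target = 1.2e6                           # desired downrange [m] (toy target)
lo, hi = 0.0, np.radians(90.0)           # more bank -> less lift-up -> shorter range
for _ in range(30):                      # corrector: bisection on bank angle
    mid = 0.5 * (lo + hi)
    if predict_range(mid) > target:
        lo = mid                         # overshooting: bank more
    else:
        hi = mid
print(f"commanded bank angle: {np.degrees(0.5 * (lo + hi)):.1f} deg")
```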

  3. A novel algorithm and its VLSI architecture for connected component labeling

    NASA Astrophysics Data System (ADS)

    Zhao, Hualong; Sang, Hongshi; Zhang, Tianxu

    2011-11-01

    A novel line-based streaming labeling algorithm and its VLSI architecture are proposed in this paper. A line-based neighborhood examination scheme is used for efficient extraction of local connected components. A novel reversed rooted-tree hook-up strategy, which is well suited to hardware implementation, is applied at the merging stage of equivalent connected components. The reversed rooted-tree hook-up strategy significantly reduces the on-chip memory requirement, which makes the chip area smaller. Clock-domain-crossing FIFOs connect the label core to the external memory interface, which allows the label engine to run at a higher frequency and raises its throughput. Several performance tests of the proposed hardware implementation were carried out. In all tests on real images, the processing bandwidth of the hardware architecture reaches the I/O transfer bound set by the external interface clock. Besides reducing processing time, the hardware implementation supports images as large as 4096 x 4096, which is appealing for remote sensing and other high-resolution imaging applications. The proposed architecture was synthesized with the SMIC 180 nm standard cell library; the label engine operates at 200 MHz.
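
    For contrast with the streaming hardware design, the following is the textbook software baseline: two-pass 8-connected labeling with union-find equivalence merging (a generic sketch, not the paper's architecture):

```python
# Hedged sketch: classic two-pass connected-component labeling with union-find.
import numpy as np

def label_components(img):
    """8-connected labeling of a binary image via two raster passes."""
    parent = [0]                               # union-find forest; 0 = background
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            # 8-connectivity: only previously scanned neighbors
            neigh = [labels[y + dy, x + dx]
                     for dy, dx in ((0, -1), (-1, -1), (-1, 0), (-1, 1))
                     if 0 <= y + dy and 0 <= x + dx < w and labels[y + dy, x + dx]]
            if neigh:
                labels[y, x] = min(neigh)
                for n in neigh:
                    union(labels[y, x], n)     # record label equivalences
            else:
                parent.append(len(parent))     # new provisional label
                labels[y, x] = len(parent) - 1
    for y in range(h):                         # second pass: resolve equivalences
        for x in range(w):
            labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]], dtype=bool)
print(label_components(img))
```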

  4. Numerical Roll Reversal Predictor Corrector Aerocapture and Precision Landing Guidance Algorithms for the Mars Surveyor Program 2001 Missions

    NASA Technical Reports Server (NTRS)

    Powell, Richard W.

    1998-01-01

    This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Results of three- and six-degree-of-freedom Monte Carlo simulations, which include navigation, aerodynamic, mass-property, and atmospheric-density uncertainties, are also presented.

  5. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm

    PubMed Central

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-01-01

    Microarray data play an important role in the identification and classification of cancer tissues. The small number of microarray samples available in cancer research is a persistent concern that complicates classifier design. Therefore, preprocessing gene selection techniques should be applied before classification to remove non-informative genes from the microarray data; an appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to reduce the dimension of microarray data. This selective algorithm avoids the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error is analyzed and a selective set of independent components, those contributing little error when reconstructing a new sample, is chosen. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously, and the sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer, and lung cancer), and its results are compared with those of existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) achieves higher accuracy and validity; in particular, it exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset. PMID:25426433
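
    A minimal sketch of the general ICA-plus-ν-SVM pipeline, with scikit-learn's standard FastICA standing in for the authors' selective ICA and synthetic data in place of the microarray sets:

```python
# Hedged sketch: dimensionality reduction by ICA followed by a nu-SVM
# classifier -- standard FastICA stands in for the paper's selective ICA.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import NuSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_genes = 60, 500                   # hypothetical sizes
X = rng.standard_normal((n_samples, n_genes))
y = rng.integers(0, 2, n_samples)              # two hypothetical classes
X[y == 1, :20] += 1.0                          # a few "informative genes"

pipe = make_pipeline(FastICA(n_components=8, max_iter=1000, random_state=0),
                     NuSVC(nu=0.3, kernel="rbf"))
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```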

  7. A Bayesian Approach to Estimating Coupling Between Neural Components: Evaluation of the Multiple Component, Event-Related Potential (mcERP) Algorithm

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlations) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed, as well as ongoing work to incorporate more detailed prior information.

  8. A Bayesian Prognostic Algorithm for Assessing Remaining Useful Life of Nuclear Power Components

    SciTech Connect

    Ramuhalli, Pradeep; Bond, Leonard J.; Griffin, Jeffrey W.; Dixit, Mukul; Henager, Charles H.

    2010-12-01

    A central issue in life extension for the current fleet of light water nuclear power reactors is the early detection and monitoring of significant materials degradation. To meet this need, nondestructive measurement methods suitable for on-line, continuous, in-plant monitoring over extended time periods (months to years) are needed. A related issue is, based on a condition assessment or degradation trend, the ability to estimate the remaining useful life of components, structures, and systems from the available materials degradation information. Such measurement and modeling methods form the basis for a new range of advanced diagnostic and prognostic approaches. Prognostic methods that predict remaining life based on large crack growth, and on phenomena that can be described by linear elastic fracture mechanics, have been reported by several researchers; the challenge of predicting remaining life for earlier phases of degradation is largely unsolved. Monitoring for early detection of materials degradation requires novel and enhanced sensors and data integration techniques. A recent review considered the stages of degradation and the sensing methods that can potentially be employed to detect and monitor early degradation in nuclear power plant applications, and an experimental assessment of selected diagnostic techniques was also reported recently. However, the estimation of remaining useful life (RUL) from nondestructive diagnostic measurements of early degradation is still an unsolved problem. This paper discusses the application of Bayesian prognostic algorithms to the early-degradation remaining-life problem.
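
    A minimal sketch of the Bayesian prognostic idea: a posterior over an unknown degradation growth rate is updated recursively from noisy condition measurements, and remaining useful life follows as time-to-threshold. The degradation model and numbers are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch: grid-based Bayesian updating of a degradation growth rate,
# then remaining-useful-life (RUL) prediction as time-to-threshold.
import numpy as np

rng = np.random.default_rng(0)
true_rate, sigma_meas = 0.08, 0.02       # damage units/year, measurement noise
threshold = 1.0                          # failure when damage index reaches 1.0

rates = np.linspace(0.01, 0.3, 300)      # candidate growth rates (grid prior)
posterior = np.full(rates.size, 1.0 / rates.size)

t = 0.0
for _ in range(12):                      # a year of monthly inspections
    t += 1.0 / 12.0
    z = true_rate * t + sigma_meas * rng.standard_normal()
    likelihood = np.exp(-0.5 * ((z - rates * t) / sigma_meas) ** 2)
    posterior *= likelihood              # Bayes update on the rate grid
    posterior /= posterior.sum()

rul = (threshold - rates * t) / rates    # time-to-threshold per candidate rate
rul_upper, rul_lower = np.interp([0.05, 0.95], np.cumsum(posterior), rul)
print(f"posterior-mean RUL: {np.sum(posterior * rul):.1f} years "
      f"(90% band ~ {rul_lower:.1f} to {rul_upper:.1f})")
```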

  9. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object, revealing structural information about the tissue under investigation. In the original CSCT proposals, the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum-likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. It also accounts for data acquisition statistics and physics, modeling effects such as the polychromatic energy spectrum and the detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of the MC-GPU code (a graphics processing unit version of the PENELOPE Monte Carlo particle transport simulation code) that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with those of breast computed tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with the ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.
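
    The generic iteration underlying such component reconstructions is the Poisson ML-EM update; the sketch below applies it to a toy linear model with two component weights (the forward model and sizes are stand-ins, not the CSCT system):

```python
# Hedged sketch: the generic maximum-likelihood expectation-maximization
# (ML-EM) update for a Poisson linear model y ~ Poisson(A @ x).
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_comp = 200, 2                       # measurements x component weights (toy)
A = rng.uniform(0.0, 1.0, (n_meas, n_comp))  # toy forward model
x_true = np.array([3.0, 1.5])                # "adipose" and "water" weights (toy)
y = rng.poisson(A @ x_true)

x = np.ones(n_comp)                          # nonnegative initial estimate
sens = A.sum(axis=0)                         # sensitivity A^T 1
for _ in range(200):
    ratio = y / np.clip(A @ x, 1e-12, None)  # measured / current projection
    x *= (A.T @ ratio) / sens                # multiplicative ML-EM update
print("estimated component weights:", x)
```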

  11. Application of two oriented partial differential equation filtering models on speckle fringes with poor quality and their numerically fast algorithms.

    PubMed

    Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng

    2013-03-20

    In this paper, we are concerned with denoising experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns of poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single oriented PDE filter and the double oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns of very poor quality, and compare them with other efficient filtering methods: the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have been shown to be efficient denoising methods in previously published comparative analyses. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. Then, to address the main shortcoming of the two oriented PDE models, we develop numerically fast algorithms for them based on a Gauss-Seidel strategy. The proposed numerical algorithms accelerate convergence greatly and perform significantly better in terms of computational efficiency. Our numerically fast algorithms extend automatically to some other PDE filtering models.
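
    A minimal sketch of the Gauss-Seidel strategy applied to a simple implicit diffusion step (isotropic, for brevity; the paper's oriented PDE models differ), showing the in-place sweeps that reuse freshly updated pixels:

```python
# Hedged sketch: Gauss-Seidel sweeps for one implicit step of a simple
# diffusion filter; each update reuses freshly computed neighbor values,
# which typically converges much faster than a Jacobi sweep.
import numpy as np

def gauss_seidel_diffusion_step(u, lam=1.0, sweeps=20):
    """Approximately solve (I - lam*Laplacian) u_new = u, in place."""
    u_new = u.copy()
    h, w = u.shape
    for _ in range(sweeps):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                nb = (u_new[y - 1, x] + u_new[y + 1, x] +
                      u_new[y, x - 1] + u_new[y, x + 1])
                # Gauss-Seidel: neighbors above/left already hold new values
                u_new[y, x] = (u[y, x] + lam * nb) / (1.0 + 4.0 * lam)
    return u_new

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 6, 64))[None, :] + 0.3 * rng.standard_normal((64, 64))
smoothed = gauss_seidel_diffusion_step(noisy)
print("variance before/after:", float(noisy.var()), float(smoothed.var()))
```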

  12. Complex Demodulation in Monitoring Earth Rotation by VLBI: Testing the Algorithm by Analysis of Long Periodic EOP Components

    NASA Astrophysics Data System (ADS)

    Wielgosz, A.; Brzeziński, A.; Böhm, S.

    2016-12-01

    The complex demodulation (CD) algorithm is an efficient tool for extracting the diurnal and subdiurnal components of Earth rotation from routine VLBI observations (Brzeziński, 2012). This algorithm was implemented by Böhm et al. (2012b) in a dedicated version of the VLBI analysis software VieVS. The authors processed around 3700 geodetic 24-hour observing sessions from 1984.0 to 2010.5 and simultaneously estimated time series of the long-period components as well as the diurnal, semidiurnal, terdiurnal, and quarterdiurnal components of polar motion (PM) and universal time UT1. This paper describes tests of the CD algorithm that check the consistency of the low-frequency components of PM and UT1 estimated by VieVS CD against those from the IERS and IVS combined solutions. Moreover, the retrograde diurnal component of PM demodulated from the VLBI observations is compared to the celestial pole offset series included in the IERS and IVS solutions. For all three components we found good agreement between the results based on the CD approach and those based on the standard parameterization recommended by the IERS Conventions (IERS, 2010) and applied by the IERS and IVS. We conclude that applying the CD parameterization in VLBI data analysis does not change those components of EOP included in the standard adjustment, while enabling simultaneous estimation of the high-frequency components from routine VLBI observations. Moreover, the CD algorithm could also be implemented in the analysis of other space geodetic observations, such as GNSS or SLR, enabling retrieval of subdiurnal EOP signals from past data.
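
    A minimal sketch of complex demodulation itself: the band of interest is shifted to zero frequency by a complex exponential and low-pass filtered to recover its slowly varying amplitude (frequencies, noise, and filter settings are illustrative):

```python
# Hedged sketch of complex demodulation: multiply by exp(-i*2*pi*f*t) to
# shift the diurnal band to zero frequency, then low-pass filter to keep
# only the slowly varying demodulated amplitude.
import numpy as np
from scipy.signal import butter, filtfilt

fs, days = 24.0, 40.0                                # samples/day, record length
t = np.arange(0.0, days, 1.0 / fs)
f_band = 1.0                                         # cycles/day: diurnal band
slow_amp = 1.0 + 0.3 * np.sin(2 * np.pi * t / 20.0)  # slow modulation to recover
rng = np.random.default_rng(0)
x = slow_amp * np.cos(2 * np.pi * f_band * t) + 0.2 * rng.standard_normal(t.size)

z = x * np.exp(-2j * np.pi * f_band * t)             # demodulate: band -> DC
b, a = butter(4, 0.1 / (fs / 2.0))                   # low-pass at 0.1 cycles/day
zf = filtfilt(b, a, z.real) + 1j * filtfilt(b, a, z.imag)
envelope = 2.0 * np.abs(zf)                          # demodulated amplitude
print("recovered amplitude range:",
      envelope.min().round(2), envelope.max().round(2))
```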

  13. Numerical simulation of two-dimensional heat transfer in composite bodies with application to de-icing of aircraft components

    NASA Astrophysics Data System (ADS)

    Chao, D. F. K.

    1983-11-01

    Transient numerical simulations of the de-icing of composite aircraft components by electrothermal heating were performed for a two-dimensional rectangular geometry. The implicit Crank-Nicolson formulation was used to ensure stability of the finite-difference heat conduction equations, and the phase change in the ice layer was simulated using the enthalpy method. The Gauss-Seidel point-iterative method was used to solve the system of difference equations. Numerical solutions illustrating de-icer performance for various composite aircraft structures and environmental conditions are presented, and comparisons are made with previous studies. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
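
    A minimal sketch of Crank-Nicolson time stepping for heat conduction in a two-layer composite slab (one dimension, no phase change, toy material values; a dense solve replaces the Gauss-Seidel iteration for brevity):

```python
# Hedged sketch: Crank-Nicolson stepping of 1D heat conduction across a
# two-layer slab with a heated face -- an illustrative reduction of the
# 2D composite de-icing problem described above.
import numpy as np

n, L, dt, steps = 50, 0.01, 0.5, 400           # nodes, thickness [m], s, steps
x = np.linspace(0.0, L, n)
alpha = np.where(x < L / 2, 1e-7, 8e-7)        # two layers, diffusivities [m^2/s]
dx = x[1] - x[0]
r = alpha * dt / (2.0 * dx**2)

# Assemble (I - r*Lap) T_new = (I + r*Lap) T_old with fixed-T boundaries
A = np.eye(n); B = np.eye(n)
for i in range(1, n - 1):
    A[i, i-1], A[i, i], A[i, i+1] = -r[i], 1 + 2*r[i], -r[i]
    B[i, i-1], B[i, i], B[i, i+1] =  r[i], 1 - 2*r[i],  r[i]

T = np.zeros(n); T[0] = 100.0                   # heater at one face, 100 C
for _ in range(steps):
    rhs = B @ T
    rhs[0], rhs[-1] = 100.0, 0.0                # Dirichlet boundaries
    T = np.linalg.solve(A, rhs)
print(f"mid-plane temperature after {steps * dt:.0f} s: {T[n // 2]:.1f} C")
```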

  14. Numerical algorithms for estimation and calculation of parameters in modeling pest population dynamics and evolution of resistance.

    PubMed

    Shi, Mingren; Renton, Michael

    2011-10-01

    Computational simulation models can provide a way of understanding and predicting insect population dynamics and the evolution of resistance, but the usefulness of such models depends on generating or estimating the values of key parameters. In this paper, we describe four numerical algorithms for generating or estimating key parameters used to simulate four different processes within such models. First, we describe a novel method to generate an offspring genotype table for one- or two-locus genetic models for simulating the evolution of resistance, and how this method can be extended to create offspring genotype tables for models with more than two loci. Second, we describe how we use a generalized inverse matrix to find a least-squares solution to an over-determined linear system for the estimation of parameters in probit models of kill rates; this algorithm can also be used to estimate the parameters of Freundlich adsorption isotherms. Third, we describe a simple algorithm to randomly select initial frequencies of genotypes, either without any special constraints or with some pre-selected frequencies, and we give a simple method to calculate the "stable" Hardy-Weinberg equilibrium proportions that would result from these initial frequencies. Fourth, we describe how the problem of estimating the intrinsic rate of natural increase of a population can be converted to a root-finding problem, and how the bisection algorithm can then be used to find the rate (see the sketch below). We implemented all these algorithms in MATLAB and Python; the key statements in both codes consist of only a few commands and are given in the appendices. The results of numerical experiments are also provided to demonstrate that our algorithms are valid and efficient.
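
    The fourth algorithm's idea can be shown compactly: the intrinsic rate of increase r is the root of the Euler-Lotka equation, found here by bisection on a toy life table (the survivorship and fecundity values are illustrative, not the paper's data):

```python
# Hedged sketch: the intrinsic rate of increase r solves the Euler-Lotka
# equation sum_x exp(-r*x) * l_x * m_x = 1, a root-finding problem
# amenable to bisection.
import numpy as np

ages = np.arange(1, 6)                       # age classes x (toy)
l = np.array([0.9, 0.8, 0.6, 0.3, 0.1])      # survivorship l_x (toy)
m = np.array([0.0, 2.0, 3.0, 2.0, 1.0])      # fecundity m_x (toy)

def euler_lotka(r):
    return np.sum(np.exp(-r * ages) * l * m) - 1.0

lo, hi = -1.0, 2.0                           # bracket: f(lo) > 0 > f(hi)
for _ in range(60):                          # bisection
    mid = 0.5 * (lo + hi)
    if euler_lotka(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"intrinsic rate of increase r = {0.5 * (lo + hi):.4f}")
```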

  15. Novel materials, fabrication techniques and algorithms for microwave and THz components, systems and applications

    NASA Astrophysics Data System (ADS)

    Liang, Min

    This dissertation presents the investigation of several additively manufactured components at RF and THz frequencies, as well as applications of a gradient-index-lens-based direction of arrival (DOA) estimation system and a broadband electronic beam scanning system. A polymer matrix composite method to achieve artificially controlled effective dielectric properties of 3D printing materials is also studied. Moreover, this dissertation discusses the characterization of carbon-based nanomaterials at microwave and THz frequencies, a photoconductive antenna array based terahertz time-domain spectroscopy (THz-TDS) near-field imaging system, and a compressive sensing based microwave imaging system. First, the design, fabrication, and characterization of several 3D printed components at microwave and THz frequencies are presented. These components include a 3D printed broadband Luneburg lens, a 3D printed patch antenna, a 3D printed multilayer microstrip line structure with vertical transitions, a THz all-dielectric EMXT waveguide to planar microstrip transition structure, and 3D printed dielectric reflectarrays. Second, the additively manufactured 3D Luneburg lens is employed for DOA estimation. Exploiting the special property of a Luneburg lens that every point on the surface of the lens is the focal point of a plane wave incident from the opposite side, 36 detectors are mounted around the surface of the lens to estimate the direction of arrival of a microwave signal. Direction-finding results using a correlation algorithm show that the average error is smaller than 1° over all 360° of incident angles. Third, a novel broadband electronic scanning system based on a Luneburg lens phased array structure is reported. The radiation elements of the phased array are mounted around the surface of a Luneburg lens; by controlling the phase and amplitude of only a few adjacent elements, electronic beam scanning with various radiation patterns can be easily achieved.

  16. Selecting among three-mode principal component models of different types and complexities: a numerical convex hull based method.

    PubMed

    Ceulemans, Eva; Kiers, Henk A L

    2006-05-01

    Several three-mode principal component models can be considered for the modelling of three-way, three-mode data, including the Candecomp/Parafac, Tucker3, Tucker2, and Tucker1 models. The following question then may be raised: given a specific data set, which of these models should be selected, and at what complexity (i.e. with how many components)? We address this question by proposing a numerical model selection heuristic based on a convex hull. Simulation results show that this heuristic performs almost perfectly, except for Tucker3 data arrays with at least one small mode and a relatively large amount of error.
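
    A minimal sketch of a convex-hull-based selection rule of this general kind: keep (complexity, fit) points on the upper convex boundary and pick the one with the largest scree ratio. The data and the exact rule are illustrative, not the paper's procedure:

```python
# Hedged sketch: generic CHull-style model selection on toy (complexity, fit)
# pairs -- upper convex hull, then the point with the largest scree ratio.
import numpy as np

complexity = np.array([10, 20, 30, 40, 50, 60])          # parameters per model (toy)
fit        = np.array([0.50, 0.74, 0.86, 0.90, 0.92, 0.93])  # fit values (toy)

# Upper convex hull via a monotone-chain slope test
hull = [0]
for i in range(1, len(complexity)):
    while len(hull) >= 2:
        a, b = hull[-2], hull[-1]
        s_ab = (fit[b] - fit[a]) / (complexity[b] - complexity[a])
        s_bi = (fit[i] - fit[b]) / (complexity[i] - complexity[b])
        if s_bi >= s_ab:          # b lies on/below the chord: drop it
            hull.pop()
        else:
            break
    hull.append(i)

# Scree ratio (gain before / gain after) for interior hull points
best, best_ratio = None, -np.inf
for j in range(1, len(hull) - 1):
    a, b, c = hull[j - 1], hull[j], hull[j + 1]
    ratio = ((fit[b] - fit[a]) / (complexity[b] - complexity[a])) / \
            ((fit[c] - fit[b]) / (complexity[c] - complexity[b]))
    if ratio > best_ratio:
        best, best_ratio = b, ratio
print("selected model complexity:", complexity[best])
```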

  17. NOGAPS (Numerical Operational Global Atmospheric Prediction System) Verification Using Spectral Components.

    DTIC Science & Technology

    1983-03-01

    ... particularly large flux across the inversion for stratus-capped PBLs. A significant modification to the GCM is the constraint of the PBL to remain ... study of operational numerical prediction models. Future improvement in NOGAPS forecast skill, particularly in the medium range, is dependent on ...

  18. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent components analysis (ICA) based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported [1,2,3] unmixing algorithm. Computer performance data are also included. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. Study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  19. Numerical simulation study of the dynamical behavior of the Niedermayer algorithm

    NASA Astrophysics Data System (ADS)

    Girardi, D.; Branco, N. S.

    2010-04-01

    We calculate the dynamic critical exponent for the Niedermayer algorithm applied to the two-dimensional Ising and XY models, for various values of the free parameter E0. For E0 = -1 we regain the Metropolis algorithm, and for E0 = 1 we regain the Wolff algorithm. For -1 < E0 < 1, we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L̃, which depends on E0. For L > L̃, the Niedermayer algorithm is equivalent to the Metropolis one, i.e., they have the same dynamic exponent. For E0 > 1, the autocorrelation time is always greater than for E0 = 1 (Wolff) and, more importantly, it also grows faster than a power of L. Therefore, we show that the best choice of cluster algorithm is the Wolff one, when compared against the Niedermayer generalization. We also obtain the dynamic behavior of the Wolff algorithm: although not conclusively, we propose a scaling law for the dependence of the autocorrelation time on L.
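
    For reference, a minimal sketch of the E0 = 1 limit, i.e., the Wolff single-cluster update for the 2D Ising model (the general Niedermayer bond probability is not reproduced here):

```python
# Hedged sketch: Wolff single-cluster update for the 2D Ising model,
# the E0 = 1 limit of the Niedermayer algorithm discussed above.
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.44                          # lattice size, near-critical coupling
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta)           # bond activation probability

def wolff_step(s):
    seed = tuple(rng.integers(0, L, 2))
    cluster, stack, s0 = {seed}, [seed], s[seed]
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            ny, nx = ny % L, nx % L         # periodic boundaries
            if (ny, nx) not in cluster and s[ny, nx] == s0 and rng.random() < p_add:
                cluster.add((ny, nx))
                stack.append((ny, nx))
    for y, x in cluster:                    # flip the whole cluster
        s[y, x] = -s0
    return len(cluster)

sizes = [wolff_step(spins) for _ in range(200)]
print("mean cluster size:", np.mean(sizes), " magnetization:", spins.mean())
```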

  20. Essentially entangled component of multipartite mixed quantum states, its properties, and an efficient algorithm for its extraction

    NASA Astrophysics Data System (ADS)

    Akulin, V. M.; Kabatiansky, G. A.; Mandilara, A.

    2015-10-01

    Using geometric means, we first consider a decomposition of the density matrix of a finite-dimensional multipartite quantum system into two density matrices: a separable one, also known as the best separable approximation, and an essentially entangled one, which contains no product-state components. We show that this convex decomposition can be achieved in practice with the help of a linear programming algorithm that, in the general case, scales polynomially with the system dimension. We illustrate the implementation of the algorithm with an example of a composite system of dimension 12 that undergoes a loss of coherence due to classical noise, and we trace the time evolution of its essentially entangled component. We suggest a "geometric" description of entanglement dynamics and demonstrate how it explains the well-known phenomena of sudden death and revival of multipartite entanglement. Although the statistical weight of the essentially entangled component decreases with time, its average entanglement content is not affected by the coherence loss.

  1. Numerical model of a graphene component for the sensing of weak electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Nasswettrova, A.; Fiala, P.; Nešpor, D.; Drexler, P.; Steinbauer, M.

    2015-05-01

    The paper discusses a numerical model and provides an analysis of a graphene coaxial line suitable for sub-micron sensors of magnetic fields. In relation to the presented concept, the target areas and disciplines include biology, medicine, prosthetics, and microscopic solutions for modern actuators or SMART elements. The proposed numerical model is based on an analysis of a periodic structure with high repeatability, and it exploits a graphene polymer having a basic dimension in nanometers. The model simulates the actual random motion in the structure as the source of spurious signals and considers pulse propagation along the structure; furthermore, it examines whether and how a pulse is distorted at the beginning of the line for the various line terminations. The results of the analysis are necessary for further use of the designed sensing devices based on graphene structures.

  2. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory accesses directly affects the running time of labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and, during a raster scan, selects only the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory accesses. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597

  4. Differential evolution algorithm based photonic structure design: numerical and experimental verification of subwavelength λ/5 focusing of light.

    PubMed

    Bor, E; Turduev, M; Kurt, H

    2016-08-01

    Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
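
    A minimal sketch of the optimization loop, using SciPy's differential evolution to place a few scatterers so that a crude interference sum is maximized at a focal point; the toy cost stands in for the paper's FDTD-computed objective:

```python
# Hedged sketch: differential evolution placing scatterers to maximize
# intensity at a focal point. The cost uses a crude interference sum
# (plane wave in, scattered paths out) instead of an FDTD simulation.
import numpy as np
from scipy.optimize import differential_evolution

wavelength = 1.0
k = 2.0 * np.pi / wavelength
focus = np.array([6.0, 0.0])                    # desired focal point (toy)
n_scatterers = 6

def cost(params):
    pts = params.reshape(n_scatterers, 2)
    r = np.linalg.norm(pts - focus, axis=1) + 1e-9
    path = pts[:, 0] + r                        # incident (along +x) plus scattered path
    field = np.sum(np.exp(1j * k * path) / np.sqrt(r))
    return -np.abs(field) ** 2                  # maximize intensity at the focus

bounds = [(-2.0, 2.0)] * (2 * n_scatterers)     # scatterer (x, y) coordinates
result = differential_evolution(cost, bounds, seed=0, maxiter=300, tol=1e-8)
print("intensity at focus:", -result.fun)
```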

  7. Numerical Modeling for Hole-Edge Cracking of Advanced High-Strength Steels (AHSS) Components in the Static Bend Test

    NASA Astrophysics Data System (ADS)

    Kim, Hyunok; Mohr, William; Yang, Yu-Ping; Zelenak, Paul; Kimchi, Menachem

    2011-08-01

    Numerical modeling of local formability, such as hole-edge cracking and shear fracture in bending of AHSS, is one of the challenging issues simulation engineers face in predicting and evaluating the stamping and crash performance of materials. This is because continuum-mechanics-based finite element method (FEM) modeling requires, in addition to the material flow stress data, further input data, "failure criteria," to predict the local formability limit of materials. This paper presents a numerical modeling approach for predicting hole-edge failures during static bend tests of AHSS structures. A local-strain-based failure criterion and a stress-triaxiality-based failure criterion were developed and implemented in the LS-DYNA simulation code to predict hole-edge failures in component bend tests. The holes were prepared using two different methods: mechanical punching and water-jet cutting. In the component bend tests, the water-jet-trimmed hole showed delayed fracture at the hole edge, while the mechanically punched hole showed early fracture as the bending angle increased. In comparing the numerical modeling and test results, the load-displacement curve, the displacement at the onset of cracking, and the final crack shape/length were used. Both failure criteria enable the numerical model to differentiate between the local formability limits of mechanically punched and water-jet-trimmed holes. The failure criteria and static bend test developed here are useful for evaluating the local formability limit at the structural component level for automotive crash tests.

  8. Transient Numerical Modeling of the Combustion of Bi-Component Liquid Droplets: Methanol/Water Mixture

    NASA Technical Reports Server (NTRS)

    Marchese, A. J.; Dryer, F. L.

    1994-01-01

    This study shows that liquid mixtures of methanol and water are attractive candidates for microgravity droplet combustion experiments and associated numerical modeling. The gas phase chemistry for these droplet mixtures is conceptually simple, well understood, and substantially validated. In addition, the thermodynamic and transport properties of the liquid mixture have been well characterized. Furthermore, the results obtained in this study predict that the extinction of these droplets may be observable in ground-based drop-tower experiments. Such experiments will be conducted shortly, followed by space-based experiments utilizing the NASA FSDC and DCE experiments.

  9. Impact of multi-component diffusion in turbulent combustion using direct numerical simulations

    DOE PAGES

    Bruno, Claudio; Sankaran, Vaidyanathan; Kolla, Hemanth; ...

    2015-08-28

    This study presents the results of DNS of a partially premixed turbulent syngas/air flame at atmospheric pressure. The objective was to assess the importance and possible effects of molecular transport on flame behavior and structure. To this purpose, DNS were performed with two proprietary DNS codes and with three different molecular diffusion transport models: fully multi-component, mixture-averaged, and unity Lewis number for all species.
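
    The mixture-averaged model named above assigns each species a single effective diffusivity computed from the binary coefficients, D_k,mix = (1 - Y_k) / Σ_{j≠k} X_j / D_jk; a sketch with toy values:

```python
# Hedged sketch: mixture-averaged diffusion coefficients from binary
# diffusivities, D_k,mix = (1 - Y_k) / sum_{j != k} X_j / D_jk.
# Mole fractions, molar masses, and binary diffusivities are toy numbers.
import numpy as np

species = ["CO", "H2", "O2", "N2"]
X = np.array([0.15, 0.15, 0.14, 0.56])        # mole fractions (toy)
W = np.array([28.0, 2.0, 32.0, 28.0])         # molar masses [g/mol]
Y = X * W / np.sum(X * W)                     # mass fractions

# Symmetric binary diffusivity matrix D_jk [cm^2/s] (toy values)
D = np.array([[0.0, 0.7, 0.2, 0.2],
              [0.7, 0.0, 0.8, 0.8],
              [0.2, 0.8, 0.0, 0.2],
              [0.2, 0.8, 0.2, 0.0]])

for k, name in enumerate(species):
    others = [j for j in range(len(species)) if j != k]
    D_mix = (1.0 - Y[k]) / np.sum(X[others] / D[others, k])
    print(f"D_mix({name}) = {D_mix:.3f} cm^2/s")
```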

  10. Numerical modeling evapotranspiration flux components in shrub-encroached grassland in Inner Mongolia, China

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Li, Xiao-Yan; Huang, Jie-Yu; Yang, Wen-Xin; Wang, Qi-Dan; Xu, Kun; Zheng, Xiao-Ran

    2016-04-01

    Shrub encroachment into arid grasslands occurs around the world. However, little work on shrub encroachment has been conducted in China, and its hydrological implications remain poorly investigated in arid and semiarid regions. This study combined a two-source energy balance model with a Newton-Raphson iteration scheme to simulate evapotranspiration (ET) and its components in shrub-encroached (15.4% shrub coverage) grassland in Inner Mongolia. Good agreement between the modelled ET flux and Bowen-ratio measurements, together with relative insensitivity of the component estimates to uncertainties/errors in the assigned model parameters and measured input variables, showed that the model is feasible for simulating evapotranspiration flux components in shrub-encroached grassland. The transpiration fraction (T/ET) accounted for 58 ± 17% during the growing season. Under the designed extreme shrub-encroachment scenarios (maximum and minimum coverage), the contribution of shrubs to local plant transpiration (Tshrub/T) was 20.06 ± 7% during the growing season. Canopy conductance was the main controlling factor of T/ET: at the diurnal scale, shortwave solar radiation was the direct influential factor, while at the seasonal scale, leaf area index (LAI) and soil water content were the direct influential factors. We find that the seasonal variation of Tshrub/T correlates well with the ratio LAIshrub/LAI, and that rainfall characteristics widened the difference between the contributions of shrubs and herbs to ecosystem evapotranspiration.

  11. Artificial algae algorithm with multi-light source for numerical optimization and applications.

    PubMed

    Uymaz, Sait Ali; Tezel, Gulay; Yel, Esra

    2015-12-01

    Artificial algae algorithm (AAA), one of the recently developed bio-inspired optimization algorithms, was introduced by inspiration from the living behaviors of microalgae. In AAA, the modification of the algal colonies, i.e., exploration and exploitation, is provided by a helical movement. In this study, AAA was modified by implementing multi-light-source movement, establishing the artificial algae algorithm with multi-light source (AAAML) version. In this new version, we propose selecting a different light source for each dimension modified by the helical movement, to achieve a stronger balance between exploration and exploitation. These light sources are selected by a tournament method and differ from one another, which yields different solutions in the search space. The best of the three light sources provides orientation toward the better region of the search space; diversity in the search space is obtained with the worst light source; and the remaining light source improves the balance. To evaluate the performance of AAA with the new proposed operators (AAAML), experiments were performed on two different sets: the IEEE-CEC'13 benchmark set, and real-world optimization problems from the IEEE-CEC'11. To verify the effectiveness and efficiency of the proposed algorithm, the results were compared with those of other state-of-the-art hybrid and modified algorithms. Experimental results showed that the multi-light-source movement (MLS) increases the success of the AAA.

  13. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception of a product and typically consist of a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaged in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), obtained by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as cell values when constructing the NDSM. Genetic algorithms are then used to cluster the Kansei NDSM and find optimum clusters, and the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  14. An efficient numerical algorithm for computing densely distributed positive interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan

    2017-03-01

    We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell's equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi-Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.
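
    For small problems, a QEP of this form can be solved by companion linearization, the dense textbook route (in contrast to the paper's large-scale Jacobi-Davidson solver); a sketch with toy matrices:

```python
# Hedged sketch: a small quadratic eigenvalue problem (lam^2 M + lam C + K) x = 0
# solved by companion linearization A z = lam B z with z = [x; lam x].
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5
M = np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T    # toy symmetric matrices
K = rng.standard_normal((n, n)); K = K + K.T

A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               -C       ]])
B = np.block([[np.eye(n),        np.zeros((n, n))],
              [np.zeros((n, n)), M              ]])
lam = eig(A, B, right=False)                    # 2n eigenvalues of the QEP
idx = np.argsort(np.abs(lam))
print("eigenvalues nearest the origin:", lam[idx][:4])
```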

  15. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for the coarse estimate (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and becomes even more efficient as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
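
    A minimal sketch of the coarse-plus-fine idea: an FFT peak supplies a coarse estimate with resolution fs/N, and zero-crossing timing refines it (this illustrates the generic scheme, not the paper's modified zero-crossing technique):

```python
# Hedged sketch: coarse frequency estimate from the FFT peak, refined by
# counting zero crossings between the first and last crossing instants.
import numpy as np

fs, n, f_true = 1000.0, 4096, 37.37           # sample rate [Hz], samples, true freq
t = np.arange(n) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f_true * t) + 0.05 * rng.standard_normal(n)

# Coarse step: FFT peak, resolution fs/n
spec = np.abs(np.fft.rfft(x))
f_coarse = (np.argmax(spec[1:]) + 1) * fs / n  # skip the DC bin

# Fine step: half-periods between first and last zero crossing
s = np.signbit(x).astype(int)
idx = np.nonzero(np.diff(s))[0]                # samples just before a crossing
n_half_periods = len(idx) - 1
span = (idx[-1] - idx[0]) / fs                 # time between first and last crossing
f_fine = n_half_periods / (2.0 * span)
print(f"coarse: {f_coarse:.3f} Hz, fine: {f_fine:.4f} Hz, true: {f_true} Hz")
```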

  16. Research on Numerical Algorithms for the Three Dimensional Navier-Stokes Equations. I. Accuracy, Convergence & Efficiency.

    DTIC Science & Technology

    1979-09-01

    ithm for Computational Fluid Dynamics," Ph.D. Dissertation, Univ. of Tennessee, Report ESM 78-1, 1978. 18. Thames, F. C., Thompson , J . F ., and Mastin...C. W., "Numerical Solution of the Navier-Stokes Equations for Arbitrary Two-Dimensional Air- foils," NASA SP-347, 1975. 19. Thompson , J . F ., Thames...Number of Arbitrary Two-Dimensional Bodies," NASA CR-2729, 1976. 20. Thames, F. C., Thompson , J . F ., Mastin, C. W., and Walker, R. L., "Numerical

  17. Theoretical and numerical investigation of diffusive instabilities in multi-component alloys

    NASA Astrophysics Data System (ADS)

    Lahiri, Arka; Choudhury, Abhik

    2017-02-01

    Diffusive instabilities of the Mullins-Sekerka type are one of the principal mechanisms through which microstructures form during solidification. In this study, we perform a linear stability analysis for the perturbation of a planar interface and derive analytical expressions characterizing the dispersion behavior in multi-component alloys under directional and isothermal solidification conditions. Subsequently, we confirm our calculations using phase-field simulations for different choices of the inter-diffusivity matrix. Thereafter, we highlight how the dispersion curves change with the diffusivity matrix and the velocity. Finally, we present conditions for absolute stability of a planar interface under directional solidification conditions.

  18. Scanning of wind turbine upwind conditions: numerical algorithm and first applications

    NASA Astrophysics Data System (ADS)

    Calaf, Marc; Cortina, Gerard; Sharma, Varun; Parlange, Marc B.

    2014-11-01

    Wind turbines still obtain in-situ meteorological information by means of traditional wind-vane and cup anemometers installed on the turbine's nacelle, right behind the blades. This has two important drawbacks: (1) turbine misalignment with the mean wind direction is common, and energy losses result; (2) near-blade monitoring does not leave time to readjust the wind turbine to incoming turbulence gusts. A solution is to install wind Lidar devices on the turbine's nacelle. This technique is currently under development as an alternative to traditional in-situ wind anemometry because it can measure the wind vector at substantial distances upwind. However, at what upwind distance should the atmosphere be interrogated? A new flexible wind-turbine algorithm for large-eddy simulations of wind farms that allows this question to be answered will be presented. The new wind turbine algorithm corrects the turbines' yaw misalignment with the changing wind in a timely manner. The upwind scanning flexibility of the algorithm also allows the wind vector and turbulent kinetic energy to be tracked as they approach the wind turbine's rotor blades. Results will illustrate the spatiotemporal evolution of the wind vector and the turbulent kinetic energy as the incoming flow approaches the wind turbine under different atmospheric stability conditions. Results will also show that the available atmospheric wind power is larger during daytime periods, at the cost of increased variance.

  19. All-electron formalism for total energy strain derivatives and stress tensor components for numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias

    2015-05-01

    We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids that allow accurate results to be produced. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include a detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.
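
    The finite-difference validation step can be illustrated on a toy energy function: compare the analytic strain derivative with central differences (a quadratic elastic energy stands in for the DFT total energy):

```python
# Hedged sketch: validate an analytic stress (dE/d-strain per volume)
# against central finite differences, using a toy quadratic elastic energy
# in place of a density-functional total energy.
import numpy as np

V0 = 1.0                                     # reference cell volume (toy)
C = 100.0                                    # toy elastic constant

def energy(eps):
    """Toy total energy as a function of the 3x3 strain tensor."""
    return 0.5 * C * V0 * np.tensordot(eps, eps)

def stress_analytic(eps):
    return C * eps                           # (1/V0) * dE/d(eps) for the toy energy

eps = np.diag([0.01, -0.005, 0.002])         # sample strain state
sigma_fd = np.zeros((3, 3))
h = 1e-6
for i in range(3):
    for j in range(3):
        d = np.zeros((3, 3)); d[i, j] = h
        sigma_fd[i, j] = (energy(eps + d) - energy(eps - d)) / (2 * h * V0)
print("max |analytic - finite difference|:",
      np.abs(stress_analytic(eps) - sigma_fd).max())
```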

  20. Repair, Evaluation, Maintenance, and Rehabilitation Research Program: Explicit Numerical Algorithm for Modeling Incompressible Approach Flow

    DTIC Science & Technology

    1989-03-01

    ... by Colorado State University, Fort Collins, CO, for US Army Engineer Waterways Experiment Station, Vicksburg, MS. Thompson, J. F. 1983 (Mar). "A ..." ... Waterways Experiment Station, Vicksburg, MS. Thompson, J. F., and Bernard, R. S. 1985 (Aug). "WESSEL: Code for Numerical Simulation of Two-Dimensional Time ...

  1. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  2. A component-level failure detection and identification algorithm based on open-loop and closed-loop state estimators

    NASA Astrophysics Data System (ADS)

    You, Seung-Han; Cho, Young Man; Hahn, Jin-Oh

    2013-04-01

    This study presents a component-level failure detection and identification (FDI) algorithm for a cascade mechanical system comprising a plant driven by an actuator unit. The novelty of the FDI algorithm presented here is that it can discriminate among failures occurring in the actuator unit, in the sensor measuring the actuator unit's output, and in the plant driven by the actuator unit. The proposed FDI algorithm exploits the measurement of the actuator unit output, together with estimates of it generated by open-loop (OL) and closed-loop (CL) estimators, to enable FDI at the component level. In this study, the OL estimator is designed based on system identification of the actuator unit. The CL estimator, which is guaranteed to be stable against variations in the plant, is synthesized from the dynamics of the entire cascade system. The viability of the proposed algorithm is demonstrated using a hardware-in-the-loop simulation (HILS), which shows that it can detect and identify target failures reliably in the presence of plant uncertainties.
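
    One plausible form of the residual logic such a scheme might use, comparing the sensor reading against the OL and CL estimates; the thresholds and the decision table below are hypothetical illustrations, not the paper's rules:

```python
# Hedged sketch: residuals against open-loop (OL) and closed-loop (CL)
# estimates of the actuator output. The threshold and decision table are
# hypothetical illustrations of component-level fault isolation.
def classify_failure(y_meas, y_ol, y_cl, tol=0.1):
    r_ol = abs(y_meas - y_ol)        # residual vs. actuator-model estimate
    r_cl = abs(y_meas - y_cl)        # residual vs. whole-system estimate
    if r_ol <= tol and r_cl <= tol:
        return "no failure detected"
    if r_ol > tol and r_cl > tol:
        return "actuator or sensor failure suspected"
    if r_ol <= tol:
        return "plant failure suspected (actuator model still tracks)"
    return "sensor failure suspected (system-level estimate still tracks)"

# Toy usage: measurement deviates from the OL estimate only
print(classify_failure(y_meas=1.30, y_ol=1.00, y_cl=1.28))
```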

  3. Two-component few-fermion mixtures in a one-dimensional trap: Numerical versus analytical approach

    NASA Astrophysics Data System (ADS)

    Brouzos, Ioannis; Schmelcher, Peter

    2013-02-01

    We explore a few-fermion mixture consisting of two repulsively interacting components confined in a one-dimensional harmonic trap. Different scenarios of population imbalance are investigated, ranging from the completely imbalanced case, where the physics of a single impurity in the Fermi sea is discussed, to partially imbalanced and equal-population configurations. For the numerical calculations the multiconfigurational time-dependent Hartree method is employed, extending its application to few-fermion systems. Apart from numerical calculations, we generalize our ansatz for a correlated pair wave function, proposed recently for bosons [I. Brouzos and P. Schmelcher, Phys. Rev. Lett. 108, 045301 (2012)], to mixtures of fermions. From weak to strong coupling between the components, the energies, densities and correlation properties of one-dimensional systems change vastly, with an upper limit set by fermionization, where for infinite repulsion all fermions can be mapped to identical ones. The numerical and analytical treatments are in good agreement with respect to the description of this crossover. We show that for equal populations each pair of different-component atoms splits into two single peaks in the density, while for partial imbalance additional peaks and plateaus arise for very strong interaction strengths. The case of a single-impurity atom shows rich behavior of the energy and density as we approach fermionization, and is directly connected to recent experiments [G. Zürn et al., Phys. Rev. Lett. 108, 075303 (2012)].

  4. Numerical simulation of fluid-structure interaction on flexible PCB with multiple ball grid array components

    NASA Astrophysics Data System (ADS)

    Hooi, Lim Chong; Abdullah, Mohd. Zulkifly; Azid, Ishak Abdul

    2017-03-01

    The demand for flexibility and light weight in some electronic devices has triggered the replacement of rigid printed circuit boards (RPCBs) with flexible printed circuit boards (FPCBs). However, the deflection and von Mises stress of an FPCB caused by air flow are far more critical than those of an RPCB. In the present study, the effects of the Reynolds number (Re) and of the number of ball grid array (BGA) packages attached to the FPCB on the FPCB's deflection and von Mises stress are investigated. The numerical simulation was performed using FLUENT and ABAQUS, coupled online by the Mesh-based Parallel Code Coupling Interface (MpCCI). The results show that the maximum dimensionless deflection (deflection divided by characteristic length) and von Mises stress occur at the maximum Re in Case E. The findings indicate that both Re and the number of BGA packages have a major effect on the responses; however, the effect of Re is larger than that of the number of BGA packages attached to the FPCB. Thus, both factors should be considered when designing an FPCB that is exposed to a flow environment and extreme operating conditions.

  5. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been used successfully for signal extraction tasks in the field of biomedical signal processing. We studied the performance of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratio (SIR) were measured: the first averages over all estimated components, while the second is based solely on the fetal trace. The computation time needed to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation time.
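
    The SIR score used for ranking the algorithms can be computed as below. This is a generic, scale-invariant definition (least-squares fit of the estimated trace onto the known source), assumed here rather than taken from the paper, and the traces are synthetic.

```python
import numpy as np

def sir_db(true_source, estimated_source):
    """Signal-to-interference ratio in dB between a known source trace and
    its ICA estimate (scale-invariant via a least-squares fit)."""
    s = true_source - true_source.mean()
    y = estimated_source - estimated_source.mean()
    gain = np.dot(y, s) / np.dot(s, s)     # project estimate onto the source
    interference = y - gain * s
    return 10 * np.log10(np.sum((gain * s) ** 2) / np.sum(interference ** 2))

rng = np.random.default_rng(0)
s = np.sin(np.linspace(0, 20 * np.pi, 2000))      # fetal-like source trace
y = 0.8 * s + 0.05 * rng.standard_normal(2000)    # imperfect ICA estimate
print(f"SIR = {sir_db(s, y):.1f} dB")
```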

  6. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
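
    A naive version of the underlying quadrature can be sketched as follows. This is the plain trapezoidal-rule Bromwich sum with step Delta(omega) = pi/(mT), evaluated directly rather than through the FFT/FHT machinery of the paper; sigma, m and the truncation N are illustrative accuracy knobs.

```python
import numpy as np

def invert_laplace(F, t, sigma=0.2, T=10.0, m=4, N=20000):
    """Trapezoidal-rule Bromwich inversion, f(t) real, step dw = pi/(m*T)."""
    dw = np.pi / (m * T)
    w = dw * np.arange(N)
    Fw = F(sigma + 1j * w)                    # samples along the contour
    phases = np.exp(1j * np.outer(t, w))      # e^{i w t} for each t
    terms = (phases * Fw).real
    terms[:, 0] *= 0.5                        # trapezoidal half-weight at w=0
    return np.exp(sigma * np.asarray(t)) / np.pi * dw * terms.sum(axis=1)

t = np.array([0.5, 1.0, 2.0, 4.0])
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t)
print(np.column_stack([approx, np.exp(-t)]))  # compare with exact e^{-t}
```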

  7. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrical conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz basis property of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting the multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed.

  8. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
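
    A minimal trust-region loop of the generic kind studied here, run with a deliberately noisy gradient, might look like the following sketch. The step rule (a steepest-descent step of trust-region length) and all constants are illustrative, not the paper's algorithm.

```python
import numpy as np

# Sketch: trust-region step acceptance driven by the ratio of actual to
# predicted reduction; `grad` is a cheap, noisy gradient approximation.
def trust_region_minimize(f, grad, x, delta=1.0, tol=1e-6, max_iter=200):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = -delta * g / np.linalg.norm(g)    # steepest-descent step
        predicted = -np.dot(g, step)             # linear-model reduction
        rho = (f(x) - f(x + step)) / predicted   # actual / predicted
        if rho > 0.1:                            # accept the step
            x = x + step
        # expand or shrink the trust region based on model agreement
        delta *= 2.0 if rho > 0.75 else (0.5 if rho < 0.25 else 1.0)
    return x

rng = np.random.default_rng(1)
f = lambda x: np.sum((x - 3.0) ** 2)
noisy_grad = lambda x: 2 * (x - 3.0) * (1 + 0.3 * rng.standard_normal(x.shape))
print(trust_region_minimize(f, noisy_grad, np.zeros(2)))  # approaches [3, 3]
```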

  9. Properties of the numerical algorithms for problems of quantum information technologies: Benefits of deep analysis

    NASA Astrophysics Data System (ADS)

    Chernyavskiy, Andrey; Khamitov, Kamil; Teplov, Alexey; Voevodin, Vadim; Voevodin, Vladimir

    2016-10-01

    In recent years, quantum information technologies (QIT) have developed rapidly, although their implementation faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part covers features such as sequential and parallel complexity, macro structure, and a visual information graph. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability, and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvements.
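
    The one-qubit-transformation kernel mentioned above is small enough to show in full. A minimal sketch, assuming amplitudes ordered so that the target qubit corresponds to the middle axis of the reshaped state (conventions vary between codes):

```python
import numpy as np

# Apply a 2x2 gate to one qubit of an n-qubit state vector (2**n amplitudes).
def apply_gate(state, gate, target, n):
    psi = state.reshape(2**target, 2, 2**(n - target - 1))
    psi = np.einsum('ab,ibj->iaj', gate, psi)   # contract gate on target axis
    return np.ascontiguousarray(psi).reshape(-1)

n = 20                                   # 2**20 amplitudes (~16 MB)
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # |00...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = apply_gate(state, H, target=3, n=n)
print(np.linalg.norm(state))             # stays 1.0 (unitarity preserved)
```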

  10. Implementation and testing of a real-time 3-component phase picking program for Earthworm using the CECM algorithm

    NASA Astrophysics Data System (ADS)

    Baker, B. I.; Friberg, P. A.

    2014-12-01

    Modern seismic networks typically deploy three-component (3C) sensors, but still fail to utilize all of the information available in the seismograms when performing automated phase picking for real-time event location. In most cases a variation on a short-term over long-term average (STA/LTA) threshold detector is used for picking, and an association program is then used to assign phase types to the picks. However, the 3C waveforms from an earthquake contain an abundance of information related to the P and S phases in both their polarization and energy partitioning. An approach that has been overlooked and has demonstrated encouraging results is the Component Energy Comparison Method (CECM) of Nagano et al., published in Geophysics in 1989. CECM is well suited to real-time use because the calculation is not computationally intensive. Furthermore, the CECM method has fewer tuning variables (3) than traditional pickers in Earthworm such as the Rex Allen algorithm (N=18) or even the Anthony Lomax Filter Picker module (N=5). In addition to computing the CECM detector, we study the detector sensitivity by rotating the signal into principal components, as well as estimating the P phase onset from a curvature function describing the CECM rather than from the CECM itself. We present our results implementing this algorithm in a real-time module for Earthworm and show the improved phase picks as compared to the traditional single-component pickers used in Earthworm.
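
    The component-energy idea can be sketched as below. This is an illustrative vertical-to-horizontal energy ratio in a sliding window, in the spirit of CECM but not Nagano et al.'s exact formulation; the window length and threshold stand in for the method's tuning variables.

```python
import numpy as np

# Illustrative component-energy comparison: P-wave energy concentrates on
# the vertical component (z), S-wave energy on the horizontals (n, e).
def energy_ratio(z, n, e, win=50):
    """Sliding-window ratio of vertical to total energy."""
    kernel = np.ones(win)
    ez = np.convolve(z**2, kernel, mode='same')
    eh = np.convolve(n**2 + e**2, kernel, mode='same')
    return ez / (ez + eh + 1e-20)   # high near P onset, drops at S onset

# A P pick could then be taken where the ratio (or its curvature, as the
# abstract suggests) first exceeds a tuned threshold.
rng = np.random.default_rng(0)
z = rng.standard_normal(1000)
z[400:] += 5 * rng.standard_normal(600)          # synthetic P arrival
n_, e = rng.standard_normal(1000), rng.standard_normal(1000)
print(energy_ratio(z, n_, e).max())
```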

  11. Numerical analysis of a smart composite material mechanical component using an embedded long period grating fiber sensor

    NASA Astrophysics Data System (ADS)

    Savastru, Dan; Miclos, Sorin; Savastru, Roxana; Lancranjan, Ion I.

    2015-05-01

    Results obtained by FEM analysis of a smart mechanical part manufactured of reinforced composite materials with embedded long period grating fiber sensors (LPGFS) used for operation monitoring are presented. Smart fiber-reinforced composite materials are of fundamental importance across a broad range of industrial applications, such as the aerospace industry. The main purpose of the numerical analysis is an improved final design of composite mechanical components, providing feedback useful for further automation of the whole system. The numerical analysis points to a correlation between the internal mechanical loads applied to the LPGFS by the composite material and the peak wavelength shifts of the NIR absorption bands. One main idea of the analysis relies on the observation that an LPGFS embedded inside a composite material undergoes mechanical loads created by the micro-scale roughness of the composite fiber network. The effect of this mechanical load is a bending of the LPGFS. The shift towards the IR and the broadening of the absorption bands appearing in the LPGFS transmission spectra are modeled according to this observation using the coupled mode approach.

  12. Algorithm for direct numerical simulation of emulsion flow through a granular material

    NASA Astrophysics Data System (ADS)

    Zinchenko, Alexander Z.; Davis, Robert H.

    2008-08-01

    A multipole-accelerated 3D boundary-integral algorithm capable of modelling an emulsion flow through a granular material by direct multiparticle-multidrop simulations in a periodic box is developed and tested. The particles form a random arrangement at high volume fraction rigidly held in space (including the case of an equilibrium packing in mechanical contact). Deformable drops (with non-deformed diameter comparable with the particle size) squeeze between the particles under a specified average pressure gradient. The algorithm includes recent boundary-integral desingularization tools especially important for drop-solid and drop-drop interactions, the Hebeker representation for solid particle contributions, and unstructured surface triangulations with fixed topology. Multipole acceleration, with two levels of mesh node decomposition (entire drop/solid surfaces and "patches"), is a significant improvement over schemes used in previous, purely multidrop simulations; it remains efficient at very high resolutions (10^4-10^5 triangular elements per surface) and has no lower limitation on the number of particles or drops. Such resolutions are necessary in the problem to alleviate lubrication difficulties, especially for near-critical squeezing conditions, as well as ~10^4 time steps and an iterative solution at each step, both for contrast and matching viscosities. Examples are shown for squeezing of 25-40 drops through an array of 9-14 solids, with the total volume fraction of 70% for particles and drops. The flow rates for the drop and continuous phases are calculated. Extensive convergence testing with respect to program parameters (triangulation, multipole truncation, etc.) is made.

  13. An efficient algorithm for numerical computations of continuous densities of states

    NASA Astrophysics Data System (ADS)

    Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.

    2016-06-01

    In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the dependence of the results on the algorithm parameters is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling of the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results agree perfectly with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which has so far been out of reach even on supercomputers with importance sampling approaches due to strong metastabilities developed at the pseudo-critical coupling of the system, was performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first-order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.

  14. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The simultaneous solution of the velocity and pressure fields is achieved by means of a projection method. The numerical solution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.

  15. New Design Methods and Algorithms for Multi-component Distillation Processes

    SciTech Connect

    2009-02-01

    This factsheet describes a research project whose main goal is to develop methods and software tools for the identification and analysis of optimal multi-component distillation configurations for reduced energy consumption in industrial processes.

  16. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    NASA Astrophysics Data System (ADS)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín.

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to the BI-RADS score, and the mode of their evaluations was considered ground truth. Results show better agreement between the algorithm and the ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44), which suggests the method may be used for BI-RADS classification but requires better training.
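
    The described scheme (one PCA space per BI-RADS class, assignment to the "closest" space) can be sketched as follows, with synthetic histograms standing in for the mammogram data and reconstruction error assumed as the closeness measure:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_bins, n_per_class = 64, 40

# One PCA subspace per BI-RADS category, fit on that class's histograms.
classes = {}
for label in range(4):
    center = rng.random(n_bins)
    X = center + 0.05 * rng.standard_normal((n_per_class, n_bins))
    classes[label] = PCA(n_components=5).fit(X)

def classify(histogram):
    """Assign the class whose PCA subspace reconstructs the histogram best."""
    errors = {}
    for label, pca in classes.items():
        recon = pca.inverse_transform(pca.transform(histogram[None, :]))
        errors[label] = np.linalg.norm(histogram - recon[0])
    return min(errors, key=errors.get)

test = rng.random(n_bins)
print(classify(test))
```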

  17. Improving Multi-Component Maintenance Acquisition with a Greedy Heuristic Local Algorithm

    DTIC Science & Technology

    2013-04-01

    need to improve the decision making process for system sustainment including maintenance, repair, and overhaul (MRO) operations and the acquisition of...MRO parts. To help address the link between sustainment policies and acquisition, this work develops a greedy heuristic-based local search algorithm to...concerns, there is a need to improve the decision making process for system sustainment, including maintenance, repair, and overhaul (MRO

  18. Three-Dimensional Finite Element Based Numerical Simulation of Machining of Thin-Wall Components with Varying Wall Constraints

    NASA Astrophysics Data System (ADS)

    Joshi, Shrikrishna Nandkishor; Bolar, Gururaj

    2016-06-01

    Control of part deflection and deformation during machining of low-rigidity thin-wall components is an important aspect of the manufacture of products of the desired quality. This paper presents a comparative study of the effect of geometric constraints on product quality during machining of thin-wall components made of the aerospace alloy aluminum 2024-T351. Three-dimensional nonlinear finite element (FE) based simulations of machining of thin-wall parts were carried out by considering three variations of the wall constraint, viz. a free wall, a wall constrained at one end, and a wall constrained at both ends. A Lagrangian-formulation-based transient FE model has been developed to simulate the interaction between the workpiece and the helical milling cutter. Johnson-Cook material and damage models were adopted to account for the material behavior during the machining process, damage initiation, and chip separation. A modified Coulomb friction model was employed to define the contact between the cutting tool and the workpiece. The numerical model was validated against experimental results and found to be in good agreement. Based on the simulation results it was noted that deflection and deformation were largest for the thin wall constrained at one end, in comparison with the other cases. Three-dimensional finite element simulations thus help to better predict product quality during precision manufacturing of thin-wall components.

  19. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficient use of computational resources, it is very common in the realm of numerical modeling in hydro-engineering to find that regular linearization techniques have been applied to the nonlinear partial differential equations obtained in environmental flow studies. Sometimes this simplification is accompanied by the omission of nonlinear terms in such equations, which in turn diminishes the performance of the implemented approach. This is the case, for example, for contaminant transport modeling in streams. Even today, a traditional and widely used water quality model such as QUAL2k preserves its original algorithm, which omits nonlinear terms through linearization, in spite of continuous algorithmic development and enhanced computer power. For that reason, the main objective of this research was to create a flexible tool for non-linear water quality modeling. The solution implemented here is based on two genetic algorithms, used in a nested way in order to find two different types of solution sets. The first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled by using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of the non-linear equation system proved to be a flexible tool for handling the intrinsic non-linearity that emerges from the interactions between the multiple variables involved in water quality studies. However because there is a strong data limitation in

  20. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

    NASA Technical Reports Server (NTRS)

    Weir, Kent A.; Wells, Eugene M.

    1990-01-01

    The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.

  1. Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction

    PubMed Central

    Dehghani, Hamid; Eames, Matthew E.; Yalavarthy, Phaneendra K.; Davis, Scott C.; Srinivasan, Subhadra; Carpenter, Colin M.; Pogue, Brian W.; Paulsen, Keith D.

    2009-01-01

    Diffuse optical tomography, also known as near infrared tomography, has been under investigation for non-invasive functional imaging of tissue, specifically for the detection and characterization of breast cancer and other soft tissue lesions. Much work has been carried out on accurate modeling and image reconstruction from clinical data. NIRFAST, a modeling and image reconstruction package, has been developed, which is capable of single-wavelength and multi-wavelength optical or functional imaging from measured data. The theory behind the modeling techniques as well as the image reconstruction algorithms is presented here, and 2D and 3D examples are presented to demonstrate its capabilities. The results show that 3D modeling can be combined with measured data from multiple wavelengths to reconstruct chromophore concentrations within the tissue. Additionally, it is possible to recover scattering spectra resulting from the dominant Mie-type scattering present in tissue. Overall, this paper gives a comprehensive overview of the modeling techniques used in diffuse optical tomographic imaging, in the context of the NIRFAST software package. PMID:20182646

  2. Numerical Study of Equilibrium, Stability, and Advanced Resistive Wall Mode Feedback Algorithms on KSTAR

    NASA Astrophysics Data System (ADS)

    Katsuro-Hopkins, Oksana; Sabbagh, S. A.; Bialek, J. M.; Park, H. K.; Kim, J. Y.; You, K.-I.; Glasser, A. H.; Lao, L. L.

    2007-11-01

    Stability to ideal MHD kink/ballooning modes and the resistive wall mode (RWM) is investigated for the KSTAR tokamak. Free-boundary equilibria that comply with magnetic field coil current constraints are computed for monotonic and reversed shear safety factor profiles and H-mode tokamak pressure profiles. Advanced tokamak operation at moderate to low plasma internal inductance shows that a factor of two improvement in the plasma beta limit over the no-wall beta limit is possible for toroidal mode number of unity. The KSTAR conducting structure, passive stabilizers, and in-vessel control coils are modeled by the VALEN-3D code and the active RWM stabilization performance of the device is evaluated using both standard and advanced feedback algorithms. Steady-state power and voltage requirements for the system are estimated based on the expected noise on the RWM sensor signals. Using NSTX experimental RWM sensors noise data as input, a reduced VALEN state-space LQG controller is designed to realistically assess KSTAR stabilization system performance.

  3. The removal of wall components in Doppler ultrasound signals by using the empirical mode decomposition algorithm.

    PubMed

    Zhang, Yufeng; Gao, Yali; Wang, Le; Chen, Jianhua; Shi, Xinling

    2007-09-01

    Doppler ultrasound systems, used for the noninvasive detection of vascular diseases, normally employ a high-pass filter (HPF) to remove the large, low-frequency components originating from the vessel wall from the blood flow signal. Unfortunately, the filter also removes the low-frequency Doppler signals arising from slow-moving blood. In this paper, we propose to use a novel technique, called empirical mode decomposition (EMD), to remove the wall components from the mixed signals. The EMD first decomposes a signal into a finite and usually small number of individual components named intrinsic mode functions (IMFs). A strategy based on the ratios between two adjacent values of the wall-to-blood signal ratio (WBSR) is then used to automatically identify and remove the IMFs that contribute to the wall components. This method was applied to simulated and clinical Doppler ultrasound signals. Compared with the results based on the traditional high-pass filter, the new approach removes the wall components from the mixed signals more effectively and objectively, and preserves the signals from slow-moving blood more accurately.
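
    The overall recipe can be sketched as below using the PyEMD package (import path and `emd` call assumed from the PyEMD documentation). The IMF selection here is a naive zero-crossing-rate cutoff standing in for the paper's WBSR-ratio strategy, and the signals are synthetic:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal; API assumed from PyEMD docs

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
blood = np.sin(2 * np.pi * 150 * t)        # fast, blood-like component
wall = 3.0 * np.sin(2 * np.pi * 5 * t)     # large, slow wall component
mixed = blood + wall

# Decompose into IMFs, then keep only the fast IMFs; slow IMFs and the
# residue are attributed to wall motion (hypothetical 40 crossings/s cutoff).
imfs = EMD().emd(mixed)
keep = []
for imf in imfs:
    zero_crossings = np.sum(np.diff(np.sign(imf)) != 0)
    if zero_crossings / t[-1] > 40:
        keep.append(imf)
cleaned = np.sum(keep, axis=0)
print(np.corrcoef(cleaned, blood)[0, 1])   # should be close to 1
```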

  4. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance that can address issues like illumination variations, various camera characteristics, and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN, built by employing a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution onto a lower-dimensional space. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared to several existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, which outperformed the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377

  5. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insight into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code to the research community, refined in terms of coding structure, accompanies this paper. CoFlame is validated against experimental data for the reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

  6. A parallel hybrid numerical algorithm for simulating gas flow and gas discharge of an atmospheric-pressure plasma jet

    NASA Astrophysics Data System (ADS)

    Lin, K.-M.; Hu, M.-H.; Hung, C.-T.; Wu, J.-S.; Hwang, F.-N.; Chen, Y.-S.; Cheng, G.

    2012-12-01

    The development of a hybrid numerical algorithm that weakly couples a gas flow model (GFM) and a plasma fluid model (PFM) for simulating an atmospheric-pressure plasma jet (APPJ), together with its acceleration by two approaches, is presented. The weak coupling between gas flow and discharge is realized by transferring results between the steady-state solution of the GFM and the cycle-averaged solution of the PFM. The approaches for reducing the overall runtime include parallel computing of the GFM and PFM solvers and a temporal multi-scale method (TMSM) for the PFM. Parallel computing of both solvers is realized using the domain decomposition method with the message passing interface (MPI) on distributed-memory machines. The TMSM considers only chemical reactions, ignoring the transport terms, when integrating the continuity equations of heavy species in time at each step; the transport terms are restored only at an interval of time-marching steps. The total reduction of runtime is 47% when the TMSM is applied to the APPJ example presented in this study. The application of the proposed hybrid algorithm is demonstrated by simulating a parallel-plate helium APPJ impinging onto a substrate, for which the cycle-averaged properties of the 200th cycle are presented. The distribution patterns of the species densities are strongly correlated with the background gas flow pattern, which shows that considering the gas flow in APPJ simulations is critical.

  7. Real-space, mean-field algorithm to numerically calculate long-range interactions

    NASA Astrophysics Data System (ADS)

    Cadilhe, A.; Costa, B. V.

    2016-02-01

    Long-range interactions are known to be difficult to treat in statistical mechanics models. Some approaches introduce a cutoff in the interactions, or make use of reaction field methods. However, those treatments are of limited use, in particular close to phase transitions. The use of open boundary conditions allows the long-range interactions to be summed over the entire system; however, this approach demands a sum over all degrees of freedom in the system, which makes a numerical treatment prohibitive. Techniques like the Ewald summation or the fast multipole expansion account for the exact interactions but are still limited to a few thousand particles. In this paper we introduce a novel mean-field approach to treat long-range interactions. The method is based on dividing the system into cells. In the inner cell, which contains the particle in sight, the 'local' interactions are computed exactly; for each of the remaining cells, the 'far' contribution is computed as the average interaction between the particle in sight and the particles inside that cell. With this approach, the large- and small-cell limits are exact. At a fixed cell size, the method also becomes exact in the limit of large lattices. We have applied the procedure to the two-dimensional anisotropic dipolar Heisenberg model. A detailed comparison between our method, the exact calculation, and the cutoff radius approximation was made. Our results show that the cutoff-cell approach outperforms any cutoff radius approach, as it maintains the long-range memory present in these interactions, contrary to the cutoff radius approximation. Besides that, we calculated the critical temperature and the critical behavior of the specific heat of the anisotropic Heisenberg model using our method. The results are in excellent agreement with extensive Monte Carlo simulations using Ewald summation.

  8. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code is verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computers having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been

  9. Properties of and Algorithms for Fitting Three-Way Component Models with Offset Terms

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    2006-01-01

    Prior to a three-way component analysis of a three-way data set, it is customary to preprocess the data by centering and/or rescaling them. Harshman and Lundy (1984) considered that three-way data actually consist of a three-way model part, which in fact pertains to ratio scale measurements, as well as additive "offset" terms that turn the ratio…

  10. An Efficient Algorithm for Dynamic Analysis of Bridges Under Moving Vehicles Using a Coupled Modal and Physical Components Approach

    NASA Astrophysics Data System (ADS)

    Henchi, K.; Fafard, M.; Talbot, M.; Dhatt, G.

    1998-05-01

    A general and efficient method is proposed for the resolution of the dynamic interaction problem between a bridge, discretized by a three-dimensional finite element model, and a dynamic system of vehicles running at a prescribed speed. The resolution is performed with a step-by-step solution technique using the central difference scheme to solve the coupled equation system. This leads to a modified mass matrix, called a pseudo-static matrix, whose inverse is known at each time step without any numerical effort. The method uses a modal superposition technique for the bridge components. The coupled system vectors contain both physical and modal components: the physical components are the degrees of freedom of the vehicles, modelled as linear discrete mass-spring-damper systems, while the modal components are the degrees of freedom of a linear finite element model of the bridge. In this context, the resolution of the eigenvalue problem for the bridge is indispensable. The elimination of the interaction forces between the two systems (bridge and vehicles) gives a unique coupled system (supersystem) containing the modal and physical components. In this study, the bridge pavement is considered as a randomly irregular surface. A comparison between this approach and the uncoupled iterative method is performed.

  11. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition- and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Juana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2017-01-01

    Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components. This helps to reveal underlying dynamics, to identify potential environmental drivers and, thus, to calculate reliable CH4 emission estimates. The flux separation is based on the identification of ebullition-related sudden concentration changes during single measurements. For this, a variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Automation of data processing is achieved by using an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was validated in a laboratory experiment and tested using flux measurement data (July to September 2013) from a former fen grassland site, which was converted into a shallow lake as a result of rewetting. Ebullition and diffusion contributed equally (46 and 55 %) to total CH4 emissions, which is comparable to ratios given in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period. The water temperature gradient was identified as one of the major drivers of diffusive CH4 emissions, whereas no significant driver was found in the case of erratic CH4 ebullition events.
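
    A minimal sketch of the quartile-based separation, assuming the standard 1.5*IQR fences on stepwise concentration differences (the paper's filter is variable, so the fence width here is illustrative):

```python
import numpy as np

def separate_fluxes(conc):
    """Split a chamber concentration series into diffusion- and
    ebullition-derived parts via an IQR filter on stepwise differences."""
    d = np.diff(conc)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    ebullition = (d < q1 - 1.5 * iqr) | (d > q3 + 1.5 * iqr)  # sudden jumps
    return d[~ebullition].sum(), d[ebullition].sum()

rng = np.random.default_rng(0)
conc = np.cumsum(0.01 + 0.002 * rng.standard_normal(600))  # slow diffusion
conc[300:] += 0.8                                          # one bubble event
diff_part, ebul_part = separate_fluxes(conc)
print(f"diffusion: {diff_part:.2f}, ebullition: {ebul_part:.2f}")
```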

  12. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-10-01

    The growing use of composite materials in aircraft structures has attracted much attention to impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and the easy arrangement of its sensor array. However, for applications on real, complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, which makes it difficult to obtain the phase velocity directly. In addition, composite structures usually show obvious anisotropy, and the complex structural style of real aircraft further accentuates this, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy.

  13. Analysis of seismic waves crossing the Santa Clara Valley using the three-component MUSIQUE array algorithm

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter

    2016-10-01

    We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love waves and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array was installed in 1999 in the Evergreen Basin, a sedimentary basin in the eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km from the array, covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves; for farther events, the fundamental mode dominates. The energy distribution differs for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array mostly arrive from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and first harmonic modes. The particle motion of the fundamental mode of Rayleigh waves is retrograde, and for the first harmonic mode it is prograde. For both modes, we can also retrieve dispersion and ellipticity

  14. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    PubMed

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful way to construct an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems. However, up to now only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, spatial ICA (sICA), temporal ICA (tICA) and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.
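
    The feature-extraction-plus-SVR pipeline can be sketched with scikit-learn on synthetic branch-sales series. This corresponds to the temporal-ICA flavour only, and all window and hyperparameter choices are illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_weeks, n_branches = 200, 12
# Synthetic "branch sales": a shared seasonal trend plus branch-level noise.
sales = (np.sin(np.linspace(0, 25, n_weeks))[:, None]
         + 0.3 * rng.standard_normal((n_weeks, n_branches)))

# Extract independent components as features, then regress next week's total.
features = FastICA(n_components=4, random_state=0).fit_transform(sales)
X, y = features[:-1], sales[1:].sum(axis=1)
split = 150
model = SVR(C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```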

  15. Numerical modeling of Non-isothermal two-phase two-component flow process with phase change phenomena in the porous media

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Shao, H.; Thullner, M.; Kolditz, O.

    2014-12-01

    In applications to deep geothermal reservoirs, thermal recovery processes, and contaminated groundwater sites, multiphase multicomponent flow and transport processes are often considered the most important underlying physical processes. In particular, the behavior of phase appearance and disappearance is critical to the performance of many geo-reservoirs, and there is great interest in the scientific community in simulating this coupled process. This work is devoted to the modeling and simulation of two-phase, two-component flow and transport in porous media, where phase change behavior under non-isothermal conditions is considered. We have implemented the algorithm developed by Marchand et al. into the open-source scientific software OpenGeoSys. The governing equations are formulated in terms of the molar fraction of the light component and the mean pressure as persistent primary variables, which leads to a fully coupled nonlinear PDE system. One important advantage of this approach is that it avoids switching primary variables between single-phase and two-phase zones, so that the same uniform system can describe phase change behavior. In addition, because of the number of unknown variables, closure relationships are formulated to close the whole equation system, using the approach of complementarity constraints. As for the numerical scheme, the standard Galerkin finite element method is applied for the space discretization, a fully implicit scheme is used for the time discretization, and the Newton-Raphson method is utilized for the global linearization as well as for the closure relationships. The model is verified with a test case developed to simulate the heat pipe problem. This benchmark involves two-phase two-component flow in saturated/unsaturated porous media under non-isothermal conditions, including phase change and mineral-water geochemical reactive transport processes. The simulation results will be

  16. Numerical Algorithms & Parallel Tasking.

    DTIC Science & Technology

    1985-09-12

    senior personnel have been supported under this contract: Virginia Klema, principal investigator (3.5 months), Elizabeth Ducot (2.25 months), and George...CONCURRENT ENVIRONMENT Elizabeth R. Ducot The purpose of this note is twofold. The first is to present the mechanisms by which a user activates and describes

  17. Static Analysis Numerical Algorithms

    DTIC Science & Technology

    2016-04-01

    abstract domain provides (1) an abstract type to represent concrete program states, and (2) abstract functions to represent the effect of concrete ...state-changing actions. Rather than simulate the concrete program, abstract interpretation uses abstract domains to construct and simulate an...On the other hand, the abstraction does allow us to cheaply compute some kinds of information about the concrete program. In the example, we can

  18. The 3D Kasteleyn transition in dipolar spin ice: a numerical study with the conserved monopoles algorithm.

    PubMed

    Baez, M L; Borzi, R A

    2017-02-08

    We study the three-dimensional Kasteleyn transition in both nearest neighbours and dipolar spin ice models using an algorithm that conserves the number of excitations. We first limit the interactions range to nearest neighbours to test the method in the presence of a field applied along [100], and then focus on the dipolar spin ice model. The effect of dipolar interactions, which is known to be greatly self screened at zero field, is particularly strong near full polarization. It shifts the Kasteleyn transition to lower temperatures, a decrease of ≈0.4 K for the parameters corresponding to the best known spin ice materials, Dy2Ti2O7 and Ho2Ti2O7. This shift implies effective dipolar fields as big as 0.05 T opposing the applied field, and thus favouring the creation of 'strings' of reversed spins. We compare the reduction in the transition temperature with results in previous experiments, and study the phenomenon quantitatively using a simple molecular field approach. Finally, we relate the presence of the effective residual field to the appearance of string-ordered phases at low fields and temperatures, and we check numerically that for fields applied along [100] there are only three different stable phases at zero temperature.

  19. The 3D Kasteleyn transition in dipolar spin ice: a numerical study with the conserved monopoles algorithm

    NASA Astrophysics Data System (ADS)

    Baez, M. L.; Borzi, R. A.

    2017-02-01

    We study the three-dimensional Kasteleyn transition in both nearest neighbours and dipolar spin ice models using an algorithm that conserves the number of excitations. We first limit the interactions range to nearest neighbours to test the method in the presence of a field applied along [100], and then focus on the dipolar spin ice model. The effect of dipolar interactions, which is known to be greatly self screened at zero field, is particularly strong near full polarization. It shifts the Kasteleyn transition to lower temperatures, a decrease of ≈0.4 K for the parameters corresponding to the best known spin ice materials, Dy2Ti2O7 and Ho2Ti2O7. This shift implies effective dipolar fields as big as 0.05 T opposing the applied field, and thus favouring the creation of 'strings' of reversed spins. We compare the reduction in the transition temperature with results in previous experiments, and study the phenomenon quantitatively using a simple molecular field approach. Finally, we relate the presence of the effective residual field to the appearance of string-ordered phases at low fields and temperatures, and we check numerically that for fields applied along [100] there are only three different stable phases at zero temperature.

  20. Diagnosing basal cell carcinoma in vivo by near-infrared Raman spectroscopy: a Principal Components Analysis discrimination algorithm

    NASA Astrophysics Data System (ADS)

    Silveira, Landulfo, Jr.; Silveira, Fabrício L.; Bodanese, Benito; Pacheco, Marcos Tadeu T.; Zângaro, Renato A.

    2012-02-01

    This work demonstrates the discrimination between basal cell carcinoma (BCC) and normal human skin in vivo using near-infrared Raman spectroscopy. Spectra were obtained from the suspected lesion prior to resectional surgery. After tissue withdrawal, biopsy fragments were submitted to histopathology. Spectra were also obtained from the adjacent, clinically normal skin. Raman spectra were measured using a Raman spectrometer (830 nm) with a fiber Raman probe. Comparing the mean spectra of BCC with those of normal skin revealed important differences in the 800-1000 cm-1 and 1250-1350 cm-1 regions (vibrations of C-C and amide III, respectively, from lipids and proteins). A discrimination algorithm based on Principal Components Analysis and Mahalanobis distance (PCA/MD) could discriminate the spectra of the two tissues with high sensitivity and specificity.
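
    A PCA/Mahalanobis discrimination model of the kind described can be sketched as follows, with synthetic "spectra"; the number of principal components and the class statistics are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-ins for normal-skin and BCC Raman spectra (300 bins).
normal = rng.standard_normal((60, 300)) + np.linspace(0, 1, 300)
bcc = rng.standard_normal((60, 300)) + np.linspace(1, 0, 300)

# Fit PCA on all training spectra, keep per-class scores in reduced space.
pca = PCA(n_components=5).fit(np.vstack([normal, bcc]))
scores = {"normal": pca.transform(normal), "BCC": pca.transform(bcc)}

def classify(spectrum):
    """Assign the class with the smallest Mahalanobis distance in PC space."""
    z = pca.transform(spectrum[None, :])[0]
    dists = {}
    for label, S in scores.items():
        mu, cov_inv = S.mean(axis=0), np.linalg.inv(np.cov(S.T))
        dists[label] = np.sqrt((z - mu) @ cov_inv @ (z - mu))
    return min(dists, key=dists.get)

print(classify(bcc[0]))   # expected: "BCC"
```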

  1. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which may be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds directly from the 3D chromatogram is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue. It is not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can successfully separate a 3D chromatogram into chromatographic peaks and spectra, even when they severely overlap. The experiments also show that our method is effective on a real HPLC-DAD data set. Conclusions Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance; it is fast and effective. PMID:25474487

  2. Assessing sample entropy of physiological signals by the norm component matrix algorithm: application on muscular signals during isometric contraction.

    PubMed

    Castiglioni, Paolo; Żurek, Sebastian; Piskorski, Jaroslaw; Kośmider, Marcin; Guzik, Przemyslaw; Cè, Emiliano; Rampichini, Susanna; Merati, Giampiero

    2013-01-01

    Sample Entropy (SampEn) is a popular method for assessing the unpredictability of biological signals. Its calculation requires preliminary choices of the tolerance threshold r and the embedding dimension m. Although most studies select m=2 and r=0.2 times the signal standard deviation, this choice is somewhat arbitrary. The effects of different r and m values on SampEn have rarely been assessed, because of the high computational burden of this task. Recently, however, a fast algorithm for estimating correlation sums (Norm Component Matrix, NCM) has been proposed that allows calculating SampEn quickly over wide ranges of r and m. The aim of our work is to describe the structure of SampEn of physiological signals with different complex dynamics as a function of m and r, and in relation to the correlation sum. In particular, we investigate whether the criterion of "maximum entropy" for selecting r, previously proposed for Approximate Entropy, also applies to SampEn, and whether information from correlation sums provides indications for the choice of r and m. For this aim we applied the NCM algorithm to electromyographic and mechanomyographic signals during isometric muscle contraction, estimating SampEn over wide ranges of r (0.01 ≤ r ≤ 5) and m (from 1 to 11). Results indicate that the "maximum entropy" criterion used to select r in Approximate Entropy cannot be applied to SampEn. However, the analysis of correlation sums suggests instead choosing the r that, at any m, maximizes the number of "escaping vectors", i.e., data points effectively contributing to the SampEn estimation.
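
    For reference, a plain definitional SampEn estimator that can be scanned over (r, m); the NCM algorithm above computes the same quantity far more efficiently. Conventions (Chebyshev distance, self-matches excluded) are the usual ones, and the test signal is synthetic.

```python
# A plain O(N^2) sample entropy estimator, the definitional version only;
# scanning it over wide (r, m) ranges is exactly what NCM accelerates.
import numpy as np

def sampen(x, m, r):
    """SampEn(m, r) with Chebyshev distance and self-matches excluded."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c

    b = count_matches(m)       # template matches of length m
    a = count_matches(m + 1)   # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
sig = rng.normal(size=2000)
for r_frac in (0.1, 0.2, 0.5):
    print(r_frac, sampen(sig, m=2, r=r_frac * sig.std()))
```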

  3. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

    Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is suitable for extracting the FECG. Most previous ICA-R algorithms focused only on how to optimize the cost function of the ICA-R and paid little attention to improving the cost function itself; they did not fully take advantage of the prior information about the desired signal. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussianity measurement function, and then construct a new cost function by directly using a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve computing the difference between the function of a Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it has lower computational complexity than those methods. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
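
    A minimal one-unit fixed-point extraction with a kurtosis-based contrast, in the spirit of the method above; the full ICA-R closeness constraint is omitted here, and the reference signal is used only to seed the weight vector, so this is a simplified sketch on synthetic signals.

```python
# One-unit fixed-point extraction with a kurtosis contrast g(u) = u**3.
# Centering and whitening mirror the preprocessing described above; the
# reference only initializes w (a simplification of the ICA-R constraint).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
s1 = np.sign(np.sin(2 * np.pi * 3.4 * np.linspace(0, 1, n)))  # target (non-Gaussian)
s2 = rng.normal(size=n)                                       # interference
X = np.array([[0.6, 0.8], [0.8, -0.6]]) @ np.vstack([s1, s2])

X = X - X.mean(axis=1, keepdims=True)        # center
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X             # whiten

ref = s1 + 0.3 * rng.normal(size=n)          # noisy reference of desired source
w = Z @ ref
w /= np.linalg.norm(w)

for _ in range(50):                          # fixed-point iterations
    u = w @ Z
    w_new = (Z * u ** 3).mean(axis=1) - 3 * w   # E[z g(w.z)] - E[g'(w.z)] w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10
    w = w_new
    if converged:
        break

est = w @ Z
print("correlation with target source:", abs(np.corrcoef(est, s1)[0, 1]))
```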

  4. REVIEW OF THE GOVERNING EQUATIONS, COMPUTATIONAL ALGORITHMS, AND OTHER COMPONENTS OF THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODELING SYSTEM

    EPA Science Inventory

    This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...

  5. Numerical simulation of two-dimensional heat transfer in composite bodies with application to de-icing of aircraft components. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Chao, D. F. K.

    1983-01-01

    Transient, numerical simulations of the de-icing of composite aircraft components by electrothermal heating were performed for a two-dimensional rectangular geometry. The implicit Crank-Nicolson formulation was used to ensure stability of the finite-difference heat conduction equations, and the phase change in the ice layer was simulated using the enthalpy method. The Gauss-Seidel point iterative method was used to solve the system of difference equations. Numerical solutions illustrating de-icer performance for various composite aircraft structures and environmental conditions are presented. Comparisons are made with previous studies. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
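
    A minimal 1-D Crank-Nicolson conduction solver illustrating the time discretization named above; this is a single homogeneous layer with made-up properties, with the enthalpy phase-change treatment and the composite layers omitted.

```python
# 1-D Crank-Nicolson heat conduction sketch: assemble A T^{n+1} = B T^n and
# march in time. Properties and boundary values are illustrative only.
import numpy as np

nx, nt = 51, 200
L, alpha, dt = 0.01, 1e-6, 0.05            # m, m^2/s, s (illustrative)
dx = L / (nx - 1)
lam = alpha * dt / (2 * dx ** 2)

T = np.full(nx, 263.15)                    # initial temperature, K
T[0] = 283.15                              # heated boundary (Dirichlet)

A = np.diag((1 + 2 * lam) * np.ones(nx)) \
    + np.diag(-lam * np.ones(nx - 1), 1) + np.diag(-lam * np.ones(nx - 1), -1)
B = np.diag((1 - 2 * lam) * np.ones(nx)) \
    + np.diag(lam * np.ones(nx - 1), 1) + np.diag(lam * np.ones(nx - 1), -1)
for M in (A, B):                           # pin Dirichlet boundary rows
    M[0, :] = 0.0; M[0, 0] = 1.0
    M[-1, :] = 0.0; M[-1, -1] = 1.0

for _ in range(nt):
    # The paper iterates with Gauss-Seidel; a dense solve suffices here.
    T = np.linalg.solve(A, B @ T)

print("mid-plane temperature after %.0f s: %.2f K" % (nt * dt, T[nx // 2]))
```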

  6. Two- and Three-Dimensional Numerical Experiments Representing Two Limiting Cases of an In-Line Pair of Finger Seal Components

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Steinetz, B. M.; Kudriavtsev, V. V.; Proctor, M. P.; Kiraly, L. James (Technical Monitor)

    2002-01-01

    The work presented here concerns the numerical development and simulation of the flow, pressure patterns and motion of a pair of fingers arranged behind each other and axially aligned. The fingers represent the basic elemental component of a finger seal (FS) and form a tight seal around the rotor. Their flexibility allows compliance with rotor motion and, in a passive-adaptive mode, with the hydrodynamic forces induced by the flowing fluid. While the paper does not treat the actual staggered configuration of a finger seal, the in-line arrangement represents a first step towards that final goal. The numerical 2-D (axial-radial) and 3-D results presented herein were obtained using a commercial package (CFD-ACE+). Both models use an integrated numerical approach, which couples the hydrodynamic fluid model (Navier-Stokes based) to the solid mechanics code that models the compliance of the fingers.

  7. Contrasting sediment melt and fluid signatures for magma components in the Aeolian Arc: Implications for numerical modeling of subduction systems

    NASA Astrophysics Data System (ADS)

    Zamboni, Denis; Gazel, Esteban; Ryan, Jeffrey G.; Cannatelli, Claudia; Lucchi, Federico; Atlas, Zachary D.; Trela, Jarek; Mazza, Sarah E.; De Vivo, Benedetto

    2016-06-01

    The complex geodynamic evolution of the Aeolian Arc in the southern Tyrrhenian Sea resulted in melts with some of the most pronounced along-arc geochemical variations in incompatible trace elements and radiogenic isotopes worldwide, likely reflecting variations in arc magma source components. Here we elucidate the effects of subducted components on magma sources along different sections of the Aeolian Arc by evaluating systematics of elements depleted in the upper mantle but enriched in the subducting slab, focusing on a new set of B, Be, As, and Li measurements. Based on our new results, we suggest that both hydrous fluids and silicate melts were involved in element transport from the subducting slab to the mantle wedge. Hydrous fluids strongly influence the chemical composition of lavas in the central arc (Salina), while a melt component from subducted sediments probably plays a key role in metasomatic reactions in the mantle wedge below the peripheral islands (Stromboli). We also noted similarities in subducting components between the Aeolian Archipelago, the Phlegrean Fields, and other volcanic arcs/arc segments around the world (e.g., Sunda, Cascades, Mexican Volcanic Belt). We suggest that the presence of melt components in all these locations resulted from an increase in the mantle wedge temperature by inflow of hot asthenospheric material from tears/windows in the slab or from around the edges of the sinking slab.

  8. Numerical Investigation of Thermal Distribution and Pressurization Behavior in Helium Pressurized Cryogenic Tank by Introducing a Multi-component Model

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Yanzhong, Li; Zhan, Liu; Kang, Zhu

    An improved CFD model involving a multi-component gas mixture in the ullage is constructed to predict the pressurization behavior of a cryogenic tank in the presence of pressurizing helium. The temperature difference between the local fluid and its saturation temperature corresponding to the vapor partial pressure is taken as the phase-change driving force. As a practical application of the model, hydrogen and oxygen tanks with helium pressurization are numerically simulated using the multi-component gas model. The results show that the improved model produces higher ullage temperatures and pressures, and lower wall temperatures, than those obtained without the multi-component treatment. The phase change has only a slight influence on the pressurization performance due to the small quantities involved.

  9. Numerical simulation of cesium and strontium migration through sodium bentonite altered by cation exchange with groundwater components

    SciTech Connect

    Jacobsen, J.S.; Carnahan, C.L.

    1988-10-01

    Numerical simulations have been used to investigate how spatial and temporal changes in the ion exchange properties of bentonite affect the migration of cationic fission products from high-level waste. Simulations in which fission products compete for exchange sites with ions present in groundwater diffusing into the bentonite are compared to simulations in which the exchange properties of bentonite are constant. 12 refs., 3 figs., 2 tabs.

  10. Electron energy distribution function in plasma determined using numerical simulations of multiple harmonic components on Langmuir probe characteristic: efficiency of the method.

    PubMed

    Jauberteau, J L; Jauberteau, I

    2007-04-01

    The method proposed to determine the electron energy distribution is based on numerical simulation of the effect induced by a sinusoidal perturbation superimposed on the direct-current voltage applied to the probe. The simulation generates a signal with multiple harmonic components over the raw experimental data. Each harmonic component can be isolated by means of finite impulse response filters. The second derivative is then deduced from the second harmonic component using the Taylor expansion. The efficiency of the method is demonstrated first on simple cases and then on typical Langmuir probe characteristics recorded in the expansion of a microwave plasma containing argon or a nitrogen-hydrogen gas mixture. Results obtained using this method are compared with those determined using a classical Savitzky-Golay filter.
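
    The core of the dither technique can be illustrated numerically: superimpose a small sinusoid on the bias, project out the second harmonic, and recover I''(V) from the Taylor expansion of I(V0 + a sin ωt), whose cos 2ωt coefficient is -a²I''/4. The I-V characteristic below is a synthetic stand-in.

```python
# Recover I''(V) from the second harmonic of a dithered bias, per the Taylor
# expansion: I(V0 + a sin wt) contains -(a**2/4) I'' cos(2wt) at second order.
import numpy as np

def probe_current(v):                      # synthetic I-V characteristic
    return np.exp(v / 2.0)                 # exponential retardation region

a, w = 0.05, 2 * np.pi * 1e3               # dither amplitude (V), frequency
t = np.linspace(0, 0.01, 20001)            # ten dither periods
v0 = 1.0                                   # DC bias point

i_t = probe_current(v0 + a * np.sin(w * t))

# Lock-in style projection onto cos(2wt); in practice an FIR bandpass filter
# isolates the harmonic first, as described above.
A2 = 2 * np.mean(i_t * np.cos(2 * w * t))
d2_est = -4 * A2 / a ** 2
d2_true = 0.25 * np.exp(v0 / 2.0)          # analytic second derivative
print("estimated I'': %.4f   analytic: %.4f" % (d2_est, d2_true))
```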

  11. Electron energy distribution function in plasma determined using numerical simulations of multiple harmonic components on Langmuir probe characteristic--Efficiency of the method

    SciTech Connect

    Jauberteau, J. L.; Jauberteau, I.

    2007-04-15

    The method proposed to determine the electron energy distribution is based on numerical simulation of the effect induced by a sinusoidal perturbation superimposed on the direct-current voltage applied to the probe. The simulation generates a signal with multiple harmonic components over the raw experimental data. Each harmonic component can be isolated by means of finite impulse response filters. The second derivative is then deduced from the second harmonic component using the Taylor expansion. The efficiency of the method is demonstrated first on simple cases and then on typical Langmuir probe characteristics recorded in the expansion of a microwave plasma containing argon or a nitrogen-hydrogen gas mixture. Results obtained using this method are compared with those determined using a classical Savitzky-Golay filter.

  12. Numerical Study on the Effect of Substrate Angle on Particle Impact Velocity and Normal Velocity Component in Cold Gas Dynamic Spraying Based on CFD

    NASA Astrophysics Data System (ADS)

    Yin, Shuo; Wang, Xiao-Fang; Li, Wen-Ya; Xu, Bao-Peng

    2010-12-01

    A numerical study was conducted to investigate the effect of substrate angle on particle impact velocity and its normal component in cold gas dynamic spraying, using three-dimensional models based on computational fluid dynamics. It was found that the substrate angle has a significant effect on the particle impact velocity and its normal component. As the substrate angle increases, the bow shock becomes increasingly weak, which results in a gradual rise in particle impact velocity. The impact velocity increases approximately linearly along the substrate centerline due to the substrate angle, and the growth rate rises gradually with increasing substrate angle. Furthermore, the normal velocity component decreases steeply with increasing substrate angle, which may result in a sharp decrease in deposition efficiency. In addition, the study of the influence of process parameters showed that gas pressure, temperature, gas type, and particle size also play an important role in particle acceleration.

  13. Numerical modeling of submarine landslide-generated tsunamis as a component of the Alaska Tsunami Inundation Mapping Project

    USGS Publications Warehouse

    Suleimani, E.; Lee, H.; Haeussler, Peter J.; Hansen, R.

    2006-01-01

    Tsunami waves are a threat for many Alaska coastal locations, and community preparedness plays an important role in saving lives and property. The Geophysical Institute of the University of Alaska Fairbanks participates in the National Tsunami Hazard Mitigation Program by evaluating and mapping potential tsunami inundation of selected coastal communities in Alaska. We develop hypothetical tsunami scenarios based on the parameters of potential underwater earthquakes and landslides for a specified coastal community. The modeling results are delivered to the community for local tsunami hazard planning and construction of evacuation maps. For the community of Seward, located at the head of Resurrection Bay, tsunami potential from tectonic and submarine landslide sources must be evaluated for comprehensive inundation mapping. Recent multi-beam and high-resolution sub-bottom profile surveys of Resurrection Bay show medium- and large-sized blocks, which we interpret as landslide debris that slid in the 1964 earthquake. Numerical modeling of the 1964 underwater slides and tsunamis will help to validate and improve the models. In order to construct tsunami inundation maps for Seward, we combine two different approaches for estimating tsunami risk. First, we observe inundation and runup due to tsunami waves generated by the 1964 earthquake. Next we model tsunami wave dynamics in Resurrection Bay caused by superposition of the local landslide-generated waves and the major tectonic tsunami. We compare modeled and observed values from 1964 to calibrate the numerical tsunami model. In our second approach, we perform a landslide tsunami hazard assessment using underwater slope stability analysis and available characteristics of potentially unstable sediment bodies. The approach produces hypothetical underwater slides and resulting tsunami waves. We use a three-dimensional numerical model of an incompressible viscous slide with full interaction between the slide

  14. COBRA-NC: a thermal hydraulics code for transient analysis of nuclear reactor components. Volume 2. COBRA-NC numerical solution methods

    SciTech Connect

    Thurgood, M.J.; George, T.L.; Wheeler, C.L.

    1986-04-01

    The COBRA-NC computer program has been developed to predict the thermal-hydraulic response of nuclear reactor components to thermal-hydraulic transients. The code solves the multicomponent, compressible three-dimensional, two-fluid, three-field equations for two-phase flow. The three fields are the vapor field, the continuous liquid field, and the liquid drop field. The code has been used to model flow and heat transfer within the reactor core, the reactor vessel, the steam generators, and in the nuclear containment. This volume describes the finite-volume equations and the numerical solution methods used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of the numerical methods used to obtain a solution to the hydrodynamic equations.

  15. Update of upper level turbulence forecast by reducing unphysical components of topography in the numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Park, Sang-Hun; Kim, Jung-Hoon; Sharman, Robert D.; Klemp, Joseph B.

    2016-07-01

    On 2 November 2015, unrealistically large areas of light-or-stronger turbulence were predicted by the WRF-RAP (Weather Research and Forecasting Rapid Refresh)-based operational turbulence forecast system over the western U.S. mountainous regions; these predictions were not supported by available observations. The affected areas are reduced by applying additional terrain averaging, which damps out the unphysical components of small-scale (~2Δx) energy aloft induced by unfiltered topography in the initialization of the WRF model. First, a control simulation with the same design as the WRF-RAP model shows that the large-scale atmospheric conditions are well simulated, but that strong turbulence is still predicted over the western mountainous region. Four experiments with different levels of additional terrain smoothing applied in the initialization of the model integrations significantly reduce the spurious mountain-wave-like features, leading to turbulence forecasts more consistent with the observed data.

  16. A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy

    NASA Astrophysics Data System (ADS)

    Motheau, E.; Abraham, J.

    2016-05-01

    A novel and efficient algorithm is presented in this paper for DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
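
    The building block of such a spectral pressure solver is a constant-coefficient FFT Poisson solve; below is a minimal periodic-box sketch (the paper's variable-coefficient equation requires an outer correction loop on top of this, which is omitted here).

```python
# Minimal FFT-based constant-coefficient Poisson solve on a periodic box:
# solve  laplacian(p) = rhs  by dividing by -|k|^2 in Fourier space.
import numpy as np

n, Lbox = 64, 2 * np.pi
x = np.linspace(0, Lbox, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

rhs = -2.0 * np.sin(X) * np.sin(Y)         # = laplacian of sin(x) sin(y)

k = np.fft.fftfreq(n, d=Lbox / n) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX ** 2 + KY ** 2
k2[0, 0] = 1.0                             # avoid 0/0; mean mode fixed below

p_hat = np.fft.fft2(rhs) / (-k2)
p_hat[0, 0] = 0.0                          # pin the (arbitrary) mean to zero
p = np.real(np.fft.ifft2(p_hat))

print("max error vs sin(x)sin(y):", np.abs(p - np.sin(X) * np.sin(Y)).max())
```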

  17. A Numerical Algorithm to Calculate the Pressure Distribution of the TPS Front End Due to Desorption Induced by Synchrotron Radiation

    NASA Astrophysics Data System (ADS)

    Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.

    2010-06-01

    The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) foresees outgassing induced by photons from both bending magnets and insertion devices. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.

  18. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1981-01-01

    A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. The Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling and heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.

  19. Theory of axially symmetric cusped focusing: numerical evaluation of a Bessoid integral by an adaptive contour algorithm

    NASA Astrophysics Data System (ADS)

    Kirk, N. P.; Connor, J. N. L.; Curtis, P. R.; Hobbs, C. A.

    2000-07-01

    A numerical procedure for the evaluation of the Bessoid canonical integral J(x,y) is described. J(x,y) is defined, for x and y real, by J(x,y) = ∫₀^∞ t J₀(yt) exp[i(t⁴ + xt²)] dt, where J₀(·) is a Bessel function of order zero. J(x,y) plays an important role in the description of cusped focusing when there is axial symmetry present. It arises in the diffraction theory of aberrations, in the design of optical instruments and of highly directional microwave antennas, and in the theory of image formation for high-resolution electron microscopes. The numerical procedure replaces the integration path along the real t axis with a more convenient contour in the complex t plane, thereby rendering the oscillatory integrand more amenable to numerical quadrature. The computations use a modified version of the CUSPINT computer code (Kirk et al 2000 Comput. Phys. Commun. at press), which evaluates the cuspoid canonical integrals and their first-order partial derivatives. Plots and tables of J(x,y) and its zeros are presented for the grid -8.0 ≤ x ≤ 8.0 and -8.0 ≤ y ≤ 8.0. Some useful series expansions of J(x,y) are also derived.
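
    A sketch of the contour idea described above, using the integral form given in the abstract: rotating the path by an angle below π/4 turns exp(it⁴) into a decaying factor, after which ordinary quadrature applies. mpmath is used here for complex-valued quadrature; the rotation angle is a free choice, and this is not the CUSPINT code itself.

```python
# Evaluate the Bessoid integral on a rotated ray t = s*exp(i*theta): for
# 0 < theta < pi/4 the factor exp(i t^4) becomes exp(-s^4 * sin(4*theta)...),
# i.e. rapidly decaying, so the oscillatory real-axis integral is tamed.
import mpmath as mp

mp.mp.dps = 20

def bessoid(x, y, theta=mp.pi / 8):
    """J(x,y) = int_0^inf t J0(y t) exp[i(t^4 + x t^2)] dt, rotated path."""
    rot = mp.exp(1j * theta)               # dt = rot * ds along the ray

    def f(s):
        t = s * rot
        return t * mp.besselj(0, y * t) * mp.exp(1j * (t ** 4 + x * t ** 2)) * rot

    return mp.quad(f, [0, mp.inf])

# Check: J(0,0) = (sqrt(pi)/4) * exp(i*pi/4) ~= 0.31333 + 0.31333i.
print(bessoid(0, 0))
print(bessoid(-2, 1))
```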

  20. Experimental and numerical investigations on tailored tempering process of a U-channel component with tailored mechanical properties

    SciTech Connect

    Tang, B. T.; Bruschi, S.; Ghiotti, A.; Bariani, P. F.

    2013-12-16

    Hot stamping of quenchable ultra-high-strength steels currently represents a promising forming technology for manufacturing safety- and crash-relevant parts. For some applications, such as B-pillars and other structural components that may undergo impact loading, it may be desirable to create regions of the part with tailored mechanical properties. In this paper, a laboratory-scale hot-stamped U-channel was manufactured using a segmented die, which was independently heated by cartridge heaters and cooled by water channels. Local hardness values as low as 289 HV can be achieved using a heated-die temperature of 400°C while maintaining a hardness level of 490 HV in the fully cooled region. If the die temperature is increased to 450°C, the Vickers hardness in the heated region is 227 HV, a reduction in hardness of more than 50%. Optical microscopy was used to verify the microstructure of the as-quenched phases with respect to the heated-die temperatures. An FE model of the lab-scale process was developed to capture the overall hardness trends observed in the experiments.

  1. Laboratory-scale experiments and numerical modeling of cosolvent flushing of multi-component NAPLs in saturated porous media

    NASA Astrophysics Data System (ADS)

    Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K.

    2012-10-01

    This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of multiphase flow simulator (UTCHEM) was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rate of individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values.

  2. Construction of an extended invariant for an arbitrary ordinary differential equation with its development in a numerical integration algorithm.

    PubMed

    Fukuda, Ikuo; Nakamura, Haruki

    2006-02-01

    For an arbitrary ordinary differential equation (ODE), a scheme for constructing an extended ODE endowed with a time-invariant function is proposed. This scheme enables us to examine the accuracy of the numerical integration of an ODE that may itself have had no invariant. These quantities are constructed by referring to the Nosé-Hoover molecular dynamics equation and its related conserved quantity. By applying this procedure to several molecular dynamics equations, the conventional conserved quantity individually defined in each dynamics can be reproduced in a uniform, generalized way; the concept provides a transparent view of the ideas underlying these quantities. Developing the technique further, for a certain class of ODEs we construct a numerical integrator that is not only explicit and symmetric, but also preserves a unit Jacobian for a suitably defined extended ODE, which again provides an invariant. Our concept is thus to build a divergence-free extended ODE whose solution is simply a lift-up of the original ODE, and to construct an efficient integrator that preserves the phase-space volume of the extended system. We present precise discussions of the general mathematical properties of the integrator and provide specific conditions that should be incorporated for practical applications.

  3. Validation of the Algorithm for Base Direct Material Cost for the Component Support Cost System (D160B).

    DTIC Science & Technology

    2014-09-26

    implemented by the CSCS. Finally, a critique of the algorithm is provided as required by the contract. It addresses the following topics: ... studied. Assumptions about data processing procedures were made explicit. When necessary, Air Force personnel involved in implementation of the D160B ... inconsistencies or voids were identified and resolved through contact with the Office of VAMOSC and/or implementing personnel. Whenever appropriate

  4. Direct Numerical Simulation of Boiling Multiphase Flows: State-of-the-Art, Modeling, Algorithmic and Computer Needs

    SciTech Connect

    Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.

    2007-04-01

    The state-of-the-art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focussing on the potential of available computational techniques and the level of current success in applying them to model several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWR reactors, requiring high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and the non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.

  5. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    competitive on the basis of speed, power, and noise immunity. This circuit is used at 3 V and 15 mW, but is rated at 8 V and 100 mW. Unpowered ... algorithm using these inexact adders and multipliers. In the interests of high-speed simulation and of collecting large sample sizes, these simulations were ... For most applications, the ripple-carry adder is the slowest adder architecture. There are many adders which are optimized for speed, for

  6. Parallel Newton-Krylov-Schwarz algorithms for the three-dimensional Poisson-Boltzmann equation in numerical simulation of colloidal particle interactions

    NASA Astrophysics Data System (ADS)

    Hwang, Feng-Nan; Cai, Shang-Rong; Shao, Yun-Long; Wu, Jong-Shinn

    2010-09-01

    We investigate fully parallel Newton-Krylov-Schwarz (NKS) algorithms for solving the large sparse nonlinear systems of equations arising from the finite element discretization of the three-dimensional Poisson-Boltzmann equation (PBE), which is often used to describe the colloidal phenomena of an electric double layer around charged objects in colloidal and interfacial science. The NKS algorithm employs an inexact Newton method with backtracking (INB) as the nonlinear solver, in conjunction with a Krylov subspace method as the linear solver for the corresponding Jacobian system; an overlapping Schwarz method is used as a preconditioner to accelerate the convergence of the linear solver. Two test cases, two isolated charged particles and two colloidal particles in a cylindrical pore, are used as benchmark problems to validate the correctness of our parallel NKS-based PBE solver. In addition, a truly three-dimensional case, which models the interaction between two charged spherical particles within a rough charged micro-capillary, is simulated to demonstrate the applicability of our PBE solver to problems with complex geometry. Finally, based on results obtained on a PC cluster, we show numerically that NKS is well suited to the numerical simulation of interactions between colloidal particles, since INB converges within a small number of iterations regardless of the geometry, the mesh size, and the number of processors. With the help of an additive preconditioned Krylov subspace method, NKS achieves a parallel efficiency of 71% or better on up to one hundred processors for a 3D problem with 5 million unknowns.
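
    A compact serial sketch of the inexact-Newton-with-backtracking (INB) loop with a matrix-free Krylov solve, on a small 1-D nonlinear test problem; the overlapping Schwarz preconditioner and all parallel machinery are omitted, and the test equation is an assumption chosen for brevity.

```python
# Newton-Krylov with backtracking on a toy problem u'' = exp(u), u(0)=u(1)=0.
# The Jacobian is applied matrix-free via a finite-difference matvec.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    n = len(u)
    h2 = 1.0 / (n + 1) ** 2
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    lap[0] = u[1] - 2 * u[0]               # homogeneous Dirichlet ends
    lap[-1] = u[-2] - 2 * u[-1]
    return lap / h2 - np.exp(u)

def newton_krylov(u, tol=1e-8, max_newton=30):
    for _ in range(max_newton):
        r = F(u)
        rnorm = np.linalg.norm(r)
        if rnorm < tol:
            break
        eps = 1e-7
        J = LinearOperator((len(u), len(u)),
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(J, -r, restart=len(u))  # loosening this makes it "inexact"
        lam = 1.0                             # backtracking line search
        while np.linalg.norm(F(u + lam * du)) > (1 - 1e-4 * lam) * rnorm and lam > 1e-4:
            lam *= 0.5
        u = u + lam * du
    return u, np.linalg.norm(F(u))

u, rnorm = newton_krylov(np.zeros(100))
print("final residual norm:", rnorm)
```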

  7. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
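
    In the spirit of such cross-method benchmarking, the smallest meaningful check is the two-site Hubbard model at half filling, whose ground-state energy is known in closed form; a sketch follows (the basis ordering and hopping sign conventions are one common choice).

```python
# Exact diagonalization of the half-filled Hubbard dimer; the ground-state
# energy has the closed form E0 = U/2 - sqrt((U/2)**2 + 4 t**2).
import numpy as np

def hubbard_dimer_e0(t, U):
    # Sz = 0 basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>
    H = np.array([[0.0, 0.0, -t, -t],
                  [0.0, 0.0, +t, +t],
                  [-t, +t, U, 0.0],
                  [-t, +t, 0.0, U]])
    return np.linalg.eigvalsh(H).min()

t = 1.0
for U in (0.0, 4.0, 8.0):
    exact = U / 2 - np.sqrt((U / 2) ** 2 + 4 * t ** 2)
    print("U=%.1f  ED: %.6f  closed form: %.6f" % (U, hubbard_dimer_e0(t, U), exact))
```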

  8. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    SciTech Connect

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia -Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan -Wen; Millis, Andrew J.; Prokof’ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo -Xiao; Zhu, Zhenyue; Gull, Emanuel

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  9. Validation of the Algorithm for Base Maintenance Overhead Costs for the Component Support Cost System (D160B).

    DTIC Science & Technology

    2014-09-26

    BASE MAINTENANCE OVERHEAD COSTS FOR THE COMPONENT SUPPORT COST SYSTEM (D160B), Contract No. F33600-82-C-0543, 13 December 1983. ... PREFIXES: SRD Prefix Definition: A-- Aircraft and Drones; G-- Support Equipment; H-- Precision Measurement Equipment; N-- Air-Launched Missiles and Guided Missiles; Ground-Launched Missiles, Except ICBMs; Drones; and Related Training Equipment, 1 August 1976, updated to 15 October 1982 [22] TO-00-20-2-10

  10. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.

  11. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused both by physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and by measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate the SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
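
    The one-step estimation described above reduces, schematically, to a linear least-squares fit of the measured radiance against background principal components plus an SO2 Jacobian; the sketch below uses synthetic stand-ins for all inputs (background spectra, Jacobian shape, and noise level are assumptions).

```python
# Schematic PCA-based retrieval: model y = sum_i a_i * PC_i + s * jacobian,
# solve for the coefficients, and read the SO2 column off the last one.
import numpy as np

rng = np.random.default_rng(3)
n_wavelengths, n_background = 120, 500

# Background spectra (no significant SO2) -> leading principal components.
base = rng.normal(size=(n_background, n_wavelengths))
background = base @ rng.normal(size=(n_wavelengths, n_wavelengths)) * 0.01
background -= background.mean(axis=0)
_, _, Vt = np.linalg.svd(background, full_matrices=False)
pcs = Vt[:8]

jac = np.exp(-np.linspace(0, 4, n_wavelengths))   # assumed SO2 Jacobian shape

# A measurement containing a "true" SO2 column of 1.7 (arbitrary units).
y = 1.7 * jac + 0.3 * pcs[0] - 0.1 * pcs[2] + rng.normal(0, 1e-3, n_wavelengths)

A = np.column_stack([pcs.T, jac])                 # design matrix: [PCs | Jacobian]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("retrieved SO2 column:", coef[-1])          # last coefficient multiplies jac
```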

  12. Communication: Four-component density matrix renormalization group

    SciTech Connect

    Knecht, Stefan; Reiher, Markus; Legeza, Örs

    2014-01-28

    We present the first implementation of the relativistic quantum chemical two- and four-component density matrix renormalization group algorithm that includes a variational description of scalar-relativistic effects and spin–orbit coupling. Numerical results based on the four-component Dirac–Coulomb Hamiltonian are presented for the standard reference molecule for correlated relativistic benchmarks: thallium hydride.

  13. A real-time algorithm for the harmonic estimation and frequency tracking of dominant components in fusion plasma magnetic diagnostics.

    PubMed

    Alves, D; Coelho, R

    2013-08-01

    The real-time tracking of instantaneous quantities such as the frequency, amplitude, and phase of components immersed in noisy signals has been a common problem for decades in many scientific and engineering fields, such as power systems and delivery, telecommunications, and acoustics. In magnetically confined fusion research, extracting this sort of information from magnetic signals can be of valuable assistance in, for instance, feedback control of detrimental magnetohydrodynamic modes and in disruption avoidance mechanisms, by monitoring instability growth or anticipating mode-locking events. This work focuses on nonlinear Kalman-filter-based methods for tackling this problem. Similar methods have already proven their merits and have been successfully employed in this scientific domain, in applications such as amplitude demodulation for the motional Stark effect diagnostic. In the course of this work, three approaches are described, compared, and discussed using magnetic signals from Joint European Torus tokamak plasma discharges for benchmarking purposes.
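
    A bare-bones extended Kalman filter tracking the phase, angular frequency and amplitude of a dominant sinusoid in noise illustrates this class of methods; the paper compares three variants, and the state model and noise tunings below are assumptions.

```python
# EKF on state [phase, omega, amplitude] with measurement h(x) = A*cos(phase),
# tracking a synthetic chirping mode in noise.
import numpy as np

dt = 1e-4
t = np.arange(0, 0.2, dt)
true_f = 1000 + 400 * t / t[-1]                 # chirp, 1000 -> 1400 Hz
phase_true = 2 * np.pi * np.cumsum(true_f) * dt
z = np.cos(phase_true) + 0.3 * np.random.default_rng(4).normal(size=len(t))

x = np.array([0.0, 2 * np.pi * 900.0, 0.5])     # initial guess
P = np.diag([1.0, 1e7, 1.0])
Q = np.diag([1e-6, 1e4, 1e-6])                  # process noise (tuning guess)
R = 0.3 ** 2                                    # measurement noise variance

est_f = []
for zk in z:
    # Predict: phase advances by omega*dt; omega and amplitude random-walk.
    Fk = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 1]])
    x = Fk @ x
    P = Fk @ P @ Fk.T + Q
    # Update with linearized measurement.
    h = x[2] * np.cos(x[0])
    H = np.array([-x[2] * np.sin(x[0]), 0.0, np.cos(x[0])])
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (zk - h)
    P = P - np.outer(K, H @ P)
    est_f.append(x[1] / (2 * np.pi))

print("final frequency estimate: %.1f Hz (true %.1f Hz)" % (est_f[-1], true_f[-1]))
```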

  14. A real-time algorithm for the harmonic estimation and frequency tracking of dominant components in fusion plasma magnetic diagnostics

    SciTech Connect

    Alves, D.; Coelho, R. (Associação Euratom; JET-EFDA Contributors)

    2013-08-15

    The real-time tracking of instantaneous quantities such as the frequency, amplitude, and phase of components immersed in noisy signals has been a common problem for decades in many scientific and engineering fields, such as power systems and delivery, telecommunications, and acoustics. In magnetically confined fusion research, extracting this sort of information from magnetic signals can be of valuable assistance in, for instance, feedback control of detrimental magnetohydrodynamic modes and in disruption avoidance mechanisms, by monitoring instability growth or anticipating mode-locking events. This work focuses on nonlinear Kalman-filter-based methods for tackling this problem. Similar methods have already proven their merits and have been successfully employed in this scientific domain, in applications such as amplitude demodulation for the motional Stark effect diagnostic. In the course of this work, three approaches are described, compared, and discussed using magnetic signals from Joint European Torus tokamak plasma discharges for benchmarking purposes.

  15. Daily PM2.5 concentration prediction based on principal component analysis and LSSVM optimized by cuckoo search algorithm.

    PubMed

    Sun, Wei; Sun, Jingyi

    2017-03-01

    Increased attention has been paid to PM2.5 pollution in China. Because of its detrimental effects on the environment and health, it is important to establish a PM2.5 concentration forecasting model with high precision for monitoring and control. This paper presents a novel hybrid model based on principal component analysis (PCA) and least squares support vector machine (LSSVM) optimized by cuckoo search (CS). First, PCA is adopted to extract the original features and reduce the dimension for input selection. Then, LSSVM is applied to predict the daily PM2.5 concentration, with its parameters fine-tuned by CS to improve generalization. An experimental study reveals that the proposed approach outperforms both a single LSSVM model with default parameters and a general regression neural network (GRNN) model in PM2.5 concentration prediction. The established model therefore has the potential to be applied in air quality forecasting systems.
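
    A schematic of the hybrid pipeline with openly substituted parts: scikit-learn's SVR stands in for LSSVM and a randomized hyperparameter search stands in for cuckoo search, on synthetic data.

```python
# PCA for feature reduction, a kernel regressor for prediction, and a
# metaheuristic-style hyperparameter search (here: randomized search).
import numpy as np
from scipy.stats import loguniform
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 12))                    # 12 candidate predictors
y = 30 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 2, 400)  # synthetic "PM2.5"

model = make_pipeline(PCA(n_components=5), SVR(kernel="rbf"))
search = RandomizedSearchCV(
    model,
    {"svr__C": loguniform(1e-1, 1e3), "svr__gamma": loguniform(1e-3, 1e0)},
    n_iter=25, cv=5, random_state=0)
search.fit(X, y)
print("best params:", search.best_params_)
print("CV R^2:", search.best_score_)
```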

  16. Universality in numerical computations with random data.

    PubMed

    Deift, Percy A; Menon, Govind; Olver, Sheehan; Trogdon, Thomas

    2014-10-21

    The authors present evidence for universality in numerical computations with random data. Given a (possibly stochastic) numerical algorithm with random input data, the time (or number of iterations) to convergence (within a given tolerance) is a random variable, called the halting time. Two-component universality is observed for the fluctuations of the halting time--i.e., the histogram for the halting times, centered by the sample average and scaled by the sample variance, collapses to a universal curve, independent of the input data distribution, as the dimension increases. Thus, up to two components--the sample average and the sample variance--the statistics for the halting time are universally prescribed. The case studies include six standard numerical algorithms as well as a model of neural computation and decision-making. A link to relevant software is provided for readers who would like to do computations of their own.
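
    The experiment is easy to reproduce in miniature: collect halting times of, say, conjugate gradients on random positive-definite systems and inspect the centered and scaled sample (the paper's ensembles and algorithms differ; this is a toy version).

```python
# Halting times (iteration counts) of CG on random Wishart systems; centering
# by the sample mean and scaling by the sample std gives the quantity whose
# histogram is claimed to collapse to a universal curve.
import numpy as np

def cg_iterations(A, b, tol=1e-10, max_iter=10000):
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy()
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter

rng = np.random.default_rng(6)
n, trials = 100, 200
halts = []
for _ in range(trials):
    M = rng.normal(size=(n, 2 * n))
    A = M @ M.T / (2 * n)                  # random positive-definite matrix
    halts.append(cg_iterations(A, rng.normal(size=n)))

h = np.array(halts, dtype=float)
print("mean %.1f, std %.1f" % (h.mean(), h.std()))
print("centered/scaled sample:", ((h - h.mean()) / h.std())[:8])
```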

  17. Middleware for dynamic adaptation of component applications.

    SciTech Connect

    Norris, B.; Bhowmick, S.; Kaushik, D.; McInnes, L. C.

    2007-01-01

    Component- and service-based software engineering approaches have been gaining popularity in high-performance scientific computing, facilitating the creation and management of large multidisciplinary, multideveloper applications, and providing opportunities for improved performance and numerical accuracy. These software engineering approaches enable the development of middleware infrastructure for computational quality of service (CQoS), which provides performance optimizations through dynamic algorithm selection and configuration in a mostly automated fashion. The factors that affect performance are closely tied to a component's parallel implementation, its management of parallel communication and memory, the algorithms executed, the algorithmic parameters employed, and other operational characteristics. We present the design of a component middleware CQoS architecture for automated composition and adaptation of high-performance component- or service-based applications. We describe its initial implementation and corresponding experimental results for parallel simulations involving time-dependent nonlinear partial differential equations.

  18. Algorithms and uncertainties for the determination of multispectral irradiance components and aerosol optical depth from a shipborne rotating shadowband radiometer

    NASA Astrophysics Data System (ADS)

    Witthuhn, Jonas; Deneke, Hartwig; Macke, Andreas; Bernhard, Germar

    2017-03-01

    The 19-channel rotating shadowband radiometer GUVis-3511 built by Biospherical Instruments provides automated shipborne measurements of the direct, diffuse and global spectral irradiance components without a requirement for platform stabilization. Several direct-sun products, including spectral direct beam transmittance, aerosol optical depth, Ångström exponent and precipitable water, can be derived from these observations. The individual steps of the data analysis are described, and the different sources of uncertainty are discussed. The total uncertainty of the observed direct beam transmittances is estimated to be about 4 % for most channels within a 95 % confidence interval for shipborne operation, with the calibration identified as the dominant contribution. A comparison of direct beam transmittances with those obtained from a Cimel sunphotometer at a land site and from a manually operated Microtops II sunphotometer on a ship is presented. Measurements deviate by less than 3 % on land and 4 % on ship for most channels, in agreement with our uncertainty estimate. These numbers demonstrate that the instrument is well suited for shipborne operation and that the applied methods for motion correction work accurately. Based on spectral direct beam transmittance, aerosol optical depth can be retrieved with an uncertainty of 0.02 for all channels within a 95 % confidence interval. The different methods used to account for Rayleigh scattering and gas absorption in our scheme and in the Aerosol Robotic Network processing for Cimel sunphotometers lead to minor deviations. Relying on the cross calibration of the 940 nm water vapor channel with the Cimel sunphotometer, the column amount of precipitable water can be estimated with an uncertainty of ±0.034 cm.
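
    The core arithmetic of a direct-sun AOD retrieval is the Beer-Lambert inversion sketched below; the plane-parallel air-mass factor and the Rayleigh optical depth value are rough illustrative assumptions (the paper treats per-channel gas corrections and air masses more carefully).

```python
# Invert Beer-Lambert for the aerosol optical depth after removing the
# Rayleigh (and, where relevant, trace-gas) contributions.
import numpy as np

def aod_from_transmittance(T_direct, airmass, tau_rayleigh, tau_gas=0.0):
    """tau_aer = -ln(T)/m - tau_Rayleigh - tau_gas (single air-mass factor)."""
    return -np.log(T_direct) / airmass - tau_rayleigh - tau_gas

# Example: 500 nm channel, solar zenith angle 40 degrees.
sza = np.deg2rad(40.0)
m = 1.0 / np.cos(sza)                      # plane-parallel air mass (assumption)
tau_r_500 = 0.143                          # approx. Rayleigh optical depth, 500 nm
print("AOD(500):",
      aod_from_transmittance(T_direct=0.72, airmass=m, tau_rayleigh=tau_r_500))
```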

  19. High order hybrid numerical simulations of two dimensional detonation waves

    NASA Technical Reports Server (NTRS)

    Cai, Wei

    1993-01-01

    In order to study multi-dimensional unstable detonation waves, a high-order numerical scheme suitable for calculating the detailed transverse wave structures of multidimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so that different numerical techniques can be applied to different components of the detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A → B. Several cases of unstable two-dimensional detonation waves are simulated and detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.

  20. Ground-based network observation using Mie-Raman lidars and multi-wavelength Raman lidars and algorithm to retrieve distributions of aerosol components

    NASA Astrophysics Data System (ADS)

    Nishizawa, Tomoaki; Sugimoto, Nobuo; Matsui, Ichiro; Shimizu, Atsushi; Hara, Yukari; Uno, Itsushi; Yasunaga, Kazuaki; Kudo, Rei; Kim, Sang-Woo

    2017-02-01

    We improved the two-wavelength polarization Mie-scattering lidars at several main sites of the Asian dust and aerosol lidar observation network (AD-Net) by adding a nitrogen Raman scattering measurement channel at 607 nm, and have conducted ground-based network observation with the improved Mie-Raman lidars (MRL) in East Asia since 2009. An MRL provides 1α+2β+1δ data at nighttime: the extinction coefficient (α532), backscatter coefficient (β532), and depolarization ratio (δ532) of particles at 532 nm, and an attenuated backscatter coefficient at 1064 nm (βat,1064). Furthermore, we developed a multi-wavelength Mie-Raman lidar (MMRL) providing 2α+3β+2δ data (α at 355 and 532 nm; β at 355 and 532 nm; βat at 1064 nm; and δ at 355 and 532 nm) and constructed MMRLs at several main sites of the AD-Net. We identified aerosol-rich layers and the height of the planetary boundary layer (PBL) using the βat,1064 data, and derived aerosol optical properties (AOPs; for example, αa, βa, δa, and the lidar ratio Sa). We demonstrated that the AOPs could be derived with appropriate accuracy. Seasonal means of the AOPs in the PBL were evaluated for each MRL observation site using three years of data from 2010 through 2012; the AOPs varied with season and region. For example, Sa,532 at Fukue, Japan, was 44±15 sr in winter and 49±17 sr in summer; at Seoul, Korea, it was 56±18 sr in winter and 62±15 sr in summer. We developed an algorithm to estimate extinction coefficients at 532 nm for black carbon, dust, sea salt, and air-pollution aerosols consisting of a mixture of sulfate, nitrate, and organic-carbon substances, using the 1α532 + 2β532,1064 + 1δ532 data. In this method, we assume an external mixture of aerosol components and prescribe their size distributions, refractive indices, and particle shapes. We applied the algorithm to the observed data to demonstrate its performance and determined the vertical structure of each aerosol component.

  1. Optimizing the distribution of resources between enzymes of carbon metabolism can dramatically increase photosynthetic rate: a numerical simulation using an evolutionary algorithm.

    PubMed

    Zhu, Xin-Guang; de Sturler, Eric; Long, Stephen P

    2007-10-01

    The distribution of resources between enzymes of photosynthetic carbon metabolism might be assumed to have been optimized by natural selection. However, natural selection for survival and fecundity does not necessarily select for maximal photosynthetic productivity. Further, the concentration of a key substrate, atmospheric CO(2), has changed more over the past 100 years than the past 25 million years, with the likelihood that natural selection has had inadequate time to reoptimize resource partitioning for this change. Could photosynthetic rate be increased by altered partitioning of resources among the enzymes of carbon metabolism? This question is addressed using an "evolutionary" algorithm to progressively search for multiple alterations in partitioning that increase photosynthetic rate. To do this, we extended existing metabolic models of C(3) photosynthesis by including the photorespiratory pathway (PCOP) and metabolism to starch and sucrose to develop a complete dynamic model of photosynthetic carbon metabolism. The model consists of linked differential equations, each representing the change of concentration of one metabolite. Initial concentrations of metabolites and maximal activities of enzymes were extracted from the literature. The dynamics of CO(2) fixation and metabolite concentrations were realistically simulated by numerical integration, such that the model could mimic well-established physiological phenomena. For example, a realistic steady-state rate of CO(2) uptake was attained and then reattained after perturbing O(2) concentration. Using an evolutionary algorithm, partitioning of a fixed total amount of protein-nitrogen between enzymes was allowed to vary. The individual with the higher light-saturated photosynthetic rate was selected and used to seed the next generation. After 1,500 generations, photosynthesis was increased substantially. This suggests that the "typical" partitioning in C(3) leaves might be suboptimal for maximizing the light
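
    A stripped-down version of such an evolutionary search: mutate the partitioning of a fixed budget among n "enzymes" and keep the fitter variant. The diminishing-returns objective below is a toy stand-in for the kinetic model (which integrates ODEs), chosen so that the optimum is known analytically.

```python
# Minimal (1+1) evolutionary search over resource partitioning with a fixed
# total budget; the toy objective sum_i w_i*sqrt(p_i) has analytic optimum
# p_i proportional to w_i**2, which lets us check the result.
import numpy as np

rng = np.random.default_rng(7)
n_enzymes, budget = 10, 1.0
weights = rng.uniform(0.5, 2.0, n_enzymes)     # hypothetical enzyme "importance"

def rate(p):
    # Toy diminishing-returns stand-in for the photosynthesis model.
    return np.sum(weights * np.sqrt(p))

p = np.full(n_enzymes, budget / n_enzymes)     # start from uniform partitioning
best = rate(p)
for gen in range(1500):                        # 1500 "generations", as above
    child = np.clip(p + rng.normal(0, 0.01, n_enzymes), 1e-9, None)
    child *= budget / child.sum()              # re-impose the fixed total budget
    r = rate(child)
    if r > best:                               # select the fitter individual
        p, best = child, r

optimum = weights ** 2 / np.sum(weights ** 2)
print("optimized rate: %.4f" % best)
print("max deviation from analytic optimum:", np.abs(p - optimum).max())
```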

  2. Predicting regional emissions and near-field air concentrations of soil fumigants using modest numerical algorithms: a case study using 1,3-dichloropropene.

    PubMed

    Cryer, S A; van Wesenbeeck, I J; Knuteson, J A

    2003-05-21

    Soil fumigants, used to control nematodes and crop disease, can volatilize from the soil application zone and into the atmosphere to create the potential for human inhalation exposure. An objective for this work is to illustrate the ability of simple numerical models to correctly predict pesticide volatilization rates from agricultural fields and to expand emission predictions to nearby air concentrations for use in the exposure component of a risk assessment. This work focuses on a numerical system using two U.S. EPA models (PRZM3 and ISCST3) to predict regional volatilization and nearby air concentrations for the soil fumigant 1,3-dichloropropene. New approaches deal with links to regional databases, seamless coupling of emission and dispersion models, incorporation of Monte Carlo sampling techniques to account for parametric uncertainty, and model input sensitivity analysis. Predicted volatility flux profiles of 1,3-dichloropropene (1,3-D) from soil for tarped and untarped fields were compared against field data and used as source terms for ISCST3. PRZM3 can successfully estimate correct order of magnitude regional soil volatilization losses of 1,3-D when representative regional input parameters are used (soil, weather, chemical, and management practices). Estimated 1,3-D emission losses and resulting air concentrations were investigated for five geographically diverse regions. Air concentrations (15-day averages) are compared with the current U.S. EPA's criteria for human exposure and risk assessment to determine appropriate setback distances from treated fields. Sensitive input parameters for volatility losses were functions of the region being simulated.

  3. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm.

    PubMed

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A

    2015-05-14

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which permit considerable improvements in durability and performance. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm for investigating the influence of material characteristics, e.g., electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, e.g., electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposition time of the polymer bulk heterojunction to the action of an external electric field can lead to low photovoltaic performance due to an incomplete alignment process, leading to undulated or disrupted nanophases. With increasing exposition time, the nanophases align in the direction of the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, in a reduction in the number of exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and the active-layer components, we conclude that a too low or too high affinity of an electrode surface for one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop in the internal quantum efficiency. For other particle-numbers and -sizes

  4. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm

    NASA Astrophysics Data System (ADS)

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.

    2015-05-01

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which permit considerable improvements in durability and performance. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm for investigating the influence of material characteristics, such as electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, such as electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposure time of the polymer bulk heterojunction to an external electric field can lead to low photovoltaic performance due to an incomplete alignment process, leaving undulated or disrupted nanophases. With increasing exposure time, the nanophases align along the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, a reduction in exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and the active-layer components, we conclude that too low or too high an affinity of an electrode surface for one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop in the internal quantum efficiency. For other particle numbers and sizes

  5. Optimizing the fabrication process and interplay of device components of polymer solar cells using a field-based multiscale solar-cell algorithm

    SciTech Connect

    Donets, Sergii; Pershin, Anton; Baeurle, Stephan A.

    2015-05-14

    Both the device composition and the fabrication process are well known to crucially affect the power conversion efficiency of polymer solar cells. Major advances have recently been achieved through the development of novel device materials and inkjet printing technologies, which permit considerable improvements in durability and performance. In this work, we demonstrate the usefulness of a recently developed field-based multiscale solar-cell algorithm for investigating the influence of material characteristics, such as electrode surfaces, polymer architectures, and impurities in the active layer, as well as post-production treatments, such as electric field alignment, on the photovoltaic performance of block-copolymer solar-cell devices. Our study reveals that a short exposure time of the polymer bulk heterojunction to an external electric field can lead to low photovoltaic performance due to an incomplete alignment process, leaving undulated or disrupted nanophases. With increasing exposure time, the nanophases align along the electric field lines, resulting in an increase in the number of continuous percolation paths and, ultimately, a reduction in exciton and charge-carrier losses. Moreover, by modifying the interaction strengths between the electrode surfaces and the active-layer components, we conclude that too low or too high an affinity of an electrode surface for one of the components can lead to defective contacts, causing a deterioration of the device performance. Finally, we infer from the study of block-copolymer nanoparticle systems that particle impurities can significantly affect the nanostructure of the polymer matrix and reduce the photovoltaic performance of the active layer. For a critical volume fraction and size of the nanoparticles, we observe a complete phase transformation of the polymer nanomorphology, leading to a drop in the internal quantum efficiency. For other particle numbers and sizes

  6. Principal component analysis-adaptive neuro-fuzzy inference system modeling and genetic algorithm optimization of adsorption of methylene blue by activated carbon derived from Pistacia khinjuk.

    PubMed

    Ghaedi, M; Ghaedi, A M; Abdi, F; Roosta, M; Vafaei, A; Asghari, A

    2013-10-01

    In the present study, activated carbon (AC) was simply derived from Pistacia khinjuk and characterized using techniques such as SEM and BET analysis. This new adsorbent was used for methylene blue (MB) adsorption. Fitting the experimental equilibrium data to various isotherm models shows the suitability and applicability of the Langmuir model. The adsorption mechanism and rate were investigated by fitting the time-dependent data to conventional kinetic models, and adsorption was found to follow the pseudo-second-order kinetic model. Principal component analysis (PCA) was used for preprocessing of the input data, and genetic algorithm optimization was used for prediction of the adsorption of methylene blue onto activated carbon derived from P. khinjuk. In our laboratory, various activated carbons, used alone or loaded with various nanoparticles, have been applied for the removal of many pollutants (Ghaedi et al., 2012). These results indicate that a small amount of the proposed adsorbent (1.0 g) is applicable for successful removal of MB (RE > 98%) in a short time (45 min) with high adsorption capacity (48-185 mg g^-1).
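
    The pseudo-second-order kinetic fit mentioned above reduces to a straight-line regression. A minimal Python sketch with made-up contact-time data (not the paper's measurements):

    ```python
    import numpy as np

    # Illustrative contact-time (min) and uptake (mg/g) data, invented here.
    t = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0])
    qt = np.array([20.1, 31.5, 38.2, 42.0, 45.8, 47.9])

    # Pseudo-second-order model: t/qt = 1/(k*qe**2) + t/qe, so a straight-line
    # fit of t/qt against t yields the equilibrium capacity qe and the rate k.
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k = 1.0 / (intercept * qe ** 2)
    print(f"qe = {qe:.1f} mg/g, k = {k:.4f} g/(mg*min)")
    ```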

  7. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  8. Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search

    PubMed Central

    2014-01-01

    Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology present a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of a planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell
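
    A fitness evaluation of this kind can be sketched in a few lines of Python with networkx; the toy "morphology graphs" below are invented for illustration, and node labels are ignored by the default edit-cost settings:

    ```python
    import networkx as nx

    # Toy morphology graphs: nodes are anatomical regions, edges adjacency.
    target = nx.Graph([("head", "trunk"), ("trunk", "tail")])
    model = nx.Graph([("head", "trunk")])   # simulated worm missing its tail

    # Graph edit distance as a fitness score: 0 means a perfect match.
    # (Worst-case exponential; practical only for small graphs like these.)
    ged = nx.graph_edit_distance(target, model)
    fitness = 1.0 / (1.0 + ged)
    print(f"edit distance = {ged}, fitness = {fitness:.2f}")
    ```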

  9. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high-order single-step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high-order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher-order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high-order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
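
    As a hedged illustration of the Taylor-expansion construction, the classical second-order instance for scalar advection is the Lax-Wendroff scheme, where the time derivatives in the Taylor series are replaced by space derivatives via the PDE. A minimal Python sketch with synthetic initial data (not the paper's higher-order stencils):

    ```python
    import numpy as np

    # Lax-Wendroff for u_t + a*u_x = 0: a single-step scheme from a Taylor
    # expansion in time, with u_tt replaced by a^2*u_xx using the PDE.
    a = 1.0
    npts, nsteps = 200, 160
    x = np.linspace(0.0, 1.0, npts, endpoint=False)
    dx = x[1] - x[0]
    dt = 0.5 * dx / a            # CFL number 0.5
    u = np.exp(-200.0 * (x - 0.3) ** 2)

    c = a * dt / dx
    for _ in range(nsteps):
        up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbours
        u = u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)
    print(f"peak after {nsteps} steps: {u.max():.3f}")
    ```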

  10. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  11. Numerical Sensitivity Analysis of a Composite Impact Absorber

    NASA Astrophysics Data System (ADS)

    Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.

    2008-08-01

    This work deals with a numerical investigation of the energy absorbing capability of structural composite components. Several difficulties are associated with the numerical simulation of a composite impact absorber, such as high geometrical non-linearities, boundary contact conditions, failure criteria and material behaviour; all these aspects make the calibration of numerical models, and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters, a main objective of any numerical investigation. The last aspect is very important for designers in order to make the application of the model to real cases robust from both a physical and a numerical point of view. First, on the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber was carried out; a sensitivity analysis with respect to the main geometrical and material parameters was then developed, using explicit finite element algorithms implemented in the Ls-Dyna code.

  12. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  13. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  14. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently to data with explicit reflection symmetries.

  15. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    NASA Astrophysics Data System (ADS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems, for example in chemical reaction engineering research laboratories and in the oil and gas, petrochemical, and refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image depends largely on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization Algorithm.
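
    The Expectation Maximization (MLEM) reconstruction compared above has a compact multiplicative update. A minimal Python sketch on a made-up one-dimensional system, not a real SPECT geometry:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy system matrix A mapping a 1D "phantom" x to projections y.
    A = rng.uniform(0.0, 1.0, size=(60, 40))
    x_true = np.zeros(40)
    x_true[15:25] = 1.0
    y = A @ x_true

    x = np.ones(40)              # positive initial estimate
    sens = A.T @ np.ones(60)     # sensitivity image (back-projected ones)
    for _ in range(200):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / sens    # classic MLEM multiplicative update
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative error after 200 iterations: {err:.3f}")
    ```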

  16. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    SciTech Connect

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems, for example in chemical reaction engineering research laboratories and in the oil and gas, petrochemical, and refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image depends largely on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, than the Expectation Maximization Algorithm.

  17. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum-Liu-Tesche equation

    NASA Astrophysics Data System (ADS)

    Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

    2016-10-01

    The transient response has a great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, and then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit-model parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method to fulfill the continuity equation was then used. Several numerical simulations were conducted to verify the proposed method. The computed results were compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used, with slight modifications, to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current approximately 10 A, so protective measures should be taken to improve the electromagnetic compatibility.

  18. Successive binary algebraic reconstruction technique: an algorithm for reconstruction from limited angle and limited number of projections decomposed into individual components.

    PubMed

    Khaled, Alia S; Beck, Thomas J

    2013-01-01

    Relatively high-radiation CT techniques are widely used in diagnostic imaging, raising concerns about cancer risk, especially in routine screening of asymptomatic populations. An important strategy for dose reduction is to reduce the number of projections, although doing so while maintaining high image quality is technically difficult. We developed an algorithm to reconstruct discrete (limited gray scale) images, decomposed into individual tissue types, from a small number of projections acquired over a limited view angle. The algorithm was tested using projection simulations from segmented CT scans of different cross sections, including the mid femur, distal femur and lower leg. It can provide high quality images from as few as 5-7 projections if the skin boundary of the cross section is used as prior information in the reconstruction process, and from 11-13 projections if the skin boundary is unknown.

  19. Introduction to Numerical Methods

    SciTech Connect

    Schoonover, Joseph A.

    2016-06-14

    These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center, giving an introduction to numerical methods. Repetitive algorithms are used to obtain approximate solutions to mathematical problems, covering sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, and they introduce errors that can lead to numerical instabilities if we are not careful.
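
    As a hedged example of one such repetitive algorithm (illustrative only, not taken from the slides), a self-contained bisection root finder in Python:

    ```python
    def bisect(f, a, b, tol=1e-10, max_iter=100):
        """Repeatedly halve [a, b] until a root of f is bracketed within tol."""
        fa, fb = f(a), f(b)
        assert fa * fb < 0, "root must be bracketed"
        for _ in range(max_iter):
            m = 0.5 * (a + b)
            fm = f(m)
            if abs(b - a) < tol or fm == 0.0:
                return m
            if fa * fm < 0:
                b, fb = m, fm
            else:
                a, fa = m, fm
        return 0.5 * (a + b)

    # Root of x^2 - 2 on [1, 2]: converges to sqrt(2).
    print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))
    ```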

  20. Improved Antishock Air-Gap Control Algorithm with Acceleration Feedforward Control for High-Numerical Aperture Near-Field Storage System Using Solid Immersion Lens

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Gon; Shin, Won-Ho; Hwang, Hyun-Woo; Jeong, Jun; Park, Kyoung-Su; Park, No-Cheol; Yang, Hyunseok; Park, Young-Pil; Moo Park, Jin; Son, Do Hyeon; Kyo Seo, Jeong; Choi, In Ho

    2010-08-01

    A near-field storage system using a solid immersion lens (SIL) has been studied as a high-density optical disc drive system. The major goal of this research is to improve the robustness of the air-gap controller for a SIL-based near-field recording (NFR) system against dynamic disturbances, such as external shocks. The servo system is essential in near-field (NF) technology because the nanogap distance between the SIL and the disc is 50 nm or less. Also, the air-gap distance must be maintained without collision between the SIL and the disc, so that a stable gap error and read-out signals can be detected when an external shock is applied. Therefore, we propose an improved air-gap control algorithm using only an acceleration feedforward controller (AFC) to maintain the air-gap distance without contact under a 4.48 G, 10 ms shock. Thus, the antishock control performance of the SIL-based NF storage system in the presence of external shocks is markedly improved. Furthermore, to enhance the antishock air-gap control performance, we use the AFC with a double disturbance observer and a dead-zone nonlinear controller. As a result, the air-gap distance is maintained without contact under a 6.56 G, 10 ms shock.

  1. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    NASA Astrophysics Data System (ADS)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. We find that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
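
    The log-log transform plus local monotone interpolation can be sketched in Python. SciPy has no Steffen spline, so the sketch below substitutes PCHIP, a different but likewise local monotonicity-preserving cubic with continuous first derivatives; the sample data are synthetic:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # Synthetic, non-uniformly sampled positive data standing in for an ELF.
    E = np.array([0.5, 1.0, 3.0, 10.0, 40.0, 200.0, 1000.0])      # eV
    elf = np.array([0.02, 0.15, 0.90, 0.40, 0.05, 0.008, 0.001])

    # Interpolate in log-log space, where the sampling is far more uniform,
    # with a local monotonicity-preserving cubic to avoid spurious wiggles.
    interp = PchipInterpolator(np.log(E), np.log(elf))
    E_new = np.logspace(np.log10(0.5), 3.0, 400)
    elf_new = np.exp(interp(np.log(E_new)))
    print(f"ELF at 5 eV ~ {float(np.exp(interp(np.log(5.0)))):.3f}")
    ```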

  2. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol

    NASA Astrophysics Data System (ADS)

    Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.

    2016-06-01

    In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithm, was developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for the determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL^-1 for CXZ, ACF and PAR, respectively, by both PCCA and MCR, while the PLS model was built for the three compounds, each in the range of 2-10 μg mL^-1. The results obtained from the proposed methods were statistically compared with those of a reported method. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. They are found suitable for the determination of the studied drugs in bulk powder and in tablets.

  3. Numerical methods in control

    NASA Astrophysics Data System (ADS)

    Mehrmann, Volker; Xu, Hongguo

    2000-11-01

    We study classical control problems like pole assignment, stabilization, linear quadratic control and H∞ control from a numerical analysis point of view. We present several examples that show the difficulties with classical approaches and suggest reformulations of the problems in a more general framework. We also discuss some new algorithmic approaches.

  4. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining-useful-life probability density function and the true remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics informs critical decisions.

  5. A numerical formulation for nonlinear ultrasonic waves propagation in fluids.

    PubMed

    Vanhille, C; Campos-Pozuelo, C

    2004-08-01

    A finite-difference algorithm is developed for analysing the nonlinear propagation of pulsed and harmonic ultrasonic waves in fluid media. The time-domain model allows simulations from linear to strongly nonlinear plane waves, including weak shock, and effects of absorption are included. All the harmonic components are obtained from a single solution process, and the evolution of any original signal can be analysed. The nonlinear solution is obtained by the implicit scheme via a fast linear solver. The numerical model is validated by comparison with analytical data. Numerical experiments are presented and discussed. The effect of the initial pulse shape on the evolution of the pressure waveform is analysed in particular.

  6. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in measuring the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multispectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, and the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known, or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. We find that, while the standard version of the VCA algorithm can attain sensitivity similar to the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally yield improvements in detection sensitivity in experimental optoacoustic measurements.

  7. Robustness of Flexible Systems With Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    2000-01-01

    Robustness of flexible systems in the presence of model uncertainties at the component level is considered. Specifically, an approach for formulating robustness of flexible systems in the presence of frequency and damping uncertainties at the component level is presented. The synthesis of the components is based on a modification of a controls-based algorithm for component mode synthesis. The formulation deals first with robustness of synthesized flexible systems. It is then extended to deal with global (non-synthesized) dynamic models with component-level uncertainties by projecting uncertainties from the component level to the system level. A numerical example involving a two-dimensional simulated docking problem is worked out to demonstrate the feasibility of the proposed approach.

  8. Fast Numerical Methods for Stochastic Partial Differential Equations

    DTIC Science & Technology

    2016-04-15

    uncertainty quantification. In the last decade much progress has been made in the construction of numerical algorithms to efficiently solve SPDEs with...applicable SPDEs with efficient numerical methods. This project is intended to address the numerical analysis as well as algorithmic aspects of SPDEs. Three...differential equations. Our work contains algorithm constructions, rigorous error analysis, and extensive numerical experiments to demonstrate our algorithm

  9. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm in a network flow analysis code to perform multi-dimensional flow calculations. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear-driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.

  10. Inverse transport calculations in optical imaging with subspace optimization algorithms

    NASA Astrophysics Data System (ADS)

    Ding, Tian; Ren, Kui

    2014-09-01

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  11. "Recognizing Numerical Constants"

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Craw, James M. (Technical Monitor)

    1995-01-01

    The advent of inexpensive, high-performance computers and of new efficient algorithms has made possible the automatic recognition of numerically computed constants. In other words, techniques now exist for determining, within certain limits, whether a computed real or complex number can be written as a simple expression involving the classical constants of mathematics. In this presentation, some of the recently discovered techniques for constant recognition, notably integer relation detection algorithms, will be presented. As an application of these methods, the author's recent work in recognizing "Euler sums" will be described in some detail.

  12. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
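
    The connected-components step described above is essentially union-find over similarity edges. A minimal Python sketch with invented records and a hand-picked similar-pair list standing in for the edit-distance test:

    ```python
    from collections import defaultdict

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    records = ["john smith", "jon smith", "mary jones", "marie jones", "bob lee"]
    similar = [(0, 1), (2, 3)]   # pairs deemed similar, e.g. by edit distance
    for i, j in similar:
        union(i, j)

    clusters = defaultdict(list)
    for i, r in enumerate(records):
        clusters[find(i)].append(r)
    print(list(clusters.values()))
    ```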

  13. Computing Steerable Principal Components of a Large Set of Images and Their Rotations

    PubMed Central

    Ponce, Colin; Singer, Amit

    2013-01-01

    We present here an efficient algorithm to compute the Principal Component Analysis (PCA) of a large image set consisting of images and, for each image, the set of its uniform rotations in the plane. We do this by pointing out the block circulant structure of the covariance matrix and utilizing that structure to compute its eigenvectors. We also demonstrate the advantages of this algorithm over similar ones with numerical experiments. Although it is useful in many settings, we illustrate the specific application of the algorithm to the problem of cryo-electron microscopy. PMID:21536533
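
    The structural fact exploited above, that circulant matrices are diagonalized by the discrete Fourier transform, is easy to check numerically. A small sketch with a 1D circulant rather than the paper's block-circulant covariance:

    ```python
    import numpy as np
    from scipy.linalg import circulant

    rng = np.random.default_rng(1)
    c = rng.standard_normal(8)
    C = circulant(c)                   # circulant matrix with first column c

    eig_fft = np.fft.fft(c)            # eigenvalues via FFT, O(n log n)
    eig_dense = np.linalg.eigvals(C)   # dense eigensolve, O(n^3)

    srt = lambda v: np.sort_complex(np.round(v, 8))
    print(np.allclose(srt(eig_fft), srt(eig_dense)))   # True
    ```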

  14. Computing steerable principal components of a large set of images and their rotations.

    PubMed

    Ponce, Colin; Singer, Amit

    2011-11-01

    We present here an efficient algorithm to compute the Principal Component Analysis (PCA) of a large image set consisting of images and, for each image, the set of its uniform rotations in the plane. We do this by pointing out the block circulant structure of the covariance matrix and utilizing that structure to compute its eigenvectors. We also demonstrate the advantages of this algorithm over similar ones with numerical experiments. Although it is useful in many settings, we illustrate the specific application of the algorithm to the problem of cryo-electron microscopy.

  15. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
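
    The core of the spectral reordering can be sketched in a few lines of Python: form the graph Laplacian, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its components. The 5x5 matrix below is a scrambled path graph invented for illustration:

    ```python
    import numpy as np

    A = np.array([[1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 1],
                  [1, 0, 1, 0, 1],
                  [0, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1]], dtype=float)

    adj = (A != 0) & ~np.eye(len(A), dtype=bool)
    L = (np.diag(adj.sum(1)) - adj).astype(float)   # graph Laplacian
    w, v = np.linalg.eigh(L)
    perm = np.argsort(v[:, 1])    # sort components of the Fiedler vector

    def bandwidth(M):
        i, j = np.nonzero(M)
        return np.abs(i - j).max()

    B = A[np.ix_(perm, perm)]
    print("bandwidth before/after:", bandwidth(A), bandwidth(B))   # 3 1
    ```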

  16. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low-bit-rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
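
    The spectral decorrelation step (the KLT) amounts to an eigendecomposition of the inter-band covariance. A minimal Python sketch on synthetic correlated "bands"; the wavelet/EZW stage is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy 4-band image: bands are correlated mixtures of two random fields.
    base = rng.standard_normal((2, 64 * 64))
    bands = rng.standard_normal((4, 2)) @ base       # shape (4, npixels)

    # KLT: eigenvectors of the inter-band covariance decorrelate the bands,
    # compacting energy into the leading components before coding.
    w, V = np.linalg.eigh(np.cov(bands))
    centered = bands - bands.mean(axis=1, keepdims=True)
    klt = V.T[::-1] @ centered     # components in decreasing-variance order
    print("component variances:", klt.var(axis=1).round(3))
    ```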

  17. A grid interfacing zonal algorithm for three-dimensional transonic flows about aircraft configurations

    NASA Astrophysics Data System (ADS)

    Atta, E. H.; Vadyak, J.

    An efficient grid interfacing zonal algorithm has been developed for computing the transonic flow field about three-dimensional multicomponent configurations. The algorithm uses the full-potential formulation and the fully-implicit approximate factorization scheme (AF2). The flow field solution is computed using a component adaptive grid approach in which separate grids are employed for the individual components in the multicomponent configuration, where each component grid is optimized for a particular geometry. The component grids are allowed to overlap, and flow field information is transmitted from one grid to another through the overlap region. An overlapped-grid scheme is implemented for a wing and a wing/pylon/nacelle configuration. Numerical results show that the present algorithm is stable, accurate, and can be used effectively to compute the flow field about complex configurations.

  18. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  19. Hindi Numerals.

    ERIC Educational Resources Information Center

    Bright, William

    In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…

  20. The TITS Algorithm: A Simple and Robust Method for Calculating Stable Shapes of Axisymmetric Vesicles

    NASA Astrophysics Data System (ADS)

    Lim, Gerald

    2005-03-01

    I have implemented a simple and robust numerical technique for calculating axisymmetric equilibrium shapes of one-component lipid bilayer vesicles. This so-called Tethered Infinitesimal Tori and Spheres (TITS) Algorithm gives shapes that are automatically stable with respect to axisymmetric perturbations. The latest version of this algorithm can optionally impose constraints on any of three geometrical quantities: the area, the volume and the pole-to-pole distance (in the case of tether formation). In this talk, I will introduce the basic principles of the TITS Algorithm and demonstrate its versatility through a few example shape calculations involving the Helfrich and Area Difference Elasticity bending free energies.

  1. Computational and performance aspects of PCA-based face-recognition algorithms.

    PubMed

    Moon, H; Phillips, P J

    2001-01-01

    Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) that changing the similarity measure produced the greatest change in performance, and (ii) that a difference in performance of +/-10% is needed to distinguish between algorithms.
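
    The modular structure (PCA projection, then a swappable similarity measure) can be sketched in Python on made-up "face" vectors; this is a generic eigenfaces illustration, not the evaluated implementations:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy gallery: 20 face vectors of 256 pixels (stand-ins for real images).
    gallery = rng.standard_normal((20, 256))
    probe = gallery[3] + 0.1 * rng.standard_normal(256)

    # PCA via SVD of the mean-centred gallery; keep 10 eigenvectors.
    mean = gallery.mean(axis=0)
    U, S, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
    basis = Vt[:10]
    g_proj = (gallery - mean) @ basis.T
    p_proj = (probe - mean) @ basis.T

    # Two of the similarity measures the study varies: L2 and cosine angle.
    d_l2 = np.linalg.norm(g_proj - p_proj, axis=1)
    cos = (g_proj @ p_proj) / (np.linalg.norm(g_proj, axis=1)
                               * np.linalg.norm(p_proj))
    print("L2 match:", d_l2.argmin(), " cosine match:", cos.argmax())
    ```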

  2. On the numeric integration of dynamic attitude equations

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Yan, Y.; Grossman, Robert

    1992-01-01

    We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
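
    The geometric-integration idea, keeping iterates exactly on the rotation group, can be illustrated with a generic Lie-group Euler step in Python (a textbook sketch under a constant body angular velocity, not the authors' algorithms):

    ```python
    import numpy as np

    def hat(w):
        """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def expm_so3(w):
        """Rodrigues formula: exact matrix exponential of hat(w) onto SO(3)."""
        th = np.linalg.norm(w)
        if th < 1e-12:
            return np.eye(3)
        K = hat(w / th)
        return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

    # Lie-group Euler step: R <- R @ expm_so3(omega * dt) stays on the
    # rotation group, unlike componentwise integration of R' = R @ hat(w).
    R = np.eye(3)
    omega = np.array([0.3, -0.2, 0.1])   # body angular velocity, rad/s
    dt = 0.01
    for _ in range(10_000):
        R = R @ expm_so3(omega * dt)
    print("orthogonality drift:", np.abs(R.T @ R - np.eye(3)).max())
    ```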

  3. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  4. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics

  5. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
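
    The basic loop of selection, crossover, and mutation can be shown with a minimal Python sketch on the standard "one-max" toy problem (illustrative only, not the tool described above):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(bits):
        return bits.sum()            # one-max: maximize the number of 1s

    pop = rng.integers(0, 2, size=(30, 40))          # 30 random bit-strings
    for gen in range(60):
        f = np.array([fitness(p) for p in pop])
        # Tournament selection: the fitter of two random candidates survives.
        i, j = rng.integers(0, 30, (2, 30))
        parents = pop[np.where(f[i] > f[j], i, j)]
        # One-point crossover between consecutive parents.
        children = parents.copy()
        for k, c in enumerate(rng.integers(1, 40, 15)):
            children[2 * k, c:] = parents[2 * k + 1, c:]
            children[2 * k + 1, c:] = parents[2 * k, c:]
        # Bit-flip mutation with small probability.
        flip = rng.random(children.shape) < 0.01
        pop = np.where(flip, 1 - children, children)
    print("best fitness:", max(fitness(p) for p in pop))
    ```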

  6. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflection algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
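
    The kurtosis-driven estimation can be illustrated with a one-unit FastICA iteration using the cubic nonlinearity on synthetic data; this is a generic textbook sketch, not the component-separation pipeline of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Two non-Gaussian sources (uniform and Laplacian), mixed linearly.
    S = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(size=5000)])
    X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S

    # Whiten the mixtures to unit covariance.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X

    # One-unit FastICA with the cubic (kurtosis-based) nonlinearity.
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(100):
        w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
        w /= np.linalg.norm(w)
    print("recovered unmixing direction:", w.round(3))
    ```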

  7. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflection algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  8. Cold-standby redundancy allocation problem with degrading components

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Xiong, Junlin; Xie, Min

    2015-11-01

    Components in cold-standby state are usually assumed to be as good as new when they are activated. However, even in a standby environment, components will suffer performance degradation. This article presents a study of a redundancy allocation problem (RAP) for cold-standby systems with degrading components. The objective of the RAP is to determine an optimal design configuration of components to maximize system reliability subject to system resource constraints (e.g. cost, weight). As in most cases, it is not possible to obtain a closed-form expression for this problem, and hence an approximate objective function is presented. A genetic algorithm with dual mutation is developed to solve such a constrained optimization problem. Finally, a numerical example is given to illustrate the proposed solution methodology.

  9. Computational Algorithms for Device-Circuit Coupling

    SciTech Connect

    KEITER, ERIC R.; HUTCHINSON, SCOTT A.; HOEKSTRA, ROBERT J.; RANKIN, ERIC LAMONT; RUSSO, THOMAS V.; WATERS, LON J.

    2003-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.

  10. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  11. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any internal parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.

  12. Methods of information theory and algorithmic complexity for network biology.

    PubMed

    Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper

    2016-03-01

    We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
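
    As a concrete instance of one of the simplest quantities involved, the sketch below computes the Shannon entropy of a graph's degree distribution; this is only one of several entropy variants such a survey considers.

    ```python
    import math
    from collections import Counter

    def degree_entropy(edges, n):
        # Shannon entropy of the degree distribution: H = -sum p_k log2 p_k,
        # where p_k is the fraction of the n nodes having degree k.
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        counts = Counter(deg.get(i, 0) for i in range(n))
        return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

    # A 4-node path graph: degrees are 1, 2, 2, 1 -> p_1 = p_2 = 1/2, H = 1 bit.
    print(degree_entropy([(0, 1), (1, 2), (2, 3)], 4))
    ```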

  13. Numerical anomalies mimicking physical effects

    SciTech Connect

    Menikoff, R.

    1995-09-01

    Numerical simulations of flows with shock waves typically use finite-difference shock-capturing algorithms. These algorithms give a shock a numerical width in order to generate the entropy increase that must occur across a shock wave. For algorithms in conservation form, steady-state shock waves are insensitive to the numerical dissipation because of the Hugoniot jump conditions. However, localized numerical errors occur when shock waves interact. Examples are the "excess wall heating" in the Noh problem (shock reflected from rigid wall), errors when a shock impacts a material interface or an abrupt change in mesh spacing, and the start-up error from initializing a shock as a discontinuity. This class of anomalies can be explained by the entropy generation that occurs in the transient flow when a shock profile is formed or changed. The entropy error is localized spatially but under mesh refinement does not decrease in magnitude. Similar effects have been observed in shock tube experiments with partly dispersed shock waves. In this case, the shock has a physical width due to a relaxation process. An entropy anomaly from a transient shock interaction is inherent in the structure of the conservation equations for fluid flow. The anomaly can be expected to occur whenever heat conduction can be neglected and a shock wave has a non-zero width, whether the width is physical or numerical. Thus, the numerical anomaly from an artificial shock width mimics a real physical effect.

  14. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  15. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  16. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  17. Numerical analysis of strongly nonlinear extensional vibrations in elastic rods.

    PubMed

    Vanhille, Christian; Campos-Pozuelo, Cleofé

    2007-01-01

    In the framework of transduction, nondestructive testing, and nonlinear acoustic characterization, this article presents the analysis of strongly nonlinear vibrations by means of an original numerical algorithm. In acoustic and transducer applications in extreme working conditions, such as those induced by the generation of high-power ultrasound, the analysis of nonlinear ultrasonic vibrations is fundamental. The excitation and analysis of nonlinear vibrations is also an emerging technique in nonlinear characterization for damage detection. A third-order evolution equation is derived and numerically solved for extensional waves in isotropic dissipative media. A nine-constant theory of elasticity for isotropic solids is constructed, and the nonlinearity parameters corresponding to extensional waves are proposed. The nonlinear differential equation is solved by using a new numerical algorithm working in the time domain. The proposed finite-difference numerical method is implicit and only requires the solution of a linear set of equations at each time step. The model allows the analysis of strongly nonlinear, one-dimensional vibrations and can be used for prediction as well as characterization. Vibration waveforms are calculated at different points, and results are compared for different excitation levels and boundary conditions. Amplitude distributions along the rod axis for every harmonic component are also evaluated. Special attention is given to the study of high-amplitude damping of vibrations by means of several simulations. Simulations are performed for amplitudes ranging from linear to nonlinear and weak shock.

  18. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
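
    For reference, the three rules named above have compact implementations; a quick sketch of the composite forms with n subintervals.

    ```python
    def midpoint(f, a, b, n):
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    def trapezoidal(f, a, b, n):
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    def simpson(f, a, b, n):  # n must be even
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
        return s * h / 3

    # Integrate x^2 on [0, 1]; the exact value is 1/3.
    f = lambda x: x * x
    print(midpoint(f, 0, 1, 100), trapezoidal(f, 0, 1, 100), simpson(f, 0, 1, 100))
    ```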

  19. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  20. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  1. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results achieved in this study, we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.

  2. Numerical Optimization

    DTIC Science & Technology

    1992-12-01

    We consider a new method for the numerical solution both of nonlinear systems of equations and of complementarity problems. Related papers included in the report: "An Inexact Continuous Method for the Solution of Large Systems of Equations and Complementarity Problems," Matematica, Serie VII, Volume 9, Roma (1989), 521-543; and Appendix 2, "A Quadratically Convergent Method for Linear Programming," Stefano Herzel, Dipartimento di Matematica G. Castelnuovo, Roma, Italy.

  3. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  4. Fast Steerable Principal Component Analysis.

    PubMed

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-03-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.

  5. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.

  6. COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE

    SciTech Connect

    Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu, Hao

    2011-08-10

    Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the

  7. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performance of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter.

  8. Empirical Studies of the Value of Algorithm Animation in Algorithm Understanding

    DTIC Science & Technology

    1993-08-01

    A series of studies is presented using algorithm animation to teach computer algorithms. These studies are organized into three components: eliciting...lecture with experimenter-preprepared data sets. This work has implications for the design and use of animated algorithms in teaching computer algorithms and

  9. (n, N) type maintenance policy for multi-component systems with failure interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul

    2015-04-01

    This paper studies maintenance policies for multi-component systems that involve failure interactions and opportunistic maintenance (OM). This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space of the MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on the maintenance policy.

  10. Numerical inversion of finite Toeplitz matrices and vector Toeplitz matrices

    NASA Technical Reports Server (NTRS)

    Bareiss, E. H.

    1969-01-01

    A numerical technique increases the efficiency of numerical methods involving Toeplitz matrices by reducing the number of multiplications required for an N-th order Toeplitz matrix from N-cubed to N-squared. Some efficient algorithms are given.
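
    The structure-exploiting saving is easy to demonstrate with a modern Levinson-type solver; the sketch below uses SciPy's solve_toeplitz, an O(N^2) method in the same spirit, though not Bareiss's algorithm itself.

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz

    # A Toeplitz matrix is fully defined by its first column c and first row r.
    c = np.array([4.0, 1.0, 0.5, 0.25])   # first column
    r = np.array([4.0, 2.0, 1.0, 0.5])    # first row
    b = np.ones(4)

    # O(N^2) Levinson-type solve, exploiting the Toeplitz structure:
    x_fast = solve_toeplitz((c, r), b)

    # O(N^3) dense solve, for comparison:
    x_dense = np.linalg.solve(toeplitz(c, r), b)
    print(np.allclose(x_fast, x_dense))  # True
    ```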

  11. Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography.

    PubMed

    Lim, JooWon; Lee, KyeoReh; Jin, Kyong Hwan; Shin, Seungwoo; Lee, SeoEun; Park, YongKeun; Ye, Jong Chul

    2015-06-29

    In optical diffraction tomography (ODT), there exist certain spatial frequency components that cannot be measured due to the limited projection angles imposed by the numerical aperture of objective lenses. This limitation, often called the missing cone problem, causes the under-estimation of refractive index (RI) values in tomograms and results in severe elongations of RI distributions along the optical axis. To address this missing cone problem, several iterative reconstruction algorithms have been introduced exploiting prior knowledge such as positivity in RI differences or edges of samples. In this paper, various existing iterative reconstruction algorithms are systematically compared for mitigating the missing cone problem in ODT. In particular, three representative regularization schemes, edge preserving, total variation regularization, and the Gerchberg-Papoulis algorithm, were numerically and experimentally evaluated using spherical beads as well as real biological samples: human red blood cells and hepatocyte cells. Our work will provide important guidelines for choosing the appropriate regularization in ODT.

  12. Direct dynamics simulations using Hessian-based predictor-corrector integration algorithms.

    PubMed

    Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L

    2007-01-28

    In previous research [J. Chem. Phys. 111, 3800 (1999)] a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H2CO -> H2 + CO, O3 + C3H6, and F- + CH3OOH chemical reactions.
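
    The predictor step follows the local second-order Taylor expansion of the potential; a schematic sketch in Cartesian coordinates under simplifying assumptions (the fifth-order corrector and trust-radius update logic are omitted, and the usage example is a hypothetical 1-D harmonic potential).

    ```python
    import numpy as np

    def predictor_step(x0, v0, grad, hess, mass, dt, n_sub, trust):
        # Integrate on the local quadratic surface
        #   V(x) ~ V(x0) + g.(x - x0) + 0.5 (x - x0)^T H (x - x0)
        # with small velocity-Verlet substeps, stopping at the trust radius.
        x, v = x0.copy(), v0.copy()
        a = -(grad + hess @ (x - x0)) / mass
        for _ in range(n_sub):
            x_new = x + v * dt + 0.5 * a * dt * dt
            if np.linalg.norm(x_new - x0) > trust:
                break  # leave the region where the Taylor expansion is trusted
            a_new = -(grad + hess @ (x_new - x0)) / mass
            v = v + 0.5 * (a + a_new) * dt
            x, a = x_new, a_new
        return x, v

    # Hypothetical 1-D usage with a quadratic potential V = 0.5 k x^2:
    x0, v0 = np.array([0.0]), np.array([1.0])
    g, H = np.array([0.0]), np.array([[1.0]])
    print(predictor_step(x0, v0, g, H, mass=1.0, dt=0.01, n_sub=50, trust=0.5))
    ```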

  13. Visualization of a Numerical Simulation of GW 150914

    NASA Astrophysics Data System (ADS)

    Rosato, Nicole; Healy, James; Lousto, Carlos

    2017-01-01

    We present an analysis of a simulation displaying apparent horizon curvature and radiation emitted from a binary black hole system modeling GW-150914 during merger. The simulation follows the system from seven orbits prior to merger to the resultant Kerr black hole. Horizon curvature was calculated using a mean curvature flow algorithm. Radiation data was visualized via the Ψ4 component of the Weyl scalars, which were determined using a numerical quasi-Kinnersley method. We also present a comparative study of the differences in quasi-Kinnersley and PsiKadelia tetrads to construct Ψ4. The analysis is displayed on a movie generated from these numerical results, and was done using VisIt software from Lawrence Livermore National Laboratory. This simulation and analysis gives more insight into the merger of the system GW 150914.

  14. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. I - The dynamics of time discretization and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1991-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factors. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability are not only highly scheme and problem dependent, but also initial-data and boundary-condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  15. Dynamical approach study of spurious steady-state numerical solutions of nonlinear differential equations. Part 1: The ODE connection and its implications for algorithm development in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1990-01-01

    Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factors. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability are not only highly scheme and problem dependent, but also initial-data and boundary-condition dependent, and not limited to time steps that are beyond the linearized stability limit.

  16. Brain components

    MedlinePlus Videos and Cool Tools

    The brain is composed of more than a thousand billion neurons. Specific groups of them, working in concert, provide ... of information. The 3 major components of the brain are the cerebrum, cerebellum, and brain stem. The ...

  17. Revised numerical wrapper for PIES code

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel; Reiman, Allan; Monticello, Donald

    2015-11-01

    A revised external numerical wrapper has been developed for the Princeton Iterative Equilibrium Solver (PIES code), which is capable of calculating 3D MHD equilibria with islands. The numerical wrapper has been demonstrated to greatly improve the rate of convergence in numerous cases corresponding to equilibria in the TFTR device where magnetic islands are present. The numerical wrapper makes use of a Jacobian-free Newton-Krylov solver along with adaptive preconditioning and a sophisticated subspace-restricted Levenberg-Marquardt backtracking algorithm. The details of the numerical wrapper and several sample results are presented.
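
    The defining trick of a Jacobian-free Newton-Krylov solver is that the Krylov method only ever needs Jacobian-vector products, which can be approximated from residual evaluations alone; a minimal sketch with a toy residual (the residual and test point are illustrative).

    ```python
    import numpy as np

    def jacobian_vector_product(F, x, v, eps=1e-7):
        # Jacobian-free approximation J(x) @ v ~ (F(x + eps*v) - F(x)) / eps,
        # which is all a Krylov solver (e.g. GMRES) needs from the nonlinear
        # residual F -- the Jacobian matrix itself is never formed.
        return (F(x + eps * v) - F(x)) / eps

    # Toy residual: F(x) = [x0^2 - 1, x0*x1 - 2]; its Jacobian is known exactly.
    F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] * x[1] - 2.0])
    x = np.array([2.0, 3.0])
    v = np.array([1.0, 1.0])
    J = np.array([[2 * x[0], 0.0], [x[1], x[0]]])
    print(jacobian_vector_product(F, x, v), J @ v)  # nearly equal
    ```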

  18. Ab initio two-component Ehrenfest dynamics

    SciTech Connect

    Ding, Feizhi; Goings, Joshua J.; Liu, Hongbin; Lingerfelt, David B.; Li, Xiaosong

    2015-09-21

    We present an ab initio two-component Ehrenfest-based mixed quantum/classical molecular dynamics method to describe the effect of nuclear motion on the electron spin dynamics (and vice versa) in molecular systems. The two-component time-dependent non-collinear density functional theory is used for the propagation of spin-polarized electrons while the nuclei are treated classically. We use a three-time-step algorithm for the numerical integration of the coupled equations of motion, namely, the velocity Verlet for nuclear motion, the nuclear-position-dependent midpoint Fock update, and the modified midpoint and unitary transformation method for electronic propagation. As a test case, the method is applied to the dissociation of H2 and O2. In contrast to conventional Ehrenfest dynamics, this two-component approach provides a first principles description of the dynamics of non-collinear (e.g., spin-frustrated) magnetic materials, as well as the proper description of spin-state crossover, spin-rotation, and spin-flip dynamics by relaxing the constraint on spin configuration. This method also holds potential for applications to spin transport in molecular or even nanoscale magnetic devices.
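
    Schematically, the electronic substep amounts to a unitary transformation of the one-electron density matrix built from the midpoint-updated Fock matrix; a bare-bones sketch under simplifying assumptions (orthonormal basis, Fock matrix held fixed over the substep, atomic units, and a toy two-level example).

    ```python
    import numpy as np
    from scipy.linalg import expm

    def propagate_density(rho, fock, dt):
        # Unitary propagation of the density matrix over one electronic
        # substep: rho(t + dt) = U rho(t) U^dagger with U = exp(-i F dt).
        U = expm(-1j * fock * dt)
        return U @ rho @ U.conj().T

    # Tiny two-level example with a Hermitian Fock matrix (illustrative).
    fock = np.array([[0.0, 0.1], [0.1, 0.5]])
    rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
    rho1 = propagate_density(rho0, fock, dt=0.05)
    print(np.trace(rho1).real)  # trace (electron count) is conserved: 1.0
    ```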

  19. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    Processing the information provided by measurement results is one of the most important components of geodetic technology. The dynamic development of this field improves on classic numerical algorithms in settings where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling such processes. Integrating neural networks with parameter optimization methods makes it possible to avoid arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the Group Method of Data Handling (GMDH) algorithm, which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  20. Numerical recipes, The art of scientific computing

    SciTech Connect

    Press, W.H.; Flannery, B.P.; Teukolsky, S.; Vetterling, W.T.

    1986-01-01

    Seventeen chapters, divided into 130 sections, provide a self-contained treatment that derives, critically discusses, and actually implements over 200 of the most important numerical algorithms for scientific work. Each algorithm is presented both in FORTRAN and Pascal, with the source programs printed in the book itself. The scope of Numerical Recipes ranges from standard areas of numerical analysis (linear algebra, differential equations, roots) through subjects useful to signal processing (Fourier methods, filtering), data analysis (least squares, robust fitting, statistical functions), and simulation (random deviates and Monte Carlo). The routines themselves are available for a wide variety of different computers, from personal computers to mainframes, and are largely portable among different machines.

  1. Programming the gradient projection algorithm

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1983-01-01

    The gradient projection method of numerical optimization which is applied to problems having linear constraints but nonlinear objective functions is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. In order to verify the theoretical results a digital computer is used to simulate the algorithm.
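
    For the linearly constrained case described above, the core step projects the negative gradient onto the null space of the active constraint matrix; a minimal sketch for equality constraints Ax = b, where the objective, constraint, and step size are illustrative choices.

    ```python
    import numpy as np

    def projected_gradient_step(x, grad, A, step):
        # Projection onto the null space of A: P = I - A^T (A A^T)^-1 A.
        # Moving along -P @ grad keeps A x = b satisfied.
        AAt_inv = np.linalg.inv(A @ A.T)
        P = np.eye(len(x)) - A.T @ AAt_inv @ A
        return x - step * (P @ grad)

    # Minimize f(x) = x0^2 + 2*x1^2 subject to x0 + x1 = 1 (illustrative).
    A = np.array([[1.0, 1.0]])
    x = np.array([1.0, 0.0])  # feasible starting point
    for _ in range(100):
        g = np.array([2 * x[0], 4 * x[1]])
        x = projected_gradient_step(x, g, A, 0.1)
    print(x)  # approaches the constrained minimizer (2/3, 1/3)
    ```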

  2. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  3. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  4. Single-shot acquisition of optical direct and global components using single coded pattern projection

    NASA Astrophysics Data System (ADS)

    Ando, Takamasa; Horisaki, Ryoichi; Nakamura, Tomoya; Tanida, Jun

    2015-04-01

    We present a single-shot approach for separating optical direct and global components from an object. The former component is caused by direct illumination that travels from a light source to a point on the object and goes back to a camera directly. The latter one is caused by indirect illumination that travels from the light source to a point on the object through other points and goes back to the camera, such as multi-path reflection, diffusion, and scattering, or from another unintended light source, such as ambient illumination. In this method, the direct component is modulated by a single coded pattern from a projector. The modulated direct and un-modulated global components are integrated on an image sensor, which captures a single image. These two components are separated from the single captured image with a numerical algorithm employing a sparsity constraint. Ambient light separation and descattering based on the proposed scheme are experimentally demonstrated.

  5. Multi-component Cahn-Hilliard system with different boundary conditions in complex domains

    NASA Astrophysics Data System (ADS)

    Li, Yibao; Choi, Jung-Il; Kim, Junseok

    2016-10-01

    We propose an efficient phase-field model for multi-component Cahn-Hilliard (CH) systems in complex domains. The original multi-component Cahn-Hilliard system with a fixed phase is modified to make it suitable for complex domains on a Cartesian grid, with contact-angle or no-mass-flow boundary conditions on the complex boundaries. The proposed method uses a practically unconditionally gradient-stable nonlinear splitting numerical scheme. Further, a nonlinear full approximation storage multigrid algorithm is used for solving semi-implicit formulations of the multi-component CH system, incorporating an adaptive mesh refinement technique. The robustness of the proposed method is validated through various numerical simulations including multi-phase separations via spinodal decomposition, equilibrium contact angle problems, and multi-phase flows with a background velocity field in complex domains.

  6. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Yoo, Seokwon

    2014-12-01

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the "genetic parameter vector" of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem: the one-bit oracle decision problem, often called the Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch's algorithm.

  7. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data, and graphs based on these data, are also included.
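
    Maximum-entropy spectra have an all-pole (autoregressive) form; the sketch below estimates such a spectrum using the Yule-Walker equations as a stand-in for the Burg recursion usually used in MEM. The model order and test signal are illustrative choices.

    ```python
    import numpy as np

    def yule_walker_psd(x, order, nfreq=256):
        # All-pole spectral estimate: fit an AR(order) model, then evaluate
        # PSD(f) = sigma^2 / |1 - sum_k a_k exp(-2i*pi*f*k)|^2.
        x = np.asarray(x, float) - np.mean(x)
        r = np.correlate(x, x, "full")[len(x) - 1 :] / len(x)  # autocorrelation
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1 : order + 1])                # AR coefficients
        sigma2 = r[0] - a @ r[1 : order + 1]                    # noise variance
        freqs = np.linspace(0, 0.5, nfreq)
        z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
        return freqs, sigma2 / np.abs(1 - z @ a) ** 2

    # Two closely spaced sinusoids in noise; a modest AR order resolves both.
    t = np.arange(512)
    sig = np.sin(2 * np.pi * 0.10 * t) + np.sin(2 * np.pi * 0.13 * t)
    sig += 0.1 * np.random.randn(512)
    f, p = yule_walker_psd(sig, order=16)
    print(f[np.argmax(p)])  # near 0.10 or 0.13
    ```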

  8. Force-Control Algorithm for Surface Sampling

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Quadrelli, Marco B.; Phan, Linh

    2008-01-01

    A G-FCON algorithm is designed for small-body surface sampling. It has a linearization component and a feedback component to enhance performance. The algorithm regulates the contact force between the tip of a robotic arm attached to a spacecraft and a surface during sampling.

  9. Scientific Software Component Technology

    SciTech Connect

    Kohn, S.; Dykman, N.; Kumfert, G.; Smolinski, B.

    2000-02-16

    We are developing new software component technology for high-performance parallel scientific computing to address issues of complexity, re-use, and interoperability for laboratory software. Component technology enables cross-project code re-use, reduces software development costs, and provides additional simulation capabilities for massively parallel laboratory application codes. The success of our approach will be measured by its impact on DOE mathematical and scientific software efforts. Thus, we are collaborating closely with library developers and application scientists in the Common Component Architecture forum, the Equation Solver Interface forum, and other DOE mathematical software groups to gather requirements, write and adopt a variety of design specifications, and develop demonstration projects to validate our approach. Numerical simulation is essential to the science mission at the laboratory. However, it is becoming increasingly difficult to manage the complexity of modern simulation software. Computational scientists develop complex, three-dimensional, massively parallel, full-physics simulations that require the integration of diverse software packages written by outside development teams. Currently, the integration of a new software package, such as a new linear solver library, can require several months of effort. Current industry component technologies such as CORBA, JavaBeans, and COM have all been used successfully in the business domain to reduce software development costs and increase software quality. However, these existing industry component infrastructures will not scale to support massively parallel applications in science and engineering. In particular, they do not address issues related to high-performance parallel computing on ASCI-class machines, such as fast in-process connections between components, language interoperability for scientific languages such as Fortran, parallel data redistribution between components, and massively

  10. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands, and they have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

  11. Battery component

    SciTech Connect

    Goebel, F.; Batson, D.C.; Miserendino, A.J.; Boyle, G.

    1988-03-15

    A mechanical component for reserve type electrochemical batteries having cylindrical porous members is described comprising a disc having: (i) circular grooves in one flat side for accepting the porous members; and (ii) at least one radial channel in the opposite flat side in fluid communication with the grooves.

  12. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  13. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  14. Statistical algorithms for a comprehensive test ban treaty discrimination framework

    SciTech Connect

    Foote, N.D.; Anderson, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.; Hagedorn, D.N.

    1996-10-01

    Seismic discrimination is the process of identifying a candidate seismic event as an earthquake or explosion using information from seismic waveform features (seismic discriminants). In the CTBT setting, low energy seismic activity must be detected and identified. A defensible CTBT discrimination decision requires an understanding of false-negative (declaring an event to be an earthquake given it is an explosion) and false-positive (declaring an event to be an explosion given it is an earthquake) rates. These rates are derived from a statistical discrimination framework. A discrimination framework can be as simple as a single statistical algorithm or it can be a mathematical construct that integrates many different types of statistical algorithms and CTBT technologies. In either case, the result is the identification of an event and the numerical assessment of the accuracy of an identification, that is, false-negative and false-positive rates. In Anderson et al., eight statistical discrimination algorithms are evaluated relative to their ability to give results that effectively contribute to a decision process and to be interpretable with physical (seismic) theory. These algorithms can be discrimination frameworks individually or components of a larger framework. The eight algorithms are linear discrimination (LDA), quadratic discrimination (QDA), variably regularized discrimination (VRDA), flexible discrimination (FDA), logistic discrimination, K-th nearest neighbor (KNN), kernel discrimination, and classification and regression trees (CART). In this report, the performance of these eight algorithms, as applied to regional seismic data, is documented. Based on the findings in Anderson et al. and this analysis, CART is an appropriate algorithm for an automated CTBT setting.

  15. Scientific Component Technology Initiative

    SciTech Connect

    Kohn, S; Bosl, B; Dahlgren, T; Kumfert, G; Smith, S

    2003-02-07

    The laboratory has invested a significant amount of resources towards the development of high-performance scientific simulation software, including numerical libraries, visualization, steering, software frameworks, and physics packages. Unfortunately, because this software was not designed for interoperability and re-use, it is often difficult to share these sophisticated software packages among applications due to differences in implementation language, programming style, or calling interfaces. This LDRD Strategic Initiative investigated and developed software component technology for high-performance parallel scientific computing to address problems of complexity, re-use, and interoperability for laboratory software. Component technology is an extension of scripting and object-oriented software development techniques that specifically focuses on the needs of software interoperability. Component approaches based on CORBA, COM, and Java technologies are widely used in industry; however, they do not support massively parallel applications in science and engineering. Our research focused on the unique requirements of scientific computing on ASCI-class machines, such as fast in-process connections among components, language interoperability for scientific languages, and data distribution support for massively parallel SPMD components.

  16. Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.

    1995-07-01

    Target acquisition in a high-clutter environment, in all weather, at any time of day represents a much-needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division WL/MNG, has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost-efficient system with these capabilities. Critical elements of any such seekers are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model, data analysis workstations, such as the TABILS Analysis and Management System (TAMS), and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical-interface-driven simulation by which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1, is presented in this paper using the MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.

  17. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  18. New pole placement algorithm - Polynomial matrix approach

    NASA Technical Reports Server (NTRS)

    Shafai, B.; Keel, L. H.

    1990-01-01

    A simple and direct pole-placement algorithm is introduced for dynamical systems having a block companion matrix A. The algorithm utilizes well-established properties of matrix polynomials. Pole placement is achieved by appropriately assigning coefficient matrices of the corresponding matrix polynomial. This involves only matrix additions and multiplications without requiring matrix inversion. A numerical example is given for the purpose of illustration.

  19. A Locomotion Control Algorithm for Robotic Linkage Systems

    SciTech Connect

    Dohner, Jeffrey L.

    2016-10-01

    This dissertation describes the development of a control algorithm that transitions a robotic linkage system between stabilized states, producing responsive locomotion. The developed algorithm is demonstrated using a simple robotic construction consisting of a few links, with actuation and sensing at each joint. Numerical and experimental validation is presented.

  20. Evolving evolutionary algorithms using linear genetic programming.

    PubMed

    Oltean, Mihai

    2005-01-01

    A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches for several well-known benchmarking problems.

  1. Numerical Continuation of Hamiltonian Relative Periodic Orbits

    NASA Astrophysics Data System (ADS)

    Wulff, Claudia; Schebesch, Andreas

    2008-08-01

    The bifurcation theory and numerics of periodic orbits of general dynamical systems is well developed, and in recent years, there has been rapid progress in the development of a bifurcation theory for dynamical systems with structure, such as symmetry or symplecticity. But as yet, there are few results on the numerical computation of those bifurcations. The methods we present in this paper are a first step toward a systematic numerical analysis of generic bifurcations of Hamiltonian symmetric periodic orbits and relative periodic orbits (RPOs). First, we show how to numerically exploit spatio-temporal symmetries of Hamiltonian periodic orbits. Then we describe a general method for the numerical computation of RPOs persisting from periodic orbits in a symmetry breaking bifurcation. Finally, we present an algorithm for the numerical continuation of non-degenerate Hamiltonian relative periodic orbits with regular drift-momentum pair. Our path following algorithm is based on a multiple shooting algorithm for the numerical computation of periodic orbits via an adaptive Poincaré section and a tangential continuation method with implicit reparametrization. We apply our methods to continue the famous figure eight choreography of the three-body system. We find a relative period doubling bifurcation of the planar rotating eight family and compute the rotating choreographies bifurcating from it.

  2. Reliable numerical computation in an optimal output-feedback design

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is part of a design algorithm for an optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when applied to control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, the approach does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  3. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.

  4. Numerical taxonomy on data: Experimental results

    SciTech Connect

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. The first positive result for numerical taxonomy showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T [L∞(T − D)], then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, it gave a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  5. Unification of algorithms for minimum mode optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Yi; Xiao, Penghao; Henkelman, Graeme

    2014-01-01

    Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.
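
    A minimal, matrix-free sketch of the shared component the authors identify: extracting the lowest-curvature mode from Hessian-vector products alone. Here scipy's eigsh (a Lanczos-type solver) is driven by a finite-difference Hessian-vector product; the toy gradient function stands in for a force call and is an assumption, not code from the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def gradient(x):
    # Toy quadratic energy 0.5 * x^T H x with a known H, for demonstration only.
    H = np.diag([1.0, 2.0, -0.5, 4.0])
    return H @ x

def hessian_vector(x, v, eps=1e-5):
    # Central finite difference of the gradient along v: H v ~ (g(x+e v) - g(x-e v)) / 2e
    return (gradient(x + eps * v) - gradient(x - eps * v)) / (2 * eps)

x0 = np.zeros(4)
op = LinearOperator((4, 4), matvec=lambda v: hessian_vector(x0, np.ravel(v)),
                    dtype=float)
vals, vecs = eigsh(op, k=1, which='SA')  # 'SA': smallest algebraic eigenvalue
print(vals[0])  # ~ -0.5, the minimum curvature, found without forming H
```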

  6. Unification of algorithms for minimum mode optimization.

    PubMed

    Zeng, Yi; Xiao, Penghao; Henkelman, Graeme

    2014-01-28

    Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.

  7. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi; Vanrosendale, John

    1989-01-01

    A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. An implicit three-point restriction can be used to factor these large stencils, in order to retain the usual five- or nine-point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine-point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.

  8. Enhancement of event related potentials by iterative restoration algorithms

    NASA Astrophysics Data System (ADS)

    Pomalaza-Raez, Carlos A.; McGillem, Clare D.

    1986-12-01

    An iterative procedure for the restoration of event related potentials (ERP) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data show clearly the presence of enhanced and regenerated components in the restored ERP's. The procedure is easy to implement which makes it convenient when compared to other proposed techniques for the restoration of ERP signals.
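
    The paper's restoration uses a time-varying linear distortion model with a positivity/negativity constraint; as a generic stand-in, the sketch below runs a projected Landweber iteration (a simple constrained iterative restoration) on a synthetic smeared spike train. The Gaussian blur matrix, step-size rule, and noise level are assumptions for illustration.

```python
import numpy as np

def restore(y, A, n_iter=200, step=None):
    """Iterate x <- P_+( x + step * A^T (y - A x) ), with P_+ the positivity projection."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative Landweber step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)
        x = np.clip(x, 0.0, None)                # enforce the positivity constraint
    return x

# Toy example: two spikes smeared by a Gaussian blur plus a little noise.
n = 64
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
truth = np.zeros(n); truth[[20, 40]] = 1.0
y = A @ truth + 0.01 * np.random.default_rng(0).normal(size=n)
x_hat = restore(y, A)   # sharpened, nonnegative estimate of the spike train
```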

  9. Four-stage computational technology with adaptive numerical methods for computational aerodynamics

    NASA Astrophysics Data System (ADS)

    Shaydurov, V.; Liu, T.; Zheng, Z.

    2012-10-01

    Computational aerodynamics is a key technology in aircraft design which is ahead of physical experiment and complements it. Of course all three components of computational modeling are actively developed: mathematical models of real aerodynamic processes, numerical algorithms, and high-performance computing. The most impressive progress has been made in the field of computing, though with a considerable complication of computer architecture. Numerical algorithms are developed more conservatively: more precisely, they are offered and theoretically justified for simpler mathematical problems. Nevertheless, computational mathematics has now amassed a whole palette of numerical algorithms that can provide acceptable accuracy and an interface between modern mathematical models in aerodynamics and high-performance computers. A significant step in this direction was the European Project ADIGMA, whose positive experience will be used in the International Project TRISTAM for further movement in the field of computational technologies for aerodynamics. This paper gives a general overview of the objectives and approaches intended for use and a description of the recommended four-stage computer technology.

  10. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_{i,j}(u, v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the
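
    As a toy illustration of the RBF setting in Topic 1 (constant shape parameter only; the thesis's spatially variable ε is not attempted here), the following sketch interpolates the Runge function, a case known to provoke edge oscillations, with Gaussian RBFs:

```python
import numpy as np

# Minimal Gaussian-RBF interpolation in 1-D with a constant shape parameter eps.

def rbf_fit(x, f, eps):
    K = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)  # Gaussian kernel matrix
    return np.linalg.solve(K, f)                         # interpolation weights

def rbf_eval(x, w, eps, xq):
    K = np.exp(-(eps * (xq[:, None] - x[None, :])) ** 2)
    return K @ w

x = np.linspace(-1, 1, 15)
f = 1.0 / (1 + 25 * x**2)            # Runge function, prone to edge oscillation
w = rbf_fit(x, f, eps=3.0)
xq = np.linspace(-1, 1, 201)
err = np.abs(rbf_eval(x, w, 3.0, xq) - 1.0 / (1 + 25 * xq**2))
print(err.max())                     # interpolation error away from the nodes
```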

  11. Manufacturing complex silica aerogel target components

    SciTech Connect

    Defriend Obrey, Kimberly Ann; Day, Robert D; Espinoza, Brent F; Hatch, Doug; Patterson, Brian M; Feng, Shihai

    2008-01-01

    Aerogel is a material used in numerous components in High Energy Density Physics targets. In the past these components were molded into the proper shapes. Artifacts left in the parts from the molding process, such as contour irregularities from shrinkage and density gradients caused by the skin, have caused LANL to pursue machining as a way to make the components.

  12. Perspectives in numerical astrophysics:

    NASA Astrophysics Data System (ADS)

    Reverdy, V.

    2016-12-01

    In this discussion paper, we investigate the current and future status of numerical astrophysics and highlight key questions concerning the transition to the exascale era. We first discuss the fact that one of the main motivations behind high-performance simulations should not be the reproduction of observational or experimental data, but the understanding of the emergence of complexity from fundamental laws. This motivation is put into perspective regarding the quest for more computational power, and we argue that extra computational resources can be used to gain in abstraction. Then, the readiness level of present-day simulation codes with regard to upcoming exascale architectures is examined, and two major challenges are raised concerning both the central role of data movement for performance and the growing complexity of codes. Software architecture is finally presented as a key component to make the most of upcoming architectures while solving original physics problems.

  13. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well-known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.

  14. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve on the slow processing speed of classical image encryption algorithms and enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.
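
    For illustration of the classical ingredient only, the sketch below integrates the ordinary (non-hyper-chaotic) Chen system with its standard parameters a = 35, b = 3, c = 28 and quantizes the trajectory into keystream bytes. The paper's hyper-chaotic variant, its scrambling/diffusion scheme, and the quantum Fourier transform stage are not reproduced, and the quantization rule here is an assumption.

```python
import numpy as np

def chen_keystream(n, x0=(0.1, 0.2, 0.3), dt=0.001, burn=5000):
    """Forward-Euler integration of the Chen system; one byte per retained step."""
    a, b, c = 35.0, 3.0, 28.0      # standard Chen-system parameters
    x, y, z = x0
    out = []
    for i in range(burn + n):      # burn-in discards the transient
        dx = a * (y - x)
        dy = (c - a) * x - x * z + c * y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn:
            out.append(int(abs(x) * 1e6) % 256)   # crude byte quantization
    return np.array(out, dtype=np.uint8)

ks = chen_keystream(16)
cipher = np.frombuffer(b"example plaintext"[:16], dtype=np.uint8) ^ ks
```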

  15. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
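
    A minimal sketch of the KECA ranking step described above: kernel eigenpairs are scored by their contribution λ_i (e_iᵀ1)² to a Rényi entropy estimate and sorted by that score rather than by eigenvalue alone. OKECA's extra ICA-style rotation and gradient-ascent optimization are not shown, and the Gaussian kernel width is an assumed input.

```python
import numpy as np

def keca_components(X, sigma, k):
    """Return training data projected onto the k most entropy-preserving components."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))           # Gaussian kernel matrix
    lam, E = np.linalg.eigh(K)                   # eigenvalues in ascending order
    ones = np.ones(len(X))
    score = lam * (E.T @ ones) ** 2              # entropy contribution per eigenpair
    idx = np.argsort(score)[::-1][:k]            # rank by entropy, not by variance
    return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

X = np.random.default_rng(1).normal(size=(100, 3))
Z = keca_components(X, sigma=1.0, k=2)
print(Z.shape)  # (100, 2)
```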

  16. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  17. Principle component analysis for radiotracer signal separation.

    PubMed

    Kasban, H; Arafa, H; Elaraby, S M S

    2016-06-01

    Radiotracers can be used in several industrial applications by injecting the radiotracer into the industrial system and monitoring the radiation using radiation detectors for obtaining signals. These signals are analyzed to obtain indications about what is happening within the system or to determine the problems that may be present in the system. For multi-phase system analysis, more than one radiotracer is used and the result is a mixture of radiotracer signals. The problem in such cases is how to separate these signals from each other. The paper presents a proposed method based on Principal Component Analysis (PCA) for separating two mixed radiotracer signals from each other. Two different radiotracers (Technetium-99m (Tc(99m)) and Barium-137m (Ba(137m))) were injected into a physical model for simulation of a chemical reactor (PMSCR-MK2), and the radiotracer signals were obtained using radiation detectors and a Data Acquisition System (DAS). The radiotracer signals are mixed and signal processing steps are performed, including background correction and signal de-noising, before applying the signal separation algorithms. Three separation algorithms have been carried out: a time-domain-based separation algorithm, an Independent Component Analysis (ICA) based separation algorithm, and a Principal Component Analysis (PCA) based separation algorithm. The results proved the superiority of the PCA-based separation algorithm over the other separation algorithms, and the PCA-based separation algorithm combined with the signal processing steps gives a considerable improvement of the separation process.
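
    A toy sketch of the PCA step on synthetic data (stand-in decay curves, not real detector signals): two mixtures are centered, the covariance is eigendecomposed, and the signals are projected onto the principal directions. PCA only decorrelates, so on real data it would follow the background-correction and de-noising steps described above.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
s1 = np.exp(-t / 3.0)                        # stand-in for one tracer response
s2 = np.exp(-t / 0.8) * np.sin(2 * t) ** 2   # stand-in for the other tracer
S = np.vstack([s1, s2])

Amix = np.array([[0.7, 0.3],                 # assumed detector mixing matrix
                 [0.4, 0.6]])
X = Amix @ S + 0.01 * rng.normal(size=(2, t.size))  # two mixed detector signals

Xc = X - X.mean(axis=1, keepdims=True)       # center each mixed signal
cov = Xc @ Xc.T / t.size
vals, vecs = np.linalg.eigh(cov)             # principal directions of the mixture
Y = vecs.T @ Xc                              # decorrelated component estimates
```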

  18. Disruptive Innovation in Numerical Hydrodynamics

    SciTech Connect

    Waltz, Jacob I.

    2012-09-06

    We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.

  19. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    DTIC Science & Technology

    2015-09-30

    Development of Improved Algorithms and Multiscale ... a wide range of scales through use of accurate numerical methods and high-performance computational algorithms. The tool will be applied to study ... dissipation. OBJECTIVES: The primary objective is to enhance the capabilities of the SUNTANS model through development of algorithms to study

  20. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran implementation of the synchronization algorithm used in the simulation is also included.

  1. Approximate learning algorithm in Boltzmann machines.

    PubMed

    Yasuda, Muneki; Tanaka, Kazuyuki

    2009-11-01

    Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is one of the NP-hard problems, so in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
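
    For orientation, the simplest member of the mean-field family this letter builds on is the naive mean-field fixed point m_i = tanh(Σ_j W_ij m_j + b_i) for ±1 units; the sketch below iterates it with damping. The belief-propagation and linear-response refinements the authors actually use are not shown, and the random couplings are illustrative.

```python
import numpy as np

def mean_field(W, b, n_iter=100, damping=0.5):
    """Damped fixed-point iteration of m_i = tanh(sum_j W_ij m_j + b_i)."""
    m = np.zeros(len(b))
    for _ in range(n_iter):
        m_new = np.tanh(W @ m + b)
        m = damping * m + (1 - damping) * m_new   # damping stabilizes the update
    return m

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.3, size=(n, n))
W = (W + W.T) / 2            # symmetric couplings, as in a Boltzmann machine
np.fill_diagonal(W, 0.0)     # no self-couplings
b = rng.normal(scale=0.1, size=n)
print(mean_field(W, b))      # approximate magnetizations <s_i>
```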

  2. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  3. Quantitative interpretation of mineral hyperspectral images based on principal component analysis and independent component analysis methods.

    PubMed

    Jiang, Xiping; Jiang, Yu; Wu, Fang; Wu, Fenghuang

    2014-01-01

    Interpretation of mineral hyperspectral images provides large amounts of high-dimensional data, which is often complicated by mixed pixels. The quantitative interpretation of hyperspectral images is known to be extremely difficult when three types of information are unknown, namely, the number of pure pixels, the spectrum of pure pixels, and the mixing matrix. The problem is made even more complex by the disturbance of noise. The key to extracting mineral component information, i.e., pixel unmixing and abundance inversion, is how to effectively reduce noise, dimensionality, and redundancy. A three-step procedure is developed in this study for quantitative interpretation of hyperspectral images. First, the principal component analysis (PCA) method can be used to process the pixel spectrum matrix and keep characteristic vectors with larger eigenvalues. This can effectively reduce the noise and redundancy, which facilitates the extraction of major component information. Second, the independent component analysis (ICA) method can be used to identify and unmix the pixels based on the linear mixing model. Third, the pure-pixel spectra can be normalized for abundance inversion, which gives the abundance of each pure pixel. In numerical experiments, both simulation data and actual data were used to demonstrate the performance of our three-step procedure. Under simulation data, the results of our procedure were compared with theoretical values. Under the actual data measured from core hyperspectral images, the results obtained through our algorithm are compared with those of similar software (Mineral Spectral Analysis 1.0, Nanjing Institute of Geology and Mineral Resources). The comparisons show that our method is effective and can provide reference for quantitative interpretation of hyperspectral images.
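
    A compressed sketch of the three-step idea on synthetic pixel spectra, using scikit-learn's PCA and FastICA. The band count, endmembers, and noise level are invented, and the paper's abundance-inversion step is reduced to a crude normalization.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 40
endmembers = rng.uniform(0, 1, size=(3, n_bands))    # three assumed pure spectra
abund = rng.dirichlet(np.ones(3), size=n_pixels)     # mixing coefficients per pixel
X = abund @ endmembers + 0.01 * rng.normal(size=(n_pixels, n_bands))

# Step 1: PCA keeps the high-eigenvalue directions (noise/dimension reduction).
X_denoised = PCA(n_components=3).fit_transform(X)

# Step 2: ICA unmixes the reduced data under a linear mixing model.
S = FastICA(n_components=3, random_state=0).fit_transform(X_denoised)

# Step 3 (crude stand-in for abundance inversion): normalize per pixel.
abund_est = np.abs(S) / np.abs(S).sum(axis=1, keepdims=True)
```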

  4. Numerical pole assignment by eigenvalue Jacobian inversion

    NASA Technical Reports Server (NTRS)

    Sevaston, George E.

    1986-01-01

    A numerical procedure for solving the linear pole placement problem is developed which operates by the inversion of an analytically determined eigenvalue Jacobian matrix. Attention is given to convergence characteristics and pathological situations. It is concluded that the algorithm developed is suitable for computer-aided control system design, with particular reference to the scan platform pointing control system for the Galileo spacecraft.

  5. Numerical Methods for Initial Value Problems.

    DTIC Science & Technology

    1980-07-01

    of general multistep methods for ordinary differential equations and to implement an efficient algorithm for the solution of stiff equations. Still ... [contents fragments: I. integral equations; II. roundoff error for variants of Gaussian elimination; III. multistep methods for ordinary differential equations; IV. multi-grid methods; numerical solution of ordinary differential equations, including equivalent forms of multistep methods]

  6. Complete solutions of zoom curves of three-component zoom lenses with the second component fixed.

    PubMed

    Chen, Chaohsien

    2014-10-10

    Purely algebraic algorithms are presented for solving the zoom curves of a three-component zoom lens of which the second component is fixed on zooming. Two separated algorithms for infinite and finite conjugate imaging conditions are provided. For the infinite-conjugate condition, the transverse magnifications of the second and third components are solved to match the required system focal length, resulting in solving a quadratic equation. For the finite-conjugate condition, three nonlinear simultaneous equations regarding the system magnification, the object-to-image thickness, and the position of the second component are combined into a fourth-order polynomial equation. The roots can all be directly obtained by simple algebraic calculations. As a result, the proposed algebraic algorithms provide a more efficient and complete method than do earlier algorithms adopting scanning procedures.

  7. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  8. Static conductivity imaging using variational gradient Bz algorithm in magnetic resonance electrical impedance tomography.

    PubMed

    Park, Chunjae; Park, Eun-Jae; Woo, Eung Je; Kwon, Ohin; Seo, Jin Keun

    2004-02-01

    A new image reconstruction algorithm is proposed to visualize static conductivity images of a subject in magnetic resonance electrical impedance tomography (MREIT). Injecting electrical current into the subject through surface electrodes, we can measure the induced internal magnetic flux density B = (Bx, By, Bz) using an MRI scanner. In this paper, we assume that only the z-component Bz is measurable due to a practical limitation of the measurement technique in MREIT. Under this circumstance, a constructive MREIT imaging technique called the harmonic Bz algorithm was recently developed to produce high-resolution conductivity images. The algorithm is based on the relation between ∇²Bz and the conductivity, and requires the computation of ∇²Bz. Since differentiating noisy Bz data twice tends to amplify the noise, the performance of the harmonic Bz algorithm deteriorates when the signal-to-noise ratio in measured Bz data is not high enough. Therefore, it is highly desirable to develop a new algorithm reducing the number of differentiations. In this work, we propose the variational gradient Bz algorithm, where Bz is differentiated only once. Numerical simulations with added random noise confirmed its ability to reconstruct static conductivity images in MREIT. We also found that it outperforms the harmonic Bz algorithm in terms of noise tolerance. From a careful analysis of the performance of the variational gradient Bz algorithm, we suggest several methods to further improve the image quality, including a better choice of basis functions, regularization techniques and a multilevel approach. The proposed variational framework utilizing only Bz will lead to different versions of improved algorithms.

  9. Direct Dynamics Simulations using Hessian-based Predictor-corrector Integration Algorithms

    SciTech Connect

    Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L

    2007-01-29

    The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. In previous research (J. Chem. Phys. 111, 3800 (1999)) a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H₂CO→H₂+CO, O₃+C₃H₆, and F-+CH₃OOH chemical reactions.
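
    For reference, the baseline the modified algorithm is compared against can be sketched in a few lines; force here is a placeholder for an electronic-structure gradient call, and the harmonic test case is purely illustrative.

```python
import numpy as np

def velocity_verlet(x, v, force, dt, mass=1.0, n_steps=1000):
    """Standard velocity-Verlet integration: one force evaluation per step."""
    f = force(x)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * f / mass   # half-step velocity update
        x = x + dt * v_half                # full-step position update
        f = force(x)                       # force at the new position
        v = v_half + 0.5 * dt * f / mass   # complete the velocity update
    return x, v

# Harmonic-oscillator check: velocity-Verlet conserves energy well over long runs.
x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       force=lambda x: -x, dt=0.01, n_steps=10000)
print(x, v)
```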

  10. ANALYSIS OF A NUMERICAL SOLVER FOR RADIATIVE TRANSPORT EQUATION.

    PubMed

    Gao, Hao; Zhao, Hongkai

    2013-01-01

    We analyze a numerical algorithm for solving radiative transport equation with vacuum or reflection boundary condition that was proposed in [4] with angular discretization by finite element method and spatial discretization by discontinuous Galerkin or finite difference method.

  11. Incompressible viscous flow computations for the pump components and the artificial heart

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin

    1992-01-01

    A finite difference, three dimensional incompressible Navier-Stokes formulation to calculate the flow through turbopump components is utilized. The solution method is based on the pseudo compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Both steady and unsteady flow calculations can be performed using the current algorithm. Here, equations are solved in steadily rotating reference frames by using the steady state formulation in order to simulate the flow through a turbopump inducer. Eddy viscosity is computed by using an algebraic mixing-length turbulence model. Numerical results are compared with experimental measurements and a good agreement is found between the two.

  12. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences from other techniques used in systems health management and philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, the implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.

  13. Numerical recipes for mold filling simulation

    SciTech Connect

    Kothe, D.; Juric, D.; Lam, K.; Lally, B.

    1998-07-01

    Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper; hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.

  14. A fast algorithm for image defogging

    NASA Astrophysics Data System (ADS)

    Wang, Xingyu; Guo, Shuai; Wang, Hui; Su, Haibing

    2016-09-01

    To address the low visibility and contrast of foggy images, we propose a single-image defogging algorithm. Firstly, convert the foggy image from RGB space to HSI and divide it into a plurality of blocks. Secondly, select the maximum point of the S component of each block and correct it, keeping the H component constant and adjusting the I component, so that the fog component can be estimated through bilinear interpolation. Most importantly, the algorithm deals with the sky region separately. Finally, subtract the fog component from the RGB values of all pixels in the blocks and adjust the brightness, so we can obtain the defogged image. Compared with other algorithms, its efficiency is improved greatly and image clarity is enhanced. At the same time, the scene is not limited and the scope of application is wide.

  15. Numerical simulation of free surface incompressible liquid flows surrounded by compressible gas

    NASA Astrophysics Data System (ADS)

    Caboussat, A.; Picasso, M.; Rappaz, J.

    2005-03-01

    A numerical model for the three-dimensional simulation of liquid-gas flows with free surfaces is presented. The incompressible Navier-Stokes equations are assumed to hold in the liquid domain. In the gas domain, the velocity is disregarded, the pressure is supposed to be constant in each connected component of the gas domain and follows the ideal gas law. The gas pressure is imposed as a normal force on the liquid-gas interface. An implicit splitting scheme is used to decouple the physical phenomena. Given the gas pressure on the interface, the method described in [J. Comput Phys. 155 (1999) 439; Int. J. Numer. Meth. Fluids 42(7) (2003) 697] is used to track the liquid domain and to compute the velocity and pressure fields in the liquid. Then the connected components of the gas domain are found using an original numbering algorithm. Finally, the gas pressure is updated from the ideal gas law in each connected component of gas. The implementation is validated in the frame of mould filling. Numerical results in two and three space dimensions show that the effect of pressure in the bubbles of gas trapped by the liquid cannot be neglected.
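
    The numbering step can be pictured as a standard flood-fill labeling of the gas cells of the fixed grid, as in the sketch below (4-connectivity and the toy mask are assumptions; the paper's own algorithm may differ):

```python
from collections import deque
import numpy as np

def label_components(gas):
    """Label connected True regions of a 2-D boolean grid with a BFS flood fill."""
    labels = np.zeros(gas.shape, dtype=int)
    current = 0
    for i in range(gas.shape[0]):
        for j in range(gas.shape[1]):
            if gas[i, j] and labels[i, j] == 0:
                current += 1                     # start a new component
                q = deque([(i, j)])
                labels[i, j] = current
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < gas.shape[0] and 0 <= nb < gas.shape[1]
                                and gas[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = current
                            q.append((na, nb))
    return labels, current

gas = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 1]], dtype=bool)
labels, n = label_components(gas)  # each of the n bubbles gets its own pressure
```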

  16. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied by using clustering validation indexes. In this paper, the recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, fcm, som networks) have been used to cluster images and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster-centers than classical clustering techniques, but their convergence time is quite long.

  17. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  18. Numerical estimation of densities

    NASA Astrophysics Data System (ADS)

    Ascasibar, Y.; Binney, J.

    2005-01-01

    We present a novel technique, dubbed FIESTAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FIESTAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor of ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. The density range over which the estimates are unbiased widens as the particle number increases. Our tests show that real-space densities obtained with an SPH kernel are significantly more biased than those yielded by FIESTAS. In phase space, about 10 times more particles are required in order to achieve a similar accuracy. As a second application we have estimated phase-space densities in a dark matter halo from a cosmological simulation. We confirm the results of Arad, Dekel & Klypin that the highest values of f are all associated with substructure rather than the main halo, and that the volume function v(f) ~ f^(-2.5) over about four orders of magnitude in f. We show that a modified version of the toy model proposed by Arad et al. explains this result and suggests that the departures of v(f) from power-law form are not mere numerical artefacts. We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FIESTAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in six dimensions.

  19. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined; a sketch of the split mechanic follows below. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
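
    A rough sketch of the split mechanic as described (the threshold schedule and code assignment are simplified; this is not the authors' implementation):

```python
import numpy as np

def split_on_demand(points, lo, hi, threshold, rng):
    """Recursively split [lo, hi) at a random point while it holds > threshold points."""
    inside = points[(points >= lo) & (points < hi)]
    if len(inside) <= threshold:
        return [(lo, hi)] if len(inside) > 0 else []   # drop empty intervals
    cut = rng.uniform(lo, hi)                          # random split location
    return (split_on_demand(points, lo, cut, threshold, rng)
            + split_on_demand(points, cut, hi, threshold, rng))

rng = np.random.default_rng(0)
pts = rng.normal(0.5, 0.15, size=200).clip(0, 0.999)
intervals = split_on_demand(pts, 0.0, 1.0, threshold=25, rng=rng)
codes = {iv: k for k, iv in enumerate(intervals)}  # integer code per leaf interval
```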

  20. Numerical treatment of shocks in unsteady potential flow computation

    NASA Astrophysics Data System (ADS)

    Schippers, H.

    1985-04-01

    For moving shocks in unsteady transonic potential flow, an implicit fully-conservative finite-difference algorithm is presented. It is based on time-linearization and mass-flux splitting. For the one-dimensional problem of a traveling shock-wave, this algorithm is compared with the method of Goorjian and Shankar. The algorithm was implemented in the computer program TULIPS for the computation of transonic unsteady flow about airfoils. Numerical results for a pitching ONERA M6 airfoil are presented.

  1. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  2. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  3. An efficient algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.M.

    1995-09-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In this paper a well known rapidly convergent iterative approach is modified to eliminate intervening trigonometric function evaluations. A total error metric is defined that accounts for both angular and altitude errors. The initial guess is optimized to minimize the error for one iteration. The resulting algorithm yields transformations correct to one centimeter for altitudes out to one million kilometers. Due to the rapid convergence only one iteration is used and no stopping test is needed. This algorithm is discussed in the context of machines that have FPUs and legacy machines that utilize mathematical subroutine packages.
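
    For context, the classic single-pass Bowring approximation, a common starting point for rapid-convergence geodetic iterations of this kind, looks as follows on the WGS-84 ellipsoid. This is not the paper's optimized algorithm, just the well-known baseline.

```python
import numpy as np

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis [m]
F = 1 / 298.257223563         # flattening
B = A * (1 - F)               # semi-minor axis
E2 = F * (2 - F)              # first eccentricity squared
EP2 = E2 / (1 - E2)           # second eccentricity squared

def geocentric_to_geodetic(x, y, z):
    """Bowring's single-pass approximation: (X, Y, Z) -> (lat, lon, h)."""
    p = np.hypot(x, y)                        # distance from the rotation axis
    theta = np.arctan2(z * A, p * B)          # parametric (reduced) latitude guess
    lat = np.arctan2(z + EP2 * B * np.sin(theta) ** 3,
                     p - E2 * A * np.cos(theta) ** 3)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)  # prime-vertical radius of curvature
    h = p / np.cos(lat) - n
    return np.degrees(lat), np.degrees(np.arctan2(y, x)), h

print(geocentric_to_geodetic(1113194.0, 0.0, 6259542.0))
```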

  4. A numerical technique for the calculation of cloud optical extinction from lidar

    NASA Technical Reports Server (NTRS)

    Alvarez, J. M.; Vaughan, M. A.

    1993-01-01

    A simple numerical algorithm which calculates optical extinction from cloud lidar data is presented. The method assumes a two-component atmosphere consisting of 'clear air' and cloud particulates. 'Clear air' may consist of either molecules only or a mix of molecules and atmospheric aerosols. For certain clouds, the method may be utilized to provide an estimate of the cloud-atmospheric parameter, defined as the ratio of the cloud volume backscatter coefficient to the cloud extinction coefficient, divided by the atmospheric volume backscatter coefficient at a given altitude. The cloud-atmospheric parameter may be estimated only from cloud data for which the optical thickness may reliably be used as a constraint on the numerical solution. This constraint provides the additional information necessary to obtain the cloud-atmospheric parameter. Conversely, the method may be applied to obtain cloud extinction and optical thickness from lidar cloud soundings if an estimate of the cloud-atmospheric parameter is available.

  5. Numerical Simulation of a Solar Domestic Hot Water System

    NASA Astrophysics Data System (ADS)

    Mongibello, L.; Bianco, N.; Di Somma, M.; Graditi, G.; Naso, V.

    2014-11-01

    An innovative transient numerical model is presented for the simulation of a solar Domestic Hot Water (DHW) system. The solar collectors have been simulated by using a zerodimensional analytical model. The temperature distributions in the heat transfer fluid and in the water inside the tank have been evaluated by one-dimensional models. The reversion elimination algorithm has been used to include the effects of natural convection among the water layers at different heights in the tank on the thermal stratification. A finite difference implicit scheme has been implemented to solve the energy conservation equation in the coil heat exchanger, and the energy conservation equation in the tank has been solved by using the finite difference Euler implicit scheme. Energy conservation equations for the solar DHW components models have been coupled by means of a home-made implicit algorithm. Results of the simulation performed using as input data the experimental values of the ambient temperature and the solar irradiance in a summer day are presented and discussed.

  6. Extremal polynomials and methods of optimization of numerical algorithms

    SciTech Connect

    Lebedev, V I

    2004-10-31

    Chebyshev-Markov-Bernstein-Szegő polynomials C_n(x), extremal on [-1,1] with weight functions w(x) = (1+x)^α (1−x)^β / √(S_l(x)), where α, β = 0, 1/2 and S_l(x) = ∏_{k=1}^{m} (1 − c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of weighted interpolation, and explicit quadrature formulae of Gauss, Markov, Lobatto, and Radau types, are obtained for integrals with weight p(x) = w²(x)(1−x²)^(−1/2). The parameters of optimal Chebyshev iterative methods that reduce the error optimally in comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method, iteration parameters are determined that take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods for the solution of equations containing compact operators are studied.

  7. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    DTIC Science & Technology

    2009-01-30

    couple these elements using finite-volume-like surface Riemann solvers. This hybrid, dual-layer design allows DGTD to combine advantages from both of its parent discretizations. (The remainder of this record is fragmentary citation text on high-order absorbing boundary conditions for electromagnetic and time-dependent waves.)

  8. Novel Numerical Algorithms for Sensing, Discrimination, and Control

    DTIC Science & Technology

    1990-03-09

    throughput specifications. At the front end of the imaging data stream, individual pixel-level processing for filtering, contrasting, and contouring requires 100 MHz clock rates for 32-bit data arriving at 10-40 MHz frame rates. Image processing at the back end looks for patterns or "global" parameters. Observations suggest, however, that a CZT requires 2N log N operations whereas the FFT requires only N log N operations; hence, the CZT is roughly twice as costly.

  9. An algorithm for solving the fractional convection diffusion equation with nonlinear source term

    NASA Astrophysics Data System (ADS)

    Momani, Shaher

    2007-10-01

    In this paper an algorithm based on Adomian's decomposition method is developed to approximate the solution of the nonlinear fractional convection-diffusion equation ∂^α u/∂t^α = ∂²u/∂x² − c ∂u/∂x + Ψ(u) + f(x,t), with 0 < α ≤ 1 and t > 0. The fractional derivative is considered in the Caputo sense. The approximate solutions are calculated in the form of a convergent series with easily computable components. The analysis is accompanied by numerical examples, and the obtained results are found to be in good agreement with the exact solutions known for some special cases.

  10. Numerical methods for problems involving the Drazin inverse

    NASA Technical Reports Server (NTRS)

    Meyer, C. D., Jr.

    1979-01-01

    The objective was to try to develop a useful numerical algorithm for the Drazin inverse and to analyze the numerical aspects of the applications of the Drazin inverse relating to the study of homogeneous Markov chains and systems of linear differential equations with singular coefficient matrices. It is felt that all objectives were accomplished with a measurable degree of success.
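
    The report does not spell out the algorithm, but one textbook route to the Drazin inverse, usable as a numerical baseline, is the identity A^D = A^k (A^(2k+1))^+ A^k for any k at least the index of A; taking k = n is always a safe bound. The sketch below is an illustration only, since explicit matrix powers become ill-conditioned for larger n.

        import numpy as np

        def drazin_inverse(a, rcond=1e-12):
            """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k with k = n >= index(A)."""
            a = np.asarray(a, dtype=float)
            n = a.shape[0]
            ak = np.linalg.matrix_power(a, n)                  # A^n
            mid = np.linalg.pinv(np.linalg.matrix_power(a, 2 * n + 1), rcond=rcond)
            return ak @ mid @ ak

        # Sanity checks: for invertible A this reduces to inv(A);
        # for nilpotent A it returns the zero matrix.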

  11. Algorithm for in-flight gyroscope calibration

    NASA Technical Reports Server (NTRS)

    Davenport, P. B.; Welter, G. L.

    1988-01-01

    An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.

  12. Jamming cancellation algorithm for wideband imaging radar

    NASA Astrophysics Data System (ADS)

    Zheng, Yibin; Yu, Kai-Bor

    1998-10-01

    We describe a jamming cancellation algorithm for wide-band imaging radar. After a review of the high-range-resolution imaging principle, several key factors affecting jamming cancellation performance, such as the 'instantaneous narrow-band' assumption, bandwidth, and de-chirped interference, are formulated and analyzed. Numerical simulation results, using a hypothetical phased array radar and synthetic point targets, are presented and demonstrate the effectiveness of the proposed algorithm.

  13. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  14. Data driven components in a model of inner shelf sorted bedforms: a new hybrid model

    NASA Astrophysics Data System (ADS)

    Goldstein, E. B.; Coco, G.; Murray, A. B.; Green, M. O.

    2013-10-01

    Numerical models rely on the parameterization of processes that often lack a deterministic description. In this contribution we demonstrate the applicability of machine learning, a set of optimization tools from the discipline of computer science, for developing parameterizations when extensive data sets exist. We develop a new predictor for near-bed suspended sediment reference concentration under unbroken waves using genetic programming, a machine learning technique. This newly developed parameterization performs better than existing empirical predictors. We add this new predictor to an established model for inner shelf sorted bedforms, and we additionally incorporate a previously reported machine-learning-derived predictor for oscillatory flow ripples into the sorted bedform model. This new "hybrid" sorted bedform model, in which machine learning components are integrated into a numerical model, demonstrates a method of incorporating observational data (filtered through a machine learning algorithm) directly into a numerical model. Results suggest that the new hybrid model is able to capture dynamics previously absent from the model, specifically the two observed pattern modes of sorted bedforms. However, caveats exist when data-driven components do not have parity with the traditional theoretical components of morphodynamic models, and we discuss the challenges of integrating these disparate pieces and the future of this type of modeling.

  15. Mathematical and computer modeling of component surface shaping

    NASA Astrophysics Data System (ADS)

    Lyashkov, A.

    2016-04-01

    The process of shaping technical surfaces is an interaction of a tool (a shaping element) and a component (a formed element or workpiece) in their relative movements. The main objects of formation are established to be: 1) the discriminant of the family of surfaces formed by the movement of the shaping element relative to the workpiece; 2) an enveloping model of the real component surface obtained after machining, including transition curves and undercut lines; 3) a model of the layers cut off in the process of shaping. Many issues in modeling these objects remain insufficiently solved or unsolved, and together they make up a single scientific problem: the qualitative shaping of the tool surface and, subsequently, of the component surface produced by this tool. Improving known metal-cutting tools and the intensive development of systems for their computer-aided design require further improvement of the methods for shaping the mating surfaces. In this regard, an important role is played by the study of shaping processes for technical surfaces that exploits the strengths of analytical and numerical mathematical methods together with mathematical and computer modeling. The author poses and solves the problem of developing the mathematical, geometric, and algorithmic support for the computer-aided design of cutting tools, based on computer simulation of the surface-shaping process.

  16. Numerical simulation of steady supersonic flow. [spatial marching

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Steger, J. L.

    1981-01-01

    A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.

  17. Driven one-component plasmas

    SciTech Connect

    Rizzato, Felipe B.; Pakter, Renato; Levin, Yan

    2009-08-15

    A statistical theory is presented that allows the calculation of the stationary state achieved by a driven one-component plasma after a process of collisionless relaxation. The stationary Vlasov equation with appropriate boundary conditions is reduced to an ordinary differential equation, which is then solved numerically. The solution is compared with molecular-dynamics simulation, and perfect agreement is found between the theory and the simulations. The full current-voltage phase diagram is constructed.

  18. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles, and underwater vehicles, the cabins of which can be viewed as variable-section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors, and motors), making the noise spectrum multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed, based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable-section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. These studies show that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; and (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can easily be extended to a physical model platform.

  19. New SIMD Algorithms for Cluster Labeling on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Coddington, Paul; Marinari, Enzo

    Cluster algorithms are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models of magnets. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two-dimensional Ising spin model. These algorithms could also be applied to other problems which use connected component labeling, such as percolation and image analysis.
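
    The SIMD versions are specific to the Connection Machine, but the serial core they parallelize is the classical two-pass labeling with an equivalence structure. A minimal sketch, here with an array-based union-find and 4-connectivity (the 8-connected and SIMD variants follow the same pattern; names are illustrative):

        import numpy as np

        def label_components(img):
            """Two-pass 4-connected labeling of a binary image with union-find."""
            parent = [0]                          # parent[i] of label i; 0 = background
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i
            labels = np.zeros(img.shape, dtype=int)
            for r in range(img.shape[0]):
                for c in range(img.shape[1]):
                    if not img[r, c]:
                        continue
                    up = labels[r - 1, c] if r else 0
                    left = labels[r, c - 1] if c else 0
                    if up and left:
                        a, b = find(up), find(left)
                        labels[r, c] = min(a, b)
                        parent[max(a, b)] = min(a, b)   # record equivalence
                    elif up or left:
                        labels[r, c] = up or left
                    else:
                        parent.append(len(parent))      # new provisional label
                        labels[r, c] = len(parent) - 1
            # Second pass: replace provisional labels by their equivalence roots
            for r in range(img.shape[0]):
                for c in range(img.shape[1]):
                    if labels[r, c]:
                        labels[r, c] = find(labels[r, c])
            return labels

    Renumbering the surviving roots to consecutive final labels is omitted for brevity.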

  20. Clustering of Hadronic Showers with a Structural Algorithm

    SciTech Connect

    Charles, M.J.; /SLAC

    2005-12-13

    The internal structure of hadronic showers can be resolved in a high-granularity calorimeter. This structure is described in terms of simple components and an algorithm for reconstruction of hadronic clusters using these components is presented. Results from applying this algorithm to simulated hadronic Z-pole events in the SiD concept are discussed.

  1. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the food chain algorithm outperforms the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  2. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper, based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess the following properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate them; the results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second is effective for solving large-scale nonlinear equations.
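
    The paper's two modified PRP formulas are not reproduced in the abstract. As a point of reference, the sketch below implements the classical PRP+ safeguard β_k = max(0, PRP), which enforces property (1); it is a stand-in rather than the authors' exact method, and it uses a plain Armijo backtracking line search.

        import numpy as np

        def prp_plus(f, grad, x0, iters=200, tol=1e-8, rho=0.5, c1=1e-4):
            """PRP+ nonlinear conjugate gradient with Armijo backtracking."""
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(iters):
                if np.linalg.norm(g) < tol:
                    break
                if g.dot(d) >= 0:             # safeguard: restart with steepest descent
                    d = -g
                a = 1.0                       # backtracking Armijo line search
                while f(x + a * d) > f(x) + c1 * a * g.dot(d):
                    a *= rho
                x_new = x + a * d
                g_new = grad(x_new)
                beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # PRP+, beta_k >= 0
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x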

  4. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  5. An intelligent multi-restart memetic algorithm for box constrained global optimisation.

    PubMed

    Sun, J; Garibaldi, J M; Krasnogor, N; Zhang, Q

    2013-01-01

    In this paper, we propose a multi-restart memetic algorithm framework for box-constrained global continuous optimisation. In this framework, an evolutionary algorithm (EA) and a local optimizer are employed as separate building blocks. The EA explores the search space for very promising solutions (e.g., solutions in the attraction basin of the global optimum), using its exploration capability and the previous EA search history, and local search improves these promising solutions to local optima. An estimation of distribution algorithm (EDA) combined with a derivative-free local optimizer, NEWUOA (M. Powell, Developments of NEWUOA for minimization without derivatives, IMA Journal of Numerical Analysis, 28:649-664, 2008), is developed within this framework and empirically compared with several well-known EAs on a set of 40 commonly used test functions. The main components of the specific algorithm are: (1) an adaptive multivariate probability model, (2) a multiple sampling strategy, (3) decoupling of the hybridisation strategy, and (4) a restart mechanism. The adaptive multivariate probability model and the multiple sampling strategy are designed to enhance the exploration capability, while the restart mechanism attempts to make the search escape from local optima by resorting to the previous search history. Comparison results show that the algorithm is comparable with the best known EAs, including the winner of the 2005 IEEE Congress on Evolutionary Computation (CEC2005), and significantly better than the others in terms of both solution quality and computational cost.

  6. Numerical approaches to simulation of multi-core fibers

    NASA Astrophysics Data System (ADS)

    Chekhovskoy, I. S.; Paasonen, V. I.; Shtyrina, O. V.; Fedoruk, M. P.

    2017-04-01

    We propose generalizations of two numerical algorithms for solving the system of linearly coupled nonlinear Schrödinger equations (NLSEs) that describes the propagation of light pulses in multi-core optical fibers. The first numerical method is an iterative compact dissipative scheme, second-order accurate in space and fourth-order accurate in time, whose strong stability is due to the inclusion of an additional dissipative term. The second algorithm is a generalization of the split-step Fourier method based on a Padé approximation of the matrix exponential. We compare the computational efficiency of both algorithms and show that the compact scheme is more efficient for solving a large system of coupled NLSEs. We also present a parallel implementation of the numerical algorithms for shared-memory systems using OpenMP.
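
    For reference, the split-step idea in its simplest single-core form, symmetric Strang splitting for the dimensionless NLSE i u_z + u_tt/2 + |u|²u = 0, looks as follows; the multi-core generalization in the paper replaces the scalar dispersion factor by a Padé-approximated matrix exponential that couples the cores. Grid and naming choices are assumptions.

        import numpy as np

        def ssfm(u0, dz, nz, t_span):
            """Symmetric split-step Fourier for i u_z + (1/2) u_tt + |u|^2 u = 0."""
            n = len(u0)
            w = 2.0 * np.pi * np.fft.fftfreq(n, d=t_span / n)   # angular frequencies
            half_disp = np.exp(-0.5j * w**2 * (dz / 2.0))       # exp(-i w^2 dz/4)
            u = u0.astype(complex)
            for _ in range(nz):
                u = np.fft.ifft(half_disp * np.fft.fft(u))      # half dispersion step
                u *= np.exp(1j * np.abs(u)**2 * dz)             # full nonlinear step
                u = np.fft.ifft(half_disp * np.fft.fft(u))      # half dispersion step
            return u

    A quick check is the fundamental soliton u0 = sech(t), whose amplitude profile should be preserved by the propagation.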

  7. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated in terms of the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economy requirements are defined: the final numerical flow simulation results of interest should have a guaranteed accuracy and be produced at an acceptable FLOP price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence under mesh refinement must be done by numerical experimentation, because theory gives only partial answers; this requires careful design of test cases for numerical experimentation. Finally, the results of a few recent exercises evaluating numerical experiments with a large number of codes on a few test cases are summarized.

  8. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations that describes the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.

  9. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order ordinary differential equations. In comparison with previous integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby retaining stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  10. Monte Carlo algorithms for lattice gauge theory

    SciTech Connect

    Creutz, M.

    1987-05-01

    Various techniques are reviewed which have been used in numerical simulations of lattice gauge theories. After formulating the problem, the Metropolis et al. algorithm and some interesting variations are discussed. The numerous proposed schemes for including fermionic fields in the simulations are summarized. Langevin, microcanonical, and hybrid approaches to simulating field theories via differential evolution in a fictitious time coordinate are treated. Some speculations are made on new approaches to fermionic simulations.

  11. Numerical integration using Wang Landau sampling

    NASA Astrophysics Data System (ADS)

    Li, Y. W.; Wüst, T.; Landau, D. P.; Lin, H. Q.

    2007-09-01

    We report a new application of Wang-Landau sampling to numerical integration that is straightforward to implement. It is applicable to a wide variety of integrals without restrictions and is readily generalized to higher-dimensional problems. The feasibility of the method results from reinterpreting the density of states in statistical physics as an appropriate measure for numerical integration. The properties of this algorithm as a new kind of Monte Carlo integration scheme are investigated with some simple integrals, and a potential application of the method is illustrated by the evaluation of integrals arising in the perturbation theory of quantum many-body systems.

  12. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  13. Numerical approach of the quantum circuit theory

    NASA Astrophysics Data System (ADS)

    Silva, J. J. B.; Duarte-Filho, G. C.; Almeida, F. A. G.

    2017-03-01

    In this paper we develop a numerical method based on quantum circuit theory to approach coherent electronic transport in a network of quantum dots connected with arbitrary topology. The algorithm is employed in a circuit formed by quantum dots connected to each other in the shape of a linear chain (associations in series) and of a ring (associations in series and in parallel). For both systems we compute two current observables: the conductance and the shot noise power. We find excellent agreement between our numerical results and those in the literature. Moreover, we analyze the algorithm's efficiency for a chain of quantum dots, where the mean processing time exhibits a linear dependence on the number of quantum dots in the array.

  14. Numerical Simulation of a Convective Turbulence Encounter

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.; Bowles, Roland L.

    2002-01-01

    A numerical simulation of a convective turbulence event is investigated and compared with observational data. The numerical results show severe turbulence of similar scale and intensity to that encountered during the test flight. This turbulence is associated with buoyant plumes that penetrate the upper-level thunderstorm outflow. The simulated radar reflectivity compares well with that obtained from the aircraft's onboard radar. Resolved scales of motion as small as 50 m are needed in order to accurately diagnose aircraft normal load accelerations. Given this requirement, realistic turbulence fields may be created by merging subgrid-scales of turbulence to a convective-cloud simulation. A hazard algorithm for use with model data sets is demonstrated. The algorithm diagnoses the RMS normal loads from second moments of the vertical velocity field and is independent of aircraft motion.

  15. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part, obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics; and (ii) a feedback part, designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing, with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  16. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability: initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part, obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics; and (ii) a feedback part, designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing, with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  17. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1989-01-01

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum-dynamic-pressure and maximum-aerodynamic-heating-rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transitions from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  18. [Study of the algorithm for inversion of low field nuclear magnetic resonance relaxation distribution].

    PubMed

    Chen, Shanshan; Wang, Hongzhi; Yang, Peiqiang; Zhang, Xuelong

    2014-06-01

    The properties of samples are difficult to infer directly from the signal collected by a low-field nuclear magnetic resonance (NMR) analyzer; the relationship between the relaxation time and the original signal amplitude of each relaxation component must be obtained by an inversion algorithm. Consequently, the technology of T2-spectrum inversion is crucial to the application of NMR data. This study optimizes the method of selecting the regularization factor and presents a regularization algorithm for the inversion of the low-field NMR relaxation distribution, based on the regularization theory of ill-posed inverse problems. Numerical simulation experiments in MATLAB 7.0 show that this method can effectively analyze and process NMR relaxation data.
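
    A minimal sketch of the kind of regularized T2 inversion discussed here: the ill-posed exponential kernel is stabilized by a Tikhonov term and solved under a nonnegativity constraint. The fixed regularization factor lam stands in for the paper's optimized selection method, and the names are illustrative.

        import numpy as np
        from scipy.optimize import nnls

        def invert_t2(t, signal, t2_grid, lam):
            """Regularized T2 inversion: min ||K f - s||^2 + lam^2 ||f||^2, f >= 0.

            K[i, j] = exp(-t_i / T2_j); the Tikhonov term is appended as extra
            rows so a single nonnegative least-squares solve handles both parts.
            """
            K = np.exp(-t[:, None] / t2_grid[None, :])
            m = len(t2_grid)
            A = np.vstack([K, lam * np.eye(m)])        # augmented system
            b = np.concatenate([signal, np.zeros(m)])
            f, _ = nnls(A, b)
            return f                                    # amplitude per T2 bin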

  19. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the method, we construct several such algorithms for the linear damped oscillator and for the single pendulum with linear dissipation. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  20. [A new algorithm for NIR modeling based on manifold learning].

    PubMed

    Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan

    2009-07-01

    Manifold learning is a new kind of algorithm originating from the field of machine learning that finds the intrinsic dimensionality of numerous and complex data and extracts the most important information from the raw data to develop a regression or classification model. The basic assumption of manifold learning is that high-dimensional data measured from the same object must reside on a manifold of much lower dimension determined by a few properties of the object. Since NIR spectra are characterized by high dimensionality and complicated band assignment, one may accordingly assume that the NIR spectra of the same kind of substance at different chemical concentrations reside on a manifold of much lower dimension determined by the concentrations. As one of the best-known manifold learning algorithms, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear, so every data point on the manifold should be a linear combination of its neighbors. Based on these assumptions, the present paper proposes a new algorithm named least-squares locally weighted regression (LS-LWR), a kind of LWR with weights determined by least squares instead of a predefined function. The NIR spectra of glucose solutions with various concentrations are then measured using a NIR spectrometer, and LS-LWR is verified by quantitatively predicting the concentrations of the glucose solutions. Compared with existing algorithms such as principal component regression (PCR) and partial least-squares regression (PLSR), LS-LWR has better predictability, as measured by the standard error of prediction (SEP), and generates an elegant model with good stability and efficiency.
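
    The locally linear step that both LLE and the proposed LS-LWR build on, computing a point's least-squares reconstruction weights from its nearest neighbors, can be sketched as follows; the regularizer and the function name are assumptions.

        import numpy as np

        def lle_weights(x, neighbors, reg=1e-3):
            """Locally linear reconstruction weights of point x from its neighbors.

            Solves min ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1 via the
            regularized local Gram system G w = 1 followed by normalization.
            """
            Z = neighbors - x                   # shift neighbors to the query point
            G = Z @ Z.T                         # local Gram matrix (k x k)
            G += reg * np.trace(G) * np.eye(len(G))   # conditioning regularizer
            w = np.linalg.solve(G, np.ones(len(G)))
            return w / w.sum()                  # enforce the sum-to-one constraint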

  1. Numerical simulation of in situ bioremediation

    SciTech Connect

    Travis, B.J.

    1998-12-31

    Models that couple subsurface flow and transport with microbial processes are an important tool for assessing the effectiveness of bioremediation in field applications. A numerical algorithm is described that differs from previous in situ bioremediation models in that it includes both vadose and groundwater zones, unsteady air and water flow, limited and airborne nutrients, toxicity, cometabolic kinetics, kinetic sorption, subgrid-scale averaging, pore clogging, and protozoan grazing.

  2. On numerical simulation of viscous flows

    NASA Astrophysics Data System (ADS)

    Ghia, K. N.; Ghia, U.

    Numerical simulation methods for viscous incompressible laminar flows are reviewed, with a focus on finite-difference schemes. The approaches to high/moderate-Reynolds-number flows (strong-viscous-interaction model or single sets of equations) and the factors affecting the versatility, reliability, and accuracy of the analysis algorithms are considered; approximate-factorization implicit solution techniques for low-Reynolds-number flows are discussed; and the procedures used in a number of specific problems are indicated.

  3. Numerical Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Oyafuso, Fabiano; Bowen, R. Chris; Boykin, Timothy

    2003-01-01

    Nanoelectronic Modeling 3-D (NEMO 3-D) is a computer program for numerical modeling of the electronic structure properties of a semiconductor device that is embodied in a crystal containing as many as 16 million atoms in an arbitrary configuration and that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. The system of electrons in the device is represented by a sparse Hamiltonian matrix that contains hundreds of millions of terms. NEMO 3-D solves the matrix equation on a Beowulf-class cluster computer, using a parallel matrix-vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. In a recent update of NEMO 3-D, a new strain treatment, parameterized for the bulk material properties of GaAs and InAs, was developed for two tight-binding submodels. The utility of NEMO 3-D was demonstrated in an atomistic analysis of the effects of disorder in alloys, in particular in bulk In(x)Ga(1-x)As and in In0.6Ga0.4As quantum dots.
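
    The production eigensolver in NEMO 3-D is parallel and far more elaborate, but the basic Lanczos recurrence it builds on fits in a few lines: it needs only a matrix-vector product, which is what makes sparse Hamiltonians with hundreds of millions of terms tractable. A minimal sketch without reorthogonalization (names and the random start vector are assumptions):

        import numpy as np

        def lanczos(matvec, n, m, rng=np.random.default_rng(0)):
            """m-step Lanczos: tridiagonal coefficients (alpha, beta) of a
            Hermitian operator given only through its action matvec(v)."""
            v = rng.standard_normal(n)
            v /= np.linalg.norm(v)
            v_prev = np.zeros(n)
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            b = 0.0
            for j in range(m):
                w = matvec(v) - b * v_prev       # three-term recurrence
                alpha[j] = v.dot(w)
                w -= alpha[j] * v
                if j < m - 1:
                    b = np.linalg.norm(w)        # breakdown if b == 0 (ignored here)
                    beta[j] = b
                    v_prev, v = v, w / b
            return alpha, beta

        # Ritz values approximating extreme eigenvalues:
        # np.linalg.eigvalsh(np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1))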

  4. Hybrid Experimental-Numerical Stress Analysis.

    DTIC Science & Technology

    1983-04-01

    components, biomechanics, and fracture mechanics. ELASTIC ANALYSIS OF STRUCTURAL COMPONENTS: the numerical techniques used in the modern hybrid technique for… measured [24] relations of probe force versus probe area under applanation tonometry. ELASTIC-PLASTIC FRACTURE MECHANICS: fracture parameters governing… models of the crack; the strain energy release rate and the stress intensity factor in linear elastic fracture mechanics, which is a well-established analog…

  5. Seislet-based morphological component analysis using scale-dependent exponential shrinkage

    NASA Astrophysics Data System (ADS)

    Yang, Pengliang; Fomel, Sergey

    2015-07-01

    Morphological component analysis (MCA) is a powerful image processing tool for separating different geometrical components (cartoons and textures, curves and points, etc.). MCA is based on the observation that many complex signals may not be sparsely represented using a single dictionary/transform but can have a sparse representation when several over-complete dictionaries/transforms are combined. In this paper we propose seislet-based MCA for seismic data processing, reformulating the MCA algorithm in the shaping-regularization framework. Successful seislet-based MCA depends on reliable slope estimation of seismic events, which is done by plane-wave destruction (PWD) filters. An exponential shrinkage operator, which unifies many existing thresholding operators, is adopted in scale-dependent shaping regularization to promote sparsity. Numerical examples demonstrate the superior performance of the proposed exponential shrinkage operator and the potential of seislet-based MCA in application to trace interpolation and multiple removal.
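
    The generic MCA iteration alternates shrinkage in each dictionary. A minimal sketch follows, with plain soft thresholding standing in for the paper's exponential shrinkage operator and with the transforms assumed to be (approximately) tight frames; all names are illustrative.

        import numpy as np

        def soft(x, t):
            """Soft-thresholding shrinkage (stand-in for exponential shrinkage)."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def mca(signal, transforms, inverses, n_iter=50, t0=1.0):
            """Generic multi-dictionary MCA by alternating shrinkage.

            transforms/inverses: paired forward and inverse operators (callables)
            for each dictionary; the threshold decreases linearly toward zero.
            """
            parts = [np.zeros_like(signal) for _ in transforms]
            for it in range(n_iter):
                t = t0 * (1.0 - it / n_iter)                # decreasing threshold
                for k, (fwd, inv) in enumerate(zip(transforms, inverses)):
                    resid = signal - sum(parts) + parts[k]  # data others cannot explain
                    parts[k] = inv(soft(fwd(resid), t))     # shrink in dictionary k
            return parts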

  6. Massively Parallel Algorithms for Solution of Schrodinger Equation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barhen, Jacob; Toomerian, Nikzad

    1994-01-01

    In this paper massively parallel algorithms for solution of Schrodinger equation are developed. Our results clearly indicate that the Crank-Nicolson method, in addition to its excellent numerical properties, is also highly suitable for massively parallel computation.
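
    As a serial reference for the scheme named above, one Crank-Nicolson step for the 1-D Schrödinger equation i ψ_t = -ψ_xx/2 + V ψ (ħ = m = 1) reduces to a tridiagonal solve; the massively parallel versions distribute exactly this solve. The discretization choices below (Dirichlet ends, central differences) are assumptions.

        import numpy as np
        from scipy.linalg import solve_banded

        def crank_nicolson_step(psi, V, dx, dt):
            """(I + i dt/2 H) psi_new = (I - i dt/2 H) psi_old, H tridiagonal.

            psi, V: complex/real arrays on the same grid; zero boundary values.
            """
            n = len(psi)
            off = -0.5 / dx**2                    # off-diagonal of H
            diag = 1.0 / dx**2 + V                # main diagonal of H
            # Right-hand side: (I - i dt/2 H) psi
            rhs = (1.0 - 0.5j * dt * diag) * psi
            rhs[:-1] += -0.5j * dt * off * psi[1:]
            rhs[1:] += -0.5j * dt * off * psi[:-1]
            # Banded left-hand side: (I + i dt/2 H)
            ab = np.zeros((3, n), dtype=complex)
            ab[0, 1:] = 0.5j * dt * off           # super-diagonal
            ab[2, :-1] = 0.5j * dt * off          # sub-diagonal
            ab[1, :] = 1.0 + 0.5j * dt * diag
            return solve_banded((1, 1), ab, rhs)

    For real potentials this update is unitary up to round-off, which is one of the "excellent numerical properties" the abstract alludes to.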

  7. RELAP-7 Pressurizer Component Development Updates

    SciTech Connect

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling; Martineau, Richard; Holten, Michael; Wu, Qiao

    2016-03-01

    RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory (INL). RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory’s modern scientific software development framework – MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose various boiling water reactor (BWR) and pressurized water reactor nuclear power plants (NPPs). As part of the efforts to expand the capability for PWR simulation, an equilibrium single-region pressurizer model has been implemented in RELAP-7. The pressurizer component can simulate pressure and water level change through insurge, spray, and heating processes. Two simple tests – one for insurge process and another for outsurge process – have been reported to demonstrate and verify the functions of the pressurizer model. The typical single-phase PWR system model presented in the first RELAP-7 milestone report has been updated, as part of system level test for the new pressurizer model. The updated PWR system model with the pressurizer component can be used for more realistic transient simulations. The addition of the equilibrium single-region pressurizer model represents the first step of developing a suite of pressurizer models with

  8. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also gives users the flexibility to explore a broader range of mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
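
    The uDE mutation equation itself is given in the paper; for contrast, the conventional DE/rand/1/bin baseline that it generalizes can be sketched as follows (population size and control parameters are illustrative assumptions).

        import numpy as np

        def de_rand_1_bin(f, bounds, pop_size=30, gens=200, F=0.8, CR=0.9, seed=0):
            """Conventional DE/rand/1/bin; bounds is an array of (low, high) pairs."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            d = len(lo)
            pop = rng.uniform(lo, hi, size=(pop_size, d))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    r1, r2, r3 = rng.choice(
                        [j for j in range(pop_size) if j != i], 3, replace=False)
                    v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)  # mutation
                    cross = rng.random(d) < CR
                    cross[rng.integers(d)] = True          # ensure one mutated gene
                    u = np.where(cross, v, pop[i])         # binomial crossover
                    fu = f(u)
                    if fu <= fit[i]:                       # greedy selection
                        pop[i], fit[i] = u, fu
            return pop[fit.argmin()], fit.min()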

  9. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions; the Mars Sample Return mission was initially based on insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA through a fruitful joint working group. To conclude the study, an evaluation campaign was performed to test the different algorithms, with the objective of assessing the robustness, accuracy, load-limiting capability, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared with the analytical concepts, while all the other algorithms are well suited to guaranteeing the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  10. An extension of the QZ algorithm for solving the generalized matrix eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Ward, R. C.

    1973-01-01

    This algorithm is an extension of Moler and Stewart's QZ algorithm with some added features for saving time and operations. In addition, some properties of the QR algorithm that were not practical to implement in the QZ algorithm can be generalized with the combination-shift QZ algorithm. Numerous test cases are presented as practical application tests of the algorithm. Based on the results, this algorithm should be preferred over existing algorithms that attempt to solve the class of generalized eigenproblems in which both matrices are singular or nearly singular.
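
    The QZ factorization this work extends is available today in standard libraries; as a usage sketch (the matrices are illustrative), the snippet below solves a generalized eigenproblem A x = λ B x with a singular B, the case the abstract targets, reading eigenvalues off as (alpha, beta) pairs so infinite eigenvalues are flagged rather than overflowing.

        import numpy as np
        from scipy.linalg import qz

        # A x = lambda B x with singular B: one finite and one infinite eigenvalue
        A = np.array([[2.0, 1.0],
                      [0.0, 3.0]])
        B = np.array([[1.0, 0.0],
                      [0.0, 0.0]])

        AA, BB, Q, Z = qz(A, B, output='complex')   # A = Q @ AA @ Z^H, B = Q @ BB @ Z^H
        alpha = np.diag(AA)                         # eigenvalue numerators
        beta = np.diag(BB)                          # eigenvalue denominators
        for a, b in zip(alpha, beta):
            print('infinite' if abs(b) < 1e-12 else a / b)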

  11. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms that are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems and in the integration of propulsion systems with airframes; design tools for these purposes are currently lacking. Our technical approach combines the development of new algorithms with the use of Mathematica and Unix utilities to automate algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions that converge to those of the full open-domain problems, so the error from the boundary treatments can be driven as low as required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have been developed.

  12. An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Moncet, J. L.

    2015-12-01

    We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from

  13. Numerical and Analytic Studies of Random-Walk Models.

    NASA Astrophysics Data System (ADS)

    Li, Bin

    We begin by recapitulating the universality approach to problems associated with critical systems and discussing the role that random-walk models play in the study of phase transitions and critical phenomena. As our first numerical simulation project, we perform high-precision Monte Carlo calculations of the exponents of the intersection probability of pairs and triplets of ordinary random walks in 2 dimensions, in order to test the predictions of conformal-invariance theory; our numerical results strongly support the theory. Our second numerical project aims to test the hyperscaling relation dν = 2Δ₄ − γ for self-avoiding walks in 2 and 3 dimensions. We apply the pivot method to generate pairs of self-avoiding walks and then, for each pair, use the Karp-Luby algorithm in an inner-loop Monte Carlo calculation of the number of distinct translates of one walk that make at least one intersection with the other. Applying a least-squares fit to estimate the exponents, we obtain strong numerical evidence that the hyperscaling relation is true in 3 dimensions. Our large amount of data for walks of unprecedented length (up to 80000 steps) yields an updated value for the end-to-end distance and radius-of-gyration exponent, ν = 0.588 ± 0.001 (95% confidence limit), in good agreement with the renormalization-group prediction. In an analytic study of random-walk models, we introduce multi-colored random-walk models and generalize the Symanzik and B.F.S. random-walk representations to the multi-colored case. We prove that the zero-component λφ²ψ² theory can be represented by a two-color mutually-repelling random-walk model, which becomes the mutually-avoiding walk model in the limit λ → ∞. However, our main concern and major breakthrough lies in the study of the two-point correlation function for the λφ²ψ² theory with N > 0 components, by representing it as a two-color random-walk expansion.

  14. Numerical study for the calculation of computer-generated hologram in color holographic 3D projection enabled by modified wavefront recording plane method

    NASA Astrophysics Data System (ADS)

    Chang, Chenliang; Qi, Yijun; Wu, Jun; Yuan, Caojin; Nie, Shouping; Xia, Jun

    2017-03-01

    A method of calculating computer-generated holograms (CGHs) for color holographic 3D projection is proposed. A color 3D object is decomposed into red, green, and blue components. For each color component, a virtual wavefront recording plane (WRP) is established and sampled nonuniformly according to the depth map of the 3D object. The hologram of each color component is calculated from the nonuniformly sampled WRP using the shifted Fresnel diffraction algorithm. Finally, the three holograms of the RGB components are encoded into a single CGH based on the multiplexing encoding method. The computational cost of CGH generation is reduced by converting the diffraction calculation from a huge set of 3D voxels to three 2D planar images. Numerical experiments show that the CGH generated by our method can project a zoomable color 3D object with clear quality.

  15. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  16. Adaptive phase aberration correction based on imperialist competitive algorithm.

    PubMed

    Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R

    2014-01-01

    We investigate numerically the feasibility of phase aberration correction in a wavefront sensorless adaptive optical system, based on the imperialist competitive algorithm (ICA). Considering a 61-element deformable mirror (DM) and the Strehl ratio as the cost function of ICA, this algorithm is employed to search the optimum surface profile of DM for correcting the phase aberrations in a solid-state laser system. The correction results show that ICA is a powerful correction algorithm for static or slowly changing phase aberrations in optical systems, such as solid-state lasers. The correction capability and the convergence speed of this algorithm are compared with those of the genetic algorithm (GA) and stochastic parallel gradient descent (SPGD) algorithm. The results indicate that these algorithms have almost the same correction capability. Also, ICA and GA are almost the same in convergence speed and SPGD is the fastest of these algorithms.

  17. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  18. What Is Numerical Control?

    ERIC Educational Resources Information Center

    Goold, Vernell C.

    1977-01-01

    Numerical control (a technique involving coded, numerical instructions for the automatic control and performance of a machine tool) does not replace fundamental machine tool training. It should be added to the training program to give the student an additional tool to accomplish production rates and accuracy that were not possible before. (HD)

  19. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution of the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix, with second-order corrections based on Yee's symmetric total variation diminishing scheme; viscous terms are discretized using central differences. The relaxation strategy is well suited to computers employing either vector or parallel architectures, and to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large block-tridiagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and for Mach 14 flow over a 15 deg ramp. Predictions of pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  20. Generalization of the FDTD algorithm for simulations of hydrodynamic nonlinear Drude model

    SciTech Connect

    Liu Jinjie; Brio, Moysey; Zeng Yong; Zakharian, Armis R.; Hoyer, Walter; Koch, Stephan W.; Moloney, Jerome V.

    2010-08-20

    In this paper we present a numerical method for solving a three-dimensional cold-plasma system that describes electron gas dynamics driven by an external electromagnetic wave excitation. The nonlinear Drude dispersion model is derived from the cold-plasma fluid equations and is coupled to Maxwell's field equations. The finite-difference time-domain (FDTD) method is applied to solve Maxwell's equations, in conjunction with a time-split semi-implicit numerical method for the nonlinear dispersion and a physics-based treatment of the discontinuity of the electric field component normal to the dielectric-metal interface. The application of the proposed algorithm is illustrated by modeling light pulse propagation and second-harmonic generation (SHG) in metallic metamaterials (MMs), showing good agreement between computed and published experimental results.
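
    Stripped of the Drude current and the interface treatment, the FDTD core referred to above is the standard Yee leapfrog update. A minimal 1-D vacuum sketch is given below for orientation only (normalized units, perfectly conducting ends, and a soft Gaussian source are all assumptions).

        import numpy as np

        def fdtd_1d(nx=400, nt=600, courant=0.5):
            """Minimal 1-D vacuum Yee scheme (normalized units, c = 1, dx = 1)."""
            ez = np.zeros(nx)          # electric field at integer grid points
            hy = np.zeros(nx - 1)      # magnetic field at half grid points
            for n in range(nt):
                hy += courant * np.diff(ez)            # Faraday half of the update
                ez[1:-1] += courant * np.diff(hy)      # Ampere half of the update
                ez[nx // 4] += np.exp(-((n - 60) / 15.0) ** 2)   # soft source
            return ez

    The paper's solver adds a semi-implicit update for the nonlinear Drude current between these two half-updates and a special rule for the normal-E jump at metal surfaces.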