Science.gov

Sample records for point decomposition algorithm

  1. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
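
    To make the additive/multiplicative distinction concrete, here is a minimal Python sketch (my own illustration, not the authors' code) of both Schwarz variants for the 1D Poisson problem -u'' = f with homogeneous Dirichlet conditions, using two overlapping index-set subdomains; the damping factor 0.5 in the additive update is an illustrative choice, not taken from the paper.

    import numpy as np

    n = 99                            # interior grid points on (0, 1)
    h = 1.0 / (n + 1)
    f = np.ones(n)                    # right-hand side f(x) = 1
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    # two overlapping subdomains (index sets) with about 10 points of overlap
    sub1, sub2 = np.arange(0, 55), np.arange(45, n)

    def schwarz(u, multiplicative=True, sweeps=50):
        for _ in range(sweeps):
            corrections = []
            for idx in (sub1, sub2):
                r = f - A @ u                                  # global residual
                e = np.zeros(n)
                e[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
                if multiplicative:
                    u = u + e                                  # apply the correction immediately
                else:
                    corrections.append(e)                      # accumulate corrections
            if not multiplicative:
                u = u + 0.5 * sum(corrections)                 # damped additive update
        return u

    u_exact = np.linalg.solve(A, f)
    for mult in (True, False):
        err = np.linalg.norm(schwarz(np.zeros(n), multiplicative=mult) - u_exact)
        print("multiplicative" if mult else "additive", "error:", err)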

  2. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, composed of a sequence of generic elementary "gates".

  3. Algorithms for the Markov entropy decomposition

    NASA Astrophysics Data System (ADS)

    Ferris, Andrew J.; Poulin, David

    2013-05-01

    The Markov entropy decomposition (MED) is a recently proposed, cluster-based simulation method for finite temperature quantum systems with arbitrary geometry. In this paper, we detail numerical algorithms for performing the required steps of the MED, principally solving a minimization problem with a preconditioned Newton's algorithm, as well as how to extract global susceptibilities and thermal responses. We demonstrate the power of the method with the spin-1/2 XXZ model on the 2D square lattice, including the extraction of critical points and details of each phase. Although the method shares some qualitative similarities with exact diagonalization, we show that the MED is both more accurate and significantly more flexible.

  4. Finding corner point correspondence from wavelet decomposition of image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; LeMoigne, Jacqueline

    1997-01-01

    A time-efficient algorithm for image registration between two images that differ by a translation is discussed. The algorithm is based on a coarse-to-fine strategy using wavelet decomposition of both images. The wavelet decomposition serves two different purposes: (1) its high-frequency components are used to detect feature points (corner points here), and (2) it provides a coarse-to-fine structure that makes the algorithm time efficient. The algorithm detects corner points in one of the images, called the reference image, and computes the corresponding points in the other image, called the test image, by local correlations over 7x7 windows centered on the corner points. The corresponding points are detected at the lowest decomposition level in a search area of about 11x11 (depending on the translation), and potential points of correspondence are projected onto higher levels. In the subsequent levels the local correlations are computed in a search area of no more than 3x3 to refine the correspondence.
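
    A minimal sketch of the coarse-to-fine correspondence search (my own illustration, not the NTRS code): 2x2 block averaging stands in for the wavelet low-pass channel, normalized cross-correlation of 7x7 patches scores candidate matches, and the search windows (wide at the coarsest level, 3x3 at finer levels) follow the abstract. Corners are assumed to lie away from the image borders.

    import numpy as np

    def downsample(img):
        # 2x2 block averaging -- a stand-in for the wavelet approximation band
        h, w = img.shape
        return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def patch(img, y, x, half=3):
        # 7x7 patch centered on (y, x); empty if it would leave the image
        if y - half < 0 or x - half < 0 or y + half + 1 > img.shape[0] or x + half + 1 > img.shape[1]:
            return np.empty((0, 0))
        return img[y - half:y + half + 1, x - half:x + half + 1]

    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / d) if d > 0 else -1.0

    def match_corner(ref_pyr, test_pyr, corner, coarse_search=5, fine_search=1):
        # ref_pyr/test_pyr: lists of images, index 0 = full resolution
        # corner: (y, x) detected in ref_pyr[0]; returns matching (y, x) in test_pyr[0]
        levels = len(ref_pyr)
        best = None
        for lvl in range(levels - 1, -1, -1):
            ry, rx = corner[0] >> lvl, corner[1] >> lvl        # corner at this level
            search = coarse_search if best is None else fine_search
            cy, cx = (ry, rx) if best is None else (best[0] * 2, best[1] * 2)
            ref_patch = patch(ref_pyr[lvl], ry, rx)
            scores = {}
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = patch(test_pyr[lvl], cy + dy, cx + dx)
                    if cand.shape == ref_patch.shape and ref_patch.size:
                        scores[(cy + dy, cx + dx)] = ncc(ref_patch, cand)
            best = max(scores, key=scores.get)
        return best

    # toy usage: the test image is the reference shifted by (6, -4)
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    test = np.roll(ref, (6, -4), axis=(0, 1))
    ref_pyr = [ref, downsample(ref)]
    test_pyr = [test, downsample(test)]
    print(match_corner(ref_pyr, test_pyr, (40, 60)))           # -> (46, 56)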

  5. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.

  6. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.

  7. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article the author presents a decomposition algorithm for discrete systems described by Petri nets using p-invariants. The decomposition process is significant from the point of view of discrete system design, because it allows the separation of smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's selection algorithm. The developed method is a good complement to classical decomposition algorithms based on graphs and hypergraphs.

  8. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^((ω/2)k)) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^((ω/2)k)). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  9. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
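
    A minimal sketch of the classical matching pursuit loop that MPD++ builds on (my own illustration, not the NASA implementation); the correlation-threshold pruning mentioned above appears as an optional min_corr cut-off, and dictionary atoms are assumed to be unit-norm columns of D.

    import numpy as np

    def matching_pursuit(signal, D, max_iter=100, tol=1e-6, min_corr=0.0):
        # greedy decomposition: signal ~= sum_k coeffs[k] * D[:, atoms[k]]
        residual = signal.astype(float).copy()
        atoms, coeffs = [], []
        for _ in range(max_iter):
            corr = D.T @ residual                 # cross-correlation with every atom
            k = int(np.argmax(np.abs(corr)))
            if np.abs(corr[k]) < min_corr:        # prune insignificant atoms
                break
            atoms.append(k)
            coeffs.append(corr[k])
            residual -= corr[k] * D[:, k]         # subtract the best-fit atom
            if np.linalg.norm(residual) < tol:
                break
        return atoms, coeffs, residual

    # toy usage: an overcomplete random dictionary with unit-norm atoms
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)
    x = 2.0 * D[:, 10] - 0.5 * D[:, 99]
    atoms, coeffs, r = matching_pursuit(x, D, max_iter=10)
    print(atoms, np.round(coeffs, 3), np.linalg.norm(r))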

  10. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, motivated in part by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface system, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.

  11. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. The most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  12. Avoiding spurious submovement decompositions: a globally optimal algorithm.

    SciTech Connect

    Rohrer, Brandon Robinson; Hogan, Neville

    2003-07-01

    Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.

  13. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming, hydroelectric scheduling, large-scale systems.

  14. A Decomposition Framework for Image Denoising Algorithms.

    PubMed

    Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmio, Marcelo; Levine, Stacey

    2016-01-01

    In this paper, we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). The strategy we develop is then to denoise the components of the image in the moving frame in order to preserve its local geometry, which would have been more affected had the image been processed directly. Experiments on a whole image database tested with several denoising methods show that this framework can provide better results than denoising the image directly, both in terms of peak signal-to-noise ratio and structural similarity index metrics.

  15. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; ...

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.
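
    For reference, a minimal sketch of the non-incremental baseline these algorithms improve on: k-core decomposition by repeatedly peeling a minimum-degree vertex (my own illustration, not the authors' incremental code). The graph is an adjacency dictionary.

    def core_numbers(adj):
        # return {vertex: core number} via repeated peeling of a minimum-degree vertex
        degree = {v: len(nbrs) for v, nbrs in adj.items()}
        core = {}
        remaining = set(adj)
        k = 0
        while remaining:
            v = min(remaining, key=degree.get)      # current minimum-degree vertex
            k = max(k, degree[v])                   # core numbers never decrease in peel order
            core[v] = k
            remaining.remove(v)
            for u in adj[v]:
                if u in remaining:
                    degree[u] -= 1
        return core

    # toy usage: a triangle (vertices 1, 2, 3) with a pendant vertex 4
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(core_numbers(adj))   # vertex 4 has core number 1; vertices 1, 2, 3 have core number 2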

  16. Incremental k-core decomposition: Algorithms and evaluation

    SciTech Connect

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; Wu, Kun-Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.

  17. An algorithm for domain decomposition in finite element analysis

    NASA Technical Reports Server (NTRS)

    Al-Nasra, M.; Nguyen, D. T.

    1991-01-01

    A simple and efficient algorithm is described for automatic decomposition of an arbitrary finite element domain into a specified number of subdomains for finite element and substructuring analysis in a multiprocessor computer environment. The algorithm is designed to balance the work loads, to minimize the communication among processors and to minimize the bandwidths of the resulting system of equations. Small- to large-scale finite element models, which have two-node elements (truss, beam element), three-node elements (triangular element) and four-node elements (quadrilateral element), are solved on the Convex computer to illustrate the effectiveness of the proposed algorithm. A FORTRAN computer program is also included.

  18. An Algorithm for image removals and decompositions without inverse matrices

    NASA Astrophysics Data System (ADS)

    Yi, Dokkyun

    2009-03-01

    Partial Differential Equation (PDE) based methods in image processing have been actively studied in the past few years. One of the effective methods is the method based on total variation introduced by Rudin, Osher and Fatemi (ROF) [L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992) 259-268]. This method is a well-known edge-preserving model and a useful tool for image removals and decompositions. Unfortunately, this method has a nonlinear term in the equation which may yield an inaccurate numerical solution. To overcome the nonlinearity, a fixed point iteration method has been widely used. The nonlinear system based on the total variation is induced from the ROF model, and the fixed point iteration method to solve the ROF model was introduced by Dobson and Vogel [D.C. Dobson, C.R. Vogel, Convergence of an iterative method for total variation denoising, SIAM J. Numer. Anal. 34 (5) (1997) 1779-1791]. However, some methods had to compute inverse matrices, which led to roundoff error. To address this problem, we developed an efficient method for solving the ROF model. We construct a sequence, as in Richardson's method, by using a fixed point iteration to evade the nonlinear equation. This approach does not require the computation of inverse matrices. The main idea is to make a direction vector for reducing the error at each iteration step. In other words, we make the next iteration reduce the error using the computed error and the direction vector. We show that our method works well in theory. In numerical experiments, we present the results of the proposed method and compare them with the results of D. Dobson and C. Vogel, and then confirm the superiority of our method.
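
    A minimal sketch (not the author's Richardson-like scheme) of ROF-type denoising by explicit gradient descent on the smoothed total-variation energy E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2)||u - f||^2, which, like the method above, avoids forming or inverting any matrix; the step size, eps, and lam below are illustrative choices.

    import numpy as np

    def tv_denoise(f, lam=0.2, eps=0.05, tau=0.01, iters=1000):
        u = f.copy()
        for _ in range(iters):
            # forward differences, replicating the last row/column (edge difference is 0)
            ux = np.diff(u, axis=1, append=u[:, -1:])
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux**2 + uy**2 + eps**2)
            px, py = ux / mag, uy / mag                   # smoothed, normalized gradient field
            # discrete divergence (negative adjoint of the forward difference)
            div = (np.diff(px, axis=1, prepend=0.0)
                   + np.diff(py, axis=0, prepend=0.0))
            u = u + tau * (div - lam * (u - f))           # explicit descent step, no matrix inverse
        return u

    # toy usage: denoise a noisy step image
    rng = np.random.default_rng(1)
    clean = np.zeros((64, 64))
    clean[:, 32:] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = tv_denoise(noisy)
    print("noisy MAE:", np.abs(noisy - clean).mean(),
          "denoised MAE:", np.abs(denoised - clean).mean())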

  19. Efficient variants of the vertex space domain decomposition algorithm

    SciTech Connect

    Chan, T.F.; Shao, J.P. (Dept. of Mathematics); Mathew, T.P. (Dept. of Mathematics)

    1994-11-01

    Several variants of the vertex space algorithm of Smith for two-dimensional elliptic problems are described. The vertex space algorithm is a domain decomposition method based on nonoverlapping subregions, in which the reduced Schur complement system on the interface is solved using a generalized block Jacobi-type preconditioner, with the blocks corresponding to the vertex space, edges, and a coarse grid. Two kinds of approximations are considered for the edge and vertex space subblocks, one based on Fourier approximation, and another based on an algebraic probing technique in which sparse approximations to these subblocks are computed. The motivation is to improve the efficiency of the algorithm without sacrificing the optimal convergence rate. Numerical and theoretical results on the performance of these algorithms, including variants of an algorithm of Bramble, Pasciak, and Schatz are presented.

  20. Training for Retrieval of Knowledge under Stress through Algorithmic Decomposition

    DTIC Science & Technology

    1986-10-01

    ... impaired ability to read. Two percent of all first graders have dyslexia. A screening test for dyslexia has recently been devised that can be used with ... aided subjects had to read a detailed tutorial, describing the algorithmic decomposition approach and why it should be used. The tutorial, shown in ... information, it did so as a mechanical procedure, rather than contributing qualitatively to subjects' comprehension. Fischhoff & Bar-Hillel also tested

  1. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    NASA Astrophysics Data System (ADS)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  2. A recursive algorithm for the incomplete partial fraction decomposition

    NASA Astrophysics Data System (ADS)

    Laurie, Dirk P.

    1987-05-01

    Given polynomials P_{m+n-1}, D_m, and E_n (where the subscript denotes the degree), the incomplete partial fraction decomposition is equivalent to constructing polynomials Q_{n-1} and R_{m-1} such that P_{m+n-1} = Q_{n-1} D_m + E_n R_{m-1}. An elegant algorithm, designed for the case when m ≪ n, was given by Henrici [ZAMP, 1971]. When this algorithm is applied to cases where m ≈ n, it seems to suffer from numerical instability. The purpose of this paper is to explain the numerical instability, and to suggest a modified version of Henrici's algorithm in which the instability is substantially reduced. A numerical example is given.
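
    A minimal sketch (neither Henrici's algorithm nor the modified version proposed here) that computes the incomplete partial fraction decomposition directly, by writing the coefficient identity P_{m+n-1} = Q_{n-1} D_m + E_n R_{m-1} as a dense (m+n) x (m+n) linear system; polynomials are numpy coefficient arrays in increasing powers, and D and E are assumed coprime.

    import numpy as np
    from numpy.polynomial.polynomial import polymul

    def conv_matrix(c, cols):
        # matrix M such that M @ q gives the coefficients of poly(c) * poly(q), len(q) = cols
        rows = len(c) + cols - 1
        M = np.zeros((rows, cols))
        for j in range(cols):
            M[j:j + len(c), j] = c
        return M

    def incomplete_pfd(P, D, E):
        m, n = len(D) - 1, len(E) - 1                           # degrees of D and E
        A = np.hstack([conv_matrix(D, n), conv_matrix(E, m)])   # (m+n) x (m+n) system
        coeffs = np.linalg.solve(A, P)
        return coeffs[:n], coeffs[n:]                           # Q (degree n-1), R (degree m-1)

    # toy usage: D = x - 1, E = x^2 + 1, with known Q = 2 - x and R = 3
    D = np.array([-1.0, 1.0])
    E = np.array([1.0, 0.0, 1.0])
    Q_true, R_true = np.array([2.0, -1.0]), np.array([3.0])
    P = polymul(Q_true, D) + polymul(R_true, E)
    Q, R = incomplete_pfd(P, D, E)
    print(Q, R)    # recovers Q = 2 - x and R = 3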

  3. Implementation and performance of a domain decomposition algorithm in Sisal

    SciTech Connect

    DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.

    1993-09-23

    Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer and a cache-coherent scalar multiprocessor.

  4. Genetic Algorithms, Floating Point Numbers and Applications

    NASA Astrophysics Data System (ADS)

    Hardy, Yorick; Steeb, Willi-Hans; Stoop, Ruedi

    The core of most genetic algorithms is the bitwise manipulation of bit strings. We show that one can directly manipulate the bits in floating point numbers. This means that the main bitwise operations in genetic algorithms, mutation and crossover, are done directly inside the floating point number. Thus the interval under consideration does not need to be known in advance. As applications, we consider finding the roots of polynomials and solutions of linear equations.
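
    A minimal sketch of operating on the bits of IEEE-754 doubles directly (my own illustration, not the authors' code): struct reinterprets a float's 64 bits as an integer so that mutation and crossover become integer bit operations. Restricting the flips and the crossover point to the 52 mantissa bits is a simplification used here to avoid producing NaN or infinity.

    import random
    import struct

    def float_to_bits(x):
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    def bits_to_float(b):
        return struct.unpack("<d", struct.pack("<Q", b))[0]

    def mutate(x, n_flips=1):
        b = float_to_bits(x)
        for _ in range(n_flips):
            b ^= 1 << random.randrange(52)           # flip a random mantissa bit
        return bits_to_float(b)

    def crossover(x, y, point=None):
        point = random.randrange(1, 53) if point is None else point
        bx, by = float_to_bits(x), float_to_bits(y)
        mask = (1 << point) - 1                      # low `point` bits come from the other parent
        child1 = (bx & ~mask) | (by & mask)
        child2 = (by & ~mask) | (bx & mask)
        return bits_to_float(child1), bits_to_float(child2)

    random.seed(0)
    x = 3.14159
    print("mutated:", mutate(x))
    print("children:", crossover(x, 2.71828, point=26))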

  5. Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multisource data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of Girvan and Newman, for example, makes it impractical for graphs with more than approximately 10^4 nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10^9 nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in Girvan and Newman

  6. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  7. Fast point cloud registration algorithm using multiscale angle features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Guo, Congling; Fang, Ying; Xia, Guihua; Wang, Wanjia; Elahi, Ahsan

    2017-05-01

    To fulfill the demands of rapid and real-time three-dimensional optical measurement, a fast point cloud registration algorithm using multiscale axis angle features is proposed. The key point is selected based on the mean value of scalar projections of the vectors from the estimated point to the points in the neighborhood on the normal of the estimated point. This method has a small amount of computation and good discriminating ability. A rotation invariant feature is proposed using the angle information calculated based on multiscale coordinate axis. The feature descriptor of a key point is computed using cosines of the angles between corresponding coordinate axes. Using this method, the surface information around key points is obtained sufficiently in three axes directions and it is easy to recognize. The similarity of descriptors is employed to quickly determine the initial correspondences. The rigid spatial distance invariance and clustering selection method are used to make the corresponding relationships more accurate and evenly distributed. Finally, the rotation matrix and translation vector are determined using the method of singular value decomposition. Experimental results show that the proposed algorithm has high precision, fast matching speed, and good antinoise capability.
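
    A minimal sketch of the final step mentioned above: once point correspondences are fixed, the rotation matrix and translation vector follow in closed form from the singular value decomposition of the cross-covariance matrix (the Kabsch/Umeyama solution); this is a generic illustration, not the authors' code.

    import numpy as np

    def rigid_transform(src, dst):
        # return R (3x3) and t (3,) minimizing sum ||R @ src_i + t - dst_i||^2
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # fix a possible reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t

    # toy usage: recover a known rotation/translation from noiseless correspondences
    rng = np.random.default_rng(0)
    src = rng.standard_normal((100, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([1.0, -2.0, 0.5])
    dst = src @ R_true.T + t_true
    R, t = rigid_transform(src, dst)
    print(np.allclose(R, R_true), np.allclose(t, t_true))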

  8. Fixed Point Implementations of Fast Kalman Algorithms.

    DTIC Science & Technology

    1983-11-01

    ... fixed point multiply ... a mean zero, variance N random vector s(t) ... In this paper we study scaling rules and round ... realized in a fast form that uses the so-called fast Kalman gain algorithm. The algorithm for the gain is fixed point. Scaling rules and expressions for

  9. On the equivalence of a class of inverse decomposition algorithms for solving systems of linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A class of direct inverse decomposition algorithms for solving systems of linear equations is presented. Their behavior in the presence of round-off errors is analyzed. It is shown that under some mild restrictions on their implementation, the class of direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.

  10. Singular value decomposition utilizing parallel algorithms on graphical processors

    SciTech Connect

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

    One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) Σ_{k=1..K} X(k) X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^H A, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder
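
    A minimal sketch of the point the abstract makes: the eigenvalues and eigenvectors of the sample spectral matrix C_x = (1/K) Σ_k X(k) X^H(k) can be read directly off an SVD of the snapshot matrix, without ever forming C_x. The array size and snapshot count below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 16, 200                                      # array elements, frequency snapshots
    X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

    # direct route: form the sample spectral matrix and eigendecompose it
    Cx = X @ X.conj().T / K
    eigvals = np.sort(np.linalg.eigvalsh(Cx))[::-1]

    # SVD route: singular values of X / sqrt(K) are the square roots of those eigenvalues
    U, s, Vh = np.linalg.svd(X / np.sqrt(K), full_matrices=False)
    print(np.allclose(eigvals, s**2))                   # True: same spectrum, Cx never needed
    print(U.shape)                                      # columns of U are the eigenvectors of Cx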

  11. Vertical decomposition with Genetic Algorithm for Multiple Sequence Alignment.

    PubMed

    Naznin, Farhana; Sarker, Ruhul; Essam, Daryl

    2011-08-25

    Many Bioinformatics studies begin with a multiple sequence alignment as the foundation for their research. This is because multiple sequence alignment can be a useful technique for studying molecular evolution and analyzing sequence structure relationships. In this paper, we have proposed a Vertical Decomposition with Genetic Algorithm (VDGA) for Multiple Sequence Alignment (MSA). In VDGA, we divide the sequences vertically into two or more subsequences, and then solve them individually using a guide tree approach. Finally, we combine all the subsequences to generate a new multiple sequence alignment. This technique is applied on the solutions of the initial generation and of each child generation within VDGA. We have used two mechanisms to generate an initial population in this research: the first mechanism is to generate guide trees with randomly selected sequences and the second is shuffling the sequences inside such trees. Two different genetic operators have been implemented with VDGA. To test the performance of our algorithm, we have compared it with existing well-known methods, namely PRRP, CLUSTALX, DIALIGN, HMMT, SB_PIMA, ML_PIMA, MULTALIGN, and PILEUP8, and also other methods, based on Genetic Algorithms (GA), such as SAGA, MSA-GA and RBT-GA, by solving a number of benchmark datasets from BAliBase 2.0. The experimental results showed that the VDGA with three vertical divisions was the most successful variant for most of the test cases in comparison to other divisions considered with VDGA. The experimental results also confirmed that VDGA outperformed the other methods considered in this research.
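
    A minimal sketch of the vertical-decomposition idea itself (my own illustration, not the VDGA implementation): sequences are cut at roughly proportional positions, each block of subsequences is aligned separately, and the aligned blocks are concatenated. align_block below is only a gap-padding placeholder standing in for the guide-tree aligner and genetic operators used inside VDGA.

    def vertical_decompose(seqs, divisions=3):
        # split each sequence into `divisions` proportional pieces
        blocks = []
        for d in range(divisions):
            block = []
            for s in seqs:
                start = len(s) * d // divisions
                end = len(s) * (d + 1) // divisions
                block.append(s[start:end])
            blocks.append(block)
        return blocks

    def align_block(block):
        # placeholder aligner: pad subsequences to equal length with gap characters
        width = max(len(s) for s in block)
        return [s.ljust(width, "-") for s in block]

    def vdga_like_alignment(seqs, divisions=3):
        blocks = [align_block(b) for b in vertical_decompose(seqs, divisions)]
        # concatenate the aligned blocks back into one alignment, sequence by sequence
        return ["".join(parts) for parts in zip(*blocks)]

    seqs = ["MKTAYIAKQR", "MKTAYIKQR", "MKAYIAKQRQ"]
    for row in vdga_like_alignment(seqs):
        print(row)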

  12. Vertical decomposition with Genetic Algorithm for Multiple Sequence Alignment

    PubMed Central

    2011-01-01

    Background Many Bioinformatics studies begin with a multiple sequence alignment as the foundation for their research. This is because multiple sequence alignment can be a useful technique for studying molecular evolution and analyzing sequence structure relationships. Results In this paper, we have proposed a Vertical Decomposition with Genetic Algorithm (VDGA) for Multiple Sequence Alignment (MSA). In VDGA, we divide the sequences vertically into two or more subsequences, and then solve them individually using a guide tree approach. Finally, we combine all the subsequences to generate a new multiple sequence alignment. This technique is applied on the solutions of the initial generation and of each child generation within VDGA. We have used two mechanisms to generate an initial population in this research: the first mechanism is to generate guide trees with randomly selected sequences and the second is shuffling the sequences inside such trees. Two different genetic operators have been implemented with VDGA. To test the performance of our algorithm, we have compared it with existing well-known methods, namely PRRP, CLUSTALX, DIALIGN, HMMT, SB_PIMA, ML_PIMA, MULTALIGN, and PILEUP8, and also other methods, based on Genetic Algorithms (GA), such as SAGA, MSA-GA and RBT-GA, by solving a number of benchmark datasets from BAliBase 2.0. Conclusions The experimental results showed that the VDGA with three vertical divisions was the most successful variant for most of the test cases in comparison to other divisions considered with VDGA. The experimental results also confirmed that VDGA outperformed the other methods considered in this research. PMID:21867510

  13. Spatial and angular domain decomposition algorithms for the curvilinear S_N transport theory method

    SciTech Connect

    Haghighat, A.

    1993-01-01

    This paper surveys several domain decomposition algorithms developed for the 1-D and 2-D curvilinear S_N transport theory methods. The angular and spatial domain decomposition algorithms are incorporated into TWOTRAN-II (an S_N production code) and the parallel performances of these algorithms are measured on the IBM 3090 multiprocessors, mainly in the "BATCH" mode. The measured parallel efficiencies for most test cases are greater than 60%. Further, it is demonstrated that the combined spatial and angular domain decomposition algorithms yield tens of independent tasks without significantly affecting solution convergence. 20 refs., 6 figs., 6 tabs.

  14. A parallel domain decomposition algorithm for coastal ocean circulation models based on integer linear programming

    NASA Astrophysics Data System (ADS)

    Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan

    2017-05-01

    This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with balanced work loads according to the number of processors and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (complete formulation) has no additional restriction, although it is impractical for large global domains. The second one (feasible) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. Parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than the ones with the base decomposition, and the complete formulation is better than the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.

  15. An application of ranking function of fuzzy numbers to solve fuzzy revised simplex algorithm and fuzzy decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Rostamy-Malkhalifeh, Mohsen; Farajollahi, Homa

    2011-12-01

    The decomposition algorithm is one of the methods that have been applied to convert a large-scale problem into one or more smaller problems. Under uncertainty, one approach is to use fuzzy linear programming (FLP) within this algorithm. FLP problems can be solved using ranking functions of fuzzy numbers. In this paper, we use a new ranking function suggested by Hajarri [11], propose a method for solving the fuzzy revised simplex algorithm, and then apply this algorithm to solve the fuzzy decomposition algorithm in the case of a bounded space.

  16. An application of ranking function of fuzzy numbers to solve fuzzy revised simplex algorithm and fuzzy decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Rostamy-Malkhalifeh, Mohsen; Farajollahi, Homa

    2012-01-01

    The decomposition algorithm is one of the methods that have been applied to convert a large-scale problem into one or more smaller problems. Under uncertainty, one approach is to use fuzzy linear programming (FLP) within this algorithm. FLP problems can be solved using ranking functions of fuzzy numbers. In this paper, we use a new ranking function suggested by Hajarri [11], propose a method for solving the fuzzy revised simplex algorithm, and then apply this algorithm to solve the fuzzy decomposition algorithm in the case of a bounded space.

  17. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of data 1.00,0.00,0.00 and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition of N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical access of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to verify this novel algorithm. PMID:25232912

  18. A Fast Iterated Conditional Modes Algorithm for Water-Fat Decomposition in MRI

    PubMed Central

    Huang, Fangping; Narayan, Sreenath; Wilson, David; Johnson, David; Zhang, Guo-Qiang

    2013-01-01

    Decomposition of water and fat in Magnetic Resonance Imaging (MRI) is important for biomedical research and clinical applications. In this paper, we propose a two-phased approach for the three-point water-fat decomposition problem. Our contribution consists of two components: (1) a background-masked Markov Random Field (MRF) energy model to formulate the local smoothness of field inhomogeneity; (2) a new Iterated Conditional Modes (ICM) algorithm for high-performance optimization of the MRF energy model. The MRF energy model is integrated with background masking to prevent error propagation of background estimates as well as to improve efficiency. The central component of our new ICM algorithm is the Stability Tracking (ST) mechanism, intended to dynamically track iterative stability on pixels so that computation per iteration is performed only on unstable pixels. The ST mechanism significantly improves the efficiency of ICM. We also develop a median-based initialization algorithm to provide good initial guesses for ICM iterations, and an adaptive gradient-based scheme for parametric configuration of the MRF model. We evaluate the robustness of our approach with high-resolution mouse datasets acquired from 7-Tesla MRI. PMID:21402510

  19. Parallel and serial variational inequality decomposition algorithms for multicommodity market equilibrium problems

    SciTech Connect

    Nagurney, A.; Kim, D.S.

    1989-01-01

    The authors have applied parallel and serial variational inequality (VI) diagonal decomposition algorithms to large-scale multicommodity market equilibrium problems. These decomposition algorithms resolve the VI problems into single commodity problems, which are then solved as quadratic programming problems. The algorithms are implemented on an IBM 3090-600E, and randomly generated linear and nonlinear problems with as many as 100 markets and 12 commodities are solved. The computational results demonstrate that the parallel diagonal decomposition scheme is amenable to parallelization. This is the first time that multicommodity equilibrium problems of this scale and level of generality have been solved. Furthermore, this is the first study to compare the efficiencies of parallel and serial VI decomposition algorithms. Although the authors have selected as a prototype an equilibrium problem in economics, virtually any equilibrium problem can be formulated and studied as a variational inequality problem. Hence, their results are not limited to applications in economics and operations research.

  20. Genetic algorithms with decomposition procedures for multidimensional 0-1 knapsack problems with block angular structures.

    PubMed

    Kato, K; Sakawa, M

    2003-01-01

    This paper presents a detailed treatment of genetic algorithms with decomposition procedures as developed for large scale multidimensional 0-1 knapsack problems with block angular structures. Through the introduction of a triple string representation and the corresponding decoding algorithm, it is shown that a potential solution satisfying not only block constraints but also coupling constraints can be obtained for each individual. Then genetic algorithms with decomposition procedures are presented as an approximate solution method for multidimensional 0-1 knapsack problems with block angular structures. Many computational experiments on numerical examples with 30, 50, 70, 100, 150, 200, 300, 500, and 1000 variables demonstrate the feasibility and efficiency of the proposed method.

  1. Chaotic Visual Cryptosystem Using Empirical Mode Decomposition Algorithm for Clinical EEG Signals.

    PubMed

    Lin, Chin-Feng

    2016-03-01

    This paper proposes a chaotic visual cryptosystem using an empirical mode decomposition (EMD) algorithm for clinical electroencephalography (EEG) signals. The basic design concept is to integrate two-dimensional (2D) chaos-based encryption scramblers, the EMD algorithm, and a 2D block interleaver method to achieve a robust and unpredictable visual encryption mechanism. Energy-intrinsic mode function (IMF) distribution features of the clinical EEG signal are developed for chaotic encryption parameters. The maximum and second maximum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used for the starting points of chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. The minimum and second minimum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used for the security level parameters of chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. Three EEG databases and seventeen clinical EEG signals were tested, and the average r and mse values are 0.0201 and 4.2626 × 10^-29, respectively, for the original and chaotically-encrypted through EMD clinical EEG signals. The chaotically-encrypted signal cannot be recovered if there is an error in the input parameters, for example, an initial point error of 0.000001%. The encryption effects of the proposed chaotic EMD visual encryption mechanism are excellent.

  2. A study of the Invariant Subspace Decomposition Algorithm for banded symmetric matrices

    SciTech Connect

    Bischof, C.; Sun, X.; Tsao, A.; Turnbull, T.

    1994-06-01

    In this paper, we give an overview of the Invariant Subspace Decomposition Algorithm for banded symmetric matrices and describe a sequential implementation of this algorithm. Our implementation uses a specialized routine for performing banded matrix multiplication together with successive band reduction, yielding a sequential algorithm that is competitive for large problems with the LAPACK QR code in computing all of the eigenvalues and eigenvectors of a dense symmetric matrix. Performance results are given on a variety of machines.

  3. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.
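
    A minimal sketch of the centroid step described above: each connected bright object is found and its centroid is computed as the intensity-weighted center of mass of its pixels. scipy's connected-component labeling stands in for the eight-direction perimeter search used in the article.

    import numpy as np
    from scipy import ndimage

    def weighted_centroids(image, threshold=0.0):
        # return a list of (row, col) intensity-weighted centroids, one per object
        mask = image > threshold
        labels, n = ndimage.label(mask)                  # connected-component labeling
        return ndimage.center_of_mass(image * mask, labels, range(1, n + 1))

    # toy usage: two blobs of different brightness
    img = np.zeros((20, 20))
    img[3:6, 3:6] = 1.0          # centroid near (4, 4)
    img[12:15, 10:16] = 2.0      # centroid near (13, 12.5)
    print(weighted_centroids(img))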

  4. Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Iswanto; Wahyunggoro, Oyas; Cahyadi, Adha Imam

    2017-04-01

    The paper aims to present a design algorithm for multi-quadrotor paths in order to move towards the goal quickly and avoid obstacles in an area with obstacles. There are several problems in path planning, including how to get to the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents a fuzzy logic algorithm and a fuzzy cell decomposition algorithm. The fuzzy logic algorithm is one of the artificial intelligence algorithms which can be applied to robot path planning and is able to detect static and dynamic obstacles. The cell decomposition algorithm is an algorithm from graph theory used to make a robot path map. By using the two algorithms the robot is able to get to the goal position and avoid obstacles, but it takes considerable time because they are not able to find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm used to provide weight values on the map applied for each quadrotor under decentralized control, so that the quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted have shown that the multi-quadrotor system can avoid various obstacles and find the shortest path by using the proposed algorithms.

  5. Determination of the Thermal Decomposition Products of Terephthalic Acid by Using Curie-Point Pyrolyzer

    NASA Astrophysics Data System (ADS)

    Begüm Elmas Kimyonok, A.; Ulutürk, Mehmet

    2016-04-01

    The thermal decomposition behavior of terephthalic acid (TA) was investigated by thermogravimetry/differential thermal analysis (TG/DTA) and Curie-point pyrolysis. TG/DTA analysis showed that TA sublimes at 276°C prior to decomposition. Pyrolysis studies were carried out at various temperatures ranging from 160 to 764°C. Decomposition products were analyzed and their structures were determined by gas chromatography-mass spectrometry (GC-MS). A total of 11 degradation products were identified at 764°C, whereas no peak was observed below 445°C. Benzene, benzoic acid, and 1,1'-biphenyl were identified as the major decomposition products, and other degradation products such as toluene, benzophenone, diphenylmethane, styrene, benzaldehyde, phenol, 9H-fluorene, and 9-phenyl-9H-fluorene were also detected. A pyrolysis mechanism was proposed based on the findings.

  6. Polarization demultiplexing by recursive least squares constant modulus algorithm based on QR decomposition

    NASA Astrophysics Data System (ADS)

    Ling, Zhao; Yeling, Wang; Guijun, Hu; Yunpeng, Cui; Jian, Shi; Li, Li

    2013-07-01

    Recursive least squares constant modulus algorithm based on QR decomposition (QR-RLS-CMA) is first proposed as the polarization demultiplexing method. We compare its performance with the stochastic gradient descent constant modulus algorithm (SGD-CMA) and the recursive least squares constant modulus algorithm (RLS-CMA) in a polarization-division-multiplexing system with coherent detection. It is demonstrated that QR-RLS-CMA is an efficient demultiplexing algorithm which can avoid the problem of step-length choice in SGD-CMA. Meanwhile, it also has better symbol error rate (SER) performance and more stable convergence property.

  7. Decomposition algorithms for stochastic programming on a computational grid.

    SciTech Connect

    Linderoth, J.; Wright, S.; Mathematics and Computer Science; Axioma Inc.

    2003-01-01

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.

  8. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
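
    A minimal sketch of QR-decomposition of a real square matrix by plain Givens rotations, the classical baseline the heap-transform approach is compared against (my own illustration, not the paper's MATLAB code).

    import numpy as np

    def givens_qr(A):
        # return Q, R with A = Q @ R, R upper triangular, using 2x2 Givens rotations
        A = A.astype(float)
        n = A.shape[0]
        Q = np.eye(n)
        R = A.copy()
        for j in range(n - 1):
            for i in range(n - 1, j, -1):                # zero out R[i, j] from the bottom up
                a, b = R[i - 1, j], R[i, j]
                r = np.hypot(a, b)
                if r == 0.0:
                    continue
                c, s = a / r, b / r
                G = np.array([[c, s], [-s, c]])
                R[[i - 1, i], :] = G @ R[[i - 1, i], :]
                Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
        return Q, R

    A = np.array([[4.0, 1.0, 2.0],
                  [2.0, 3.0, 0.0],
                  [1.0, 2.0, 5.0]])
    Q, R = givens_qr(A)
    print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0), np.allclose(Q.T @ Q, np.eye(3)))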

  9. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data

    NASA Astrophysics Data System (ADS)

    Clark, Darin P.; Badea, Cristian T.

    2014-10-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.
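
    A minimal sketch of the basic, unregularized step underneath any material decomposition (not the spectral diffusion algorithm itself): at every pixel the vector of energy-channel measurements is fit with a calibrated sensitivity matrix to yield per-material concentrations. The sensitivity values below are made-up numbers, not a calibrated matrix.

    import numpy as np

    # rows: energy channels; columns: iodine, gold, gadolinium (hypothetical values)
    S = np.array([[1.00, 0.60, 0.85],
                  [0.55, 1.10, 0.70],
                  [0.30, 0.90, 1.20]])

    def decompose(measurements, S):
        # measurements: (..., channels) array -> (..., materials) concentrations
        flat = measurements.reshape(-1, S.shape[0]).T        # channels x pixels
        conc, *_ = np.linalg.lstsq(S, flat, rcond=None)      # least-squares fit per pixel
        return conc.T.reshape(measurements.shape[:-1] + (S.shape[1],))

    # toy usage: a 2x2 "image" with known concentrations, recovered exactly
    true = np.array([[[3.1, 0.0, 0.0], [0.0, 0.9, 0.0]],
                     [[0.0, 0.0, 2.9], [1.0, 1.0, 1.0]]])
    meas = true @ S.T
    print(np.allclose(decompose(meas, S), true))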

  10. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-07

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  11. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  12. A domain decomposition algorithm for solving large elliptic problems

    SciTech Connect

    Nolan, M.P.

    1991-01-01

    An algorithm which efficiently solves large systems of equations arising from the discretization of a single second-order elliptic partial differential equation is discussed. The global domain is partitioned into not necessarily disjoint subdomains which are traversed using the Schwarz Alternating Procedure. On each subdomain the multigrid method is used to advance the solution. The algorithm has the potential to decrease solution time when data is stored across multiple levels of a memory hierarchy. Results are presented for a virtual memory, vector multiprocessor architecture. A study of choice of inner iteration procedure and subdomain overlap is presented for a model problem, solved with two and four subdomains, sequentially and in parallel. Microtasking multiprocessing results are reported for multigrid on the Alliant FX-8 vector-multiprocessor. A convergence proof for a class of matrix splittings for the two-dimensional Helmholtz equation is given. 70 refs., 3 figs., 20 tabs.
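    A toy sketch of the Schwarz Alternating Procedure on a 1D Poisson problem is given below; a direct subdomain solve stands in for the multigrid inner iteration used in the record, and the grid, overlap and sweep count are arbitrary choices for illustration.

```python
# Alternating (multiplicative) Schwarz on -u'' = 1, u(0) = u(1) = 0, two overlapping subdomains.
import numpy as np

n = 101                                    # global grid points
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)                            # global iterate (boundary values stay 0)

def solve_subdomain(lo, hi, left_bc, right_bc):
    """Direct solve of -u'' = f on interior points lo+1..hi-1 with Dirichlet data."""
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[lo + 1:hi].copy()
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

# overlapping subdomains [0, 0.6] and [0.4, 1]
lo1, hi1 = 0, 60
lo2, hi2 = 40, 100
for sweep in range(20):                    # alternating Schwarz sweeps
    u[lo1 + 1:hi1] = solve_subdomain(lo1, hi1, u[lo1], u[hi1])
    u[lo2 + 1:hi2] = solve_subdomain(lo2, hi2, u[lo2], u[hi2])

x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(u - 0.5 * x * (1 - x))))   # exact solution is x(1-x)/2
```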

  13. Trident: An FPGA Compiler Framework for Floating-Point Algorithms.

    SciTech Connect

    Tripp J. L.; Peterson, K. D.; Poznanovic, J. D.; Ahrens, C. M.; Gokhale, M.

    2005-01-01

    Trident is a compiler for floating point algorithms written in C, producing circuits in reconfigurable logic that exploit the parallelism available in the input description. Trident automatically extracts parallelism and pipelines loop bodies using conventional compiler optimizations and scheduling techniques. Trident also provides an open framework for experimentation, analysis, and optimization of floating point algorithms on FPGAs and the flexibility to easily integrate custom floating point libraries.

  14. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies of today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant to both the study of ecosystem ecology and projections of future conditions for human societies.

  15. Some domain decomposition algorithms for mixed formulations of elasticity and incompressible fluids.

    SciTech Connect

    Dohrmann, Clark R.

    2010-06-01

    In this talk, we present a collection of domain decomposition algorithms for mixed finite element formulations of elasticity and incompressible fluids. The key component of each of these algorithms is the coarse space. Here, the coarse spaces are obtained in an algebraic manner by harmonically extending coarse boundary data. Various aspects of the coarse spaces are discussed for both continuous and discontinuous interpolation of pressure. Further, both classical overlapping Schwarz and hybrid iterative substructuring preconditioners are described. Numerical results are presented for almost incompressible elasticity and the Navier-Stokes equations which demonstrate the utility of the methods for both structured and irregular mesh decompositions. We also discuss a simple residual scaling approach which often leads to significant reductions in iterations for these algorithms.

  16. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of every previous algorithm is not high enough. In this work, we proposed a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods were included for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method was used to decompose the gene expression data, determine the algorithm solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm was used to infer GRNs, optimize the criteria of the differential equation model, and search for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm. Genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms other previous algorithms. PMID:23226565

  17. Inferring gene regulatory networks by singular value decomposition and gravitation field algorithm.

    PubMed

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of every previous algorithm is not high enough. In this work, we proposed a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods were included for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method was used to decompose the gene expression data, determine the algorithm solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm was used to infer GRNs, optimize the criteria of the differential equation model, and search for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm. Genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms other previous algorithms.
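    The sketch below illustrates, under the standard linear-ODE formulation of network inference, how an SVD of the expression matrix yields one least-norm connectivity matrix plus a null-space family of equally consistent candidates, the kind of solution space a search heuristic (here, the paper's gravitation field algorithm) would then explore. It is a generic illustration with random stand-in data, not the authors' implementation.

```python
# SVD step of linear GRN inference: family of connectivity matrices A with A X = Xdot.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples = 10, 6                             # fewer samples than genes -> underdetermined
X = rng.standard_normal((n_genes, n_samples))          # expression levels (stand-in data)
Xdot = rng.standard_normal((n_genes, n_samples))       # finite-difference derivatives (stand-in)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = np.sum(s > 1e-10 * s[0])                           # numerical rank
A0 = Xdot @ Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # least-norm solution of A X = Xdot

def candidate(K):
    """Any A0 + K (I - U_r U_r^T) reproduces the data equally well."""
    P_null = np.eye(n_genes) - U[:, :r] @ U[:, :r].T
    return A0 + K @ P_null

K = rng.standard_normal((n_genes, n_genes))
print(np.allclose(A0 @ X, candidate(K) @ X))           # True: same fit, different network
```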

  18. A Parallel Domain Decomposition BEM Algorithm for Three Dimensional Exponentially Graded Elasticity

    SciTech Connect

    Ortiz Tavara, Jhonny E; Shelton Jr, William Allison; Mantic, Vladislav; Criado, Rafael; Paris, Federico; Gray, Leonard J

    2008-01-01

    A parallel domain decomposition boundary integral algorithm for three-dimensional exponentially graded elasticity has been developed. As this subdomain algorithm allows the grading direction to vary in the structure, geometries arising from practical FGM applications can be handled. Moreover, the boundary integral algorithm scales well with the number of processors, also helping to alleviate the high computational cost of evaluating the Green's function. Numerical results for cylindrical geometries show excellent agreement with the new analytical solution deduced for axisymmetric plane strain states in a radially graded material.

  19. An Automatic Filter Algorithm for Dense Image Matching Point Clouds

    NASA Astrophysics Data System (ADS)

    Dong, Y. Q.; Zhang, L.; Cui, X. M.; Ai, H. B.

    2017-09-01

    Although many filter algorithms have been presented over the past decades, these algorithms are usually designed for Lidar point clouds and cannot completely separate ground points from DIM (dense image matching) point clouds derived from oblique aerial images, owing to the high density and variation of DIM point clouds. To solve this problem, a new automatic filter algorithm is developed on the basis of adaptive TIN models. At first, the differences between Lidar and DIM point clouds which influence the filtering results are analysed in this paper. To avoid the influence of plants, which cannot be penetrated by DIM point clouds, in the seed-point searching process, the algorithm makes use of the facades of buildings to obtain ground points located on roads as seed points and to construct the initial TIN. Then a new densification strategy is applied to deal with the problem that, in other methods, the densification thresholds do not change in each iterative process. Finally, we use DIM point clouds of Potsdam produced by Photo-Scan to evaluate the method proposed in this paper. The experimental results show that the method proposed in this paper can not only separate the ground points from the DIM point clouds completely but also obtain better filter results than TerraSolid.

  20. Cross-comparison of three electromyogram decomposition algorithms assessed with experimental and simulated data.

    PubMed

    Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A

    2015-01-01

    The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms-EMGlab (McGill , 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal , 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼ 97% and ∼ 92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal , 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.

  1. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. QR decomposition is implemented with the Coordinate Rotation DIgital Computer (CORDIC) algorithm, which requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (Eigen Value Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
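    The following is a minimal sketch of a CORDIC rotation in vectoring mode, the shift-and-add building block used to realize a Givens rotation inside a CORDIC-based QRD array. It is written in plain Python floating point for readability; an actual hardware design would of course use fixed-point arithmetic, and this is not the processor architecture of the record above.

```python
# CORDIC vectoring mode: drive the second component of a 2-vector to zero with shift-add steps.
import math

def cordic_vectoring(x, y, iterations=32):
    """Rotate (x, y) onto the x-axis; returns (magnitude, accumulated angle)."""
    if x < 0:                                  # pre-rotate into the right half-plane
        x, y = -x, -y
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0             # rotate toward y = 0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        angle -= d * math.atan(2.0**-i)
    K = math.prod(1.0 / math.sqrt(1.0 + 4.0**-i) for i in range(iterations))
    return x * K, angle                        # K removes the accumulated CORDIC gain

r, theta = cordic_vectoring(3.0, 4.0)
print(r, math.hypot(3.0, 4.0))                 # both close to 5.0
print(theta, math.atan2(4.0, 3.0))             # both close to 0.927
```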

  2. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    SciTech Connect

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10⁶ particles on 65,536 MPI tasks.

  3. Algorithms to Reveal Properties of Floating-Point Arithmetic

    DTIC Science & Technology

    Two algorithms are presented in the form of Fortran subroutines. Each subroutine computes the radix and number of digits of the floating-point numbers...and whether rounding or chopping is done by the machine on which it is run. The methods are shown to work on any 'reasonable' floating-point computer.
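    For illustration, here is a Python transcription of the classic Malcolm-style environment probe that such subroutines typically perform; the exact Fortran procedure of the record is not reproduced, and the expected output assumes a conventional binary floating-point format.

```python
# Infer the radix and number of base digits of the host floating-point arithmetic.
def float_radix_and_digits():
    a = 1.0
    while ((a + 1.0) - a) - 1.0 == 0.0:   # grow a until 1.0 is no longer representable next to a
        a *= 2.0
    b = 1.0
    while (a + b) - a == 0.0:             # smallest b that "shows up" next to a is the radix
        b += 1.0
    radix = (a + b) - a

    digits, power = 0, 1.0
    while ((power + 1.0) - power) - 1.0 == 0.0:
        digits += 1
        power *= radix                    # count how many radix digits the significand holds
    return int(radix), digits

print(float_radix_and_digits())           # expect (2, 53) for IEEE 754 double precision
```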

  4. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  5. Numerical study of variational data assimilation algorithms based on decomposition methods in atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Antokhin, Pavel

    2016-11-01

    The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically in the case where the emission inventories are missing while there are additional in situ indirect concentration measurements. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design allows avoiding iterative processes and working in real-time. In numerical experiments we study the sensitivity of data assimilation to measurement data quantity and quality.

  6. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806

  7. Decomposition-based multiobjective evolutionary algorithm for community detection in dynamic social networks.

    PubMed

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.

  8. A modified iterative closest point algorithm for shape registration

    NASA Astrophysics Data System (ADS)

    Tihonkih, Dmitrii; Makovetskii, Artyom; Kuznetsov, Vladislav

    2016-09-01

    The iterative closest point (ICP) algorithm is one of the most popular approaches to shape registration. The algorithm starts with two point clouds and an initial guess for a relative rigid-body transformation between them. Then it iteratively refines the transformation by generating pairs of corresponding points in the clouds and by minimizing a chosen error metric. In this work, we focus on the accuracy of the ICP algorithm. An important stage of the ICP algorithm is the search for nearest neighbors. We propose to utilize geometrically similar groups of points for this purpose. Groups of points of the first cloud that have no similar groups in the second cloud are not considered in further error minimization. To minimize errors, the class of affine transformations is used. The transformations are not rigid, in contrast to the classical approach. This approach allows us to get a precise solution for transformations such as rotation, translation and scaling. With the help of computer simulation, the proposed method is compared with common nearest neighbor search algorithms for shape registration.
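    For context, the sketch below is the classical rigid point-to-point ICP baseline that the record departs from (the paper itself uses group-based matching and affine transforms, neither of which is shown here). The synthetic clouds and the perturbation are made up for the demo.

```python
# Classical rigid ICP: nearest-neighbour correspondences + Kabsch/SVD rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares R, t mapping P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                      # refine the alignment
    return src

rng = np.random.default_rng(1)
target = rng.standard_normal((200, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
source = target @ Rz.T + np.array([0.1, -0.05, 0.05])
print(np.abs(icp(source, target) - target).max())   # residual error after refinement
```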

  9. Improved optimization algorithm for proximal point-based dictionary updating methods

    NASA Astrophysics Data System (ADS)

    Zhao, Changchen; Hwang, Wen-Liang; Lin, Chun-Liang; Chen, Weihai

    2016-09-01

    Proximal K-singular value decomposition (PK-SVD) is a dictionary updating algorithm that incorporates the proximal point method into K-SVD. The attempt of combining the proximal method and K-SVD has achieved promising results in such areas as sparse approximation, image denoising, and image compression. However, the optimization procedure of PK-SVD is complicated and, therefore, limits the algorithm in both theoretical analysis and practical use. This article proposes a simple but effective optimization approach to the formulation of PK-SVD. We cast this formulation as a fitting problem and relax the constraint on the direction of the k'th row in the sparse coefficient matrix. This relaxation strengthens the regularization effect of the proximal point. The proposed algorithm needs fewer steps to implement and further boosts the performance of PK-SVD while maintaining the same computational complexity. Experimental results demonstrate that the proposed algorithm outperforms conventional algorithms in reconstruction error, recovery rate, and convergence speed for sparse approximation and achieves better results in image denoising.

  10. Compressive holography algorithm for the objects composed of point sources.

    PubMed

    Liu, Jing; Zhang, Guoxian; Zhao, Kai; Jiang, Xiaoyu

    2017-01-20

    In this work, a compressive holography algorithm is proposed for objects composed of point sources. The proposed algorithm is based on Gabor holography, an amazingly simple and effective encoder for compressed sensing. In the proposed algorithm, the three-dimensional sampling space is uniformly divided into a number of grids, since the virtual object may appear anywhere in the sampling space. All the grids are mapped into an indication vector, which is sparse in nature considering that the number of grids occupied by the virtual object is far less than that of the whole sampling space. Consequently, the point source model can be represented in a compressed sensing framework. With the increase of the number of grids in the sampling space, the coherence of the sensing matrix gets higher, which no longer guarantees perfect reconstruction of the sparse vector with high probability. In this paper, a new algorithm named the fast compact sensing matrix pursuit algorithm is proposed to cope with the high coherence problem, as well as the unknown sparsity. A similar compact sensing matrix with low coherence is constructed based on the original sensing matrix using similarity analysis. In order to tackle the unknown sparsity, an orthogonal matching pursuit algorithm is utilized to calculate a rough estimate of the true support set, based on the similar compact sensing matrix and the measurement vector. The simulation and experimental results show that the proposed algorithm can efficiently reconstruct a sequence of 3D objects, including a Stanford Bunny with complex shape.
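    A minimal orthogonal matching pursuit (OMP) sketch follows, since OMP is the greedy support-recovery step the record builds on; the paper's fast compact sensing matrix pursuit and the Gabor-holography sensing matrix are not reproduced, and the random matrix below is only a stand-in.

```python
# Greedy sparse recovery (OMP) of a few "point sources" on a grid from compressed measurements.
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy sparse solve of y ~ A x with at most n_nonzero active grid cells."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0                 # do not re-select the same atom
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)                      # unit-norm columns (grid "atoms")
x_true = np.zeros(256)
x_true[[10, 87, 200]] = [1.0, -0.5, 2.0]            # three point sources on the grid
y = A @ x_true
x_hat = omp(A, y, n_nonzero=3)
print(np.flatnonzero(x_hat), np.allclose(x_hat, x_true, atol=1e-8))
```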

  11. An infrared target detection algorithm based on lateral inhibition and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Li, Yun; Song, Yong; Zhao, Yufei; Zhao, Shangnan; Li, Xu; Li, Lin; Tang, Songyuan

    2017-09-01

    This paper proposes an infrared target detection algorithm based on lateral inhibition (LI) and singular value decomposition (SVD). Firstly, a local structure descriptor based on SVD of gradient domain is constructed, which reflects basic structures of the local regions of an infrared image. Then, LI network is modified by combining LI with the local structure descriptor for enhancing target and suppressing background. Meanwhile, to calculate lateral inhibition coefficients adaptively, the direction parameters are determined by the dominant orientations obtained from SVD. Experimental results show that, compared with the typical algorithms, the proposed algorithm not only can detect small target or area target under complex backgrounds, but also has excellent abilities of background suppression and target enhancement.

  12. A domain decomposition parallel processing algorithm for molecular dynamics simulations of polymers

    NASA Astrophysics Data System (ADS)

    Brown, David; Clarke, Julian H. R.; Okuda, Motoi; Yamazaki, Takao

    1994-10-01

    We describe in this paper a domain decomposition molecular dynamics algorithm for use on distributed memory parallel computers which is capable of handling systems containing rigid bond constraints and three- and four-body potentials as well as non-bonded potentials. The algorithm has been successfully implemented on the Fujitsu 1024 processor element AP1000 machine. The performance has been compared with and benchmarked against the alternative cloning method of parallel processing [D. Brown, J.H.R. Clarke, M. Okuda and T. Yamazaki, J. Chem. Phys., 100 (1994) 1684] and results obtained using other scalar and vector machines. Two parallel versions of the SHAKE algorithm, which solves the bond length constraints problem, have been compared with regard to optimising the performance of this procedure.

  13. a Review of Point Clouds Segmentation and Classification Algorithms

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Menna, F.; Remondino, F.

    2017-02-01

    Today, 3D models and point clouds are very popular, being currently used in several fields, shared through the internet and even accessed on mobile phones. Despite their broad availability, there is still a relevant need for methods, preferably automatic, to provide 3D data with meaningful attributes that characterize and provide significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms to segment and classify 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software will be listed and shortly explained. For some algorithms, the results of the segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics will be discussed.

  14. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  15. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  16. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution in the primal and dual spaces of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  17. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and design the weights that optimize the TF-IDF algorithm output values, and the higher-scoring terms are selected as knowledge points. Course documents of "C programming language" are selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
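    A small sketch of the underlying TF-IDF ranking idea is given below. The VSM similarity weighting and Chinese word segmentation described in the record are omitted, and the documents and terms are made-up examples, so this is only an illustration of how high-scoring terms surface as candidate knowledge points.

```python
# Rank candidate knowledge points per document by a plain TF-IDF score.
import math
from collections import Counter

docs = {
    "pointers":  "pointer address memory pointer dereference",
    "loops":     "for loop while loop iteration counter",
    "functions": "function parameter return value function call",
}

tokenized = {name: text.split() for name, text in docs.items()}
df = Counter(term for terms in tokenized.values() for term in set(terms))   # document frequency
n_docs = len(docs)

def tfidf(doc_name):
    terms = tokenized[doc_name]
    tf = Counter(terms)
    return {t: (tf[t] / len(terms)) * math.log(n_docs / df[t]) for t in tf}

# the highest-scoring terms per document are kept as candidate knowledge points
for name in docs:
    ranked = sorted(tfidf(name).items(), key=lambda kv: kv[1], reverse=True)
    print(name, ranked[:2])
```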

  18. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization on the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in the scene set. By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low-rank Hessian matrix. This facilitates the use of large scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where, in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.

  19. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas while iterative methods clearly show their efficacy in these examples.
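    Since the record singles out Richardson-Lucy as the photon-counting special case, here is a minimal Richardson-Lucy sketch on a toy blurred pair of point sources; it preserves non-negativity by construction. The scale-invariant-divergence algorithm of Lanteri et al. (2015) is not shown, and the PSF and object below are invented for the demo.

```python
# Minimal Richardson-Lucy deconvolution on a synthetic two-point-source image.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=50):
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())      # flat, positive initial guess
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")   # multiplicative, stays >= 0
    return estimate

# toy object: two point sources blurred by a Gaussian PSF
obj = np.zeros((64, 64)); obj[20, 20] = 1.0; obj[40, 44] = 0.7
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
data = fftconvolve(obj, psf / psf.sum(), mode="same")
restored = richardson_lucy(data, psf)
print(np.unravel_index(restored.argmax(), restored.shape))   # brightest pixel near the (20, 20) source
```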

  20. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging due to the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques such as moving averages, splines, and Loess smoothing exist. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal trend-decomposition procedure based on Loess (STL) decomposes VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces similar results as previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise" including diurnal variations while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of straight line relative to STL trend line; (ii) range of STL remainder; and (iii) variance of STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
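    The sketch below illustrates the shape of the objective the record describes: an STL decomposition scored by a weighted sum of (i) the RMSE of the trend against a straight line, (ii) the remainder range, and (iii) the remainder variance. A genetic algorithm would search over the STL control parameters; here a tiny grid over the seasonal smoother length stands in, the weights are placeholders, the synthetic VWC-like series is invented, and the use of statsmodels' STL is an assumption rather than the authors' toolchain.

```python
# Score STL decompositions of a synthetic diurnal soil-moisture series with a weighted objective.
import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(3)
t = np.arange(10 * 24)                                   # ten days of hourly data
vwc = 0.25 + 0.0003 * t + 0.02 * np.sin(2 * np.pi * t / 24) + 0.005 * rng.standard_normal(t.size)

def objective(result, w=(1.0, 1.0, 1.0)):
    trend = np.asarray(result.trend)
    resid = np.asarray(result.resid)
    x = np.arange(trend.size)
    line = np.polyval(np.polyfit(x, trend, 1), x)        # straight-line fit to the trend
    rmse = np.sqrt(np.mean((trend - line) ** 2))
    return w[0] * rmse + w[1] * (resid.max() - resid.min()) + w[2] * resid.var()

best = min(
    ((seas, objective(STL(vwc, period=24, seasonal=seas).fit())) for seas in (7, 13, 25, 51)),
    key=lambda item: item[1],
)
print("best seasonal smoother length:", best)
```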

  1. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space
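    A toy sketch of the low-rank plus sparse idea behind LLSG follows: approximate the stack of ADI frames with a randomized low-rank term (starlight and speckles) and keep the entry-wise-thresholded residual as the sparse term holding the candidate planetary signal. This is schematic only; the published LLSG works on local annular patches and includes a Gaussian-noise term, neither of which is reproduced, and the synthetic cube below is invented.

```python
# Low-rank (randomized SVD) + sparse (thresholded residual) split of a synthetic ADI-like cube.
import numpy as np

def randomized_low_rank(M, rank, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    Y = M @ rng.standard_normal((M.shape[1], rank + oversample))   # sketch the range of M
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ M
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U[:, :rank]) * s[:rank] @ Vt[:rank]

rng = np.random.default_rng(4)
frames, pixels = 40, 900
speckles = rng.standard_normal((frames, 3)) @ rng.standard_normal((3, pixels))  # rank-3 "starlight"
planet = np.zeros((frames, pixels)); planet[:, 450] = 5.0                        # fixed bright pixel
cube = speckles + planet + 0.1 * rng.standard_normal((frames, pixels))

L = randomized_low_rank(cube, rank=3)                  # low-rank term absorbs the speckle field
residual = cube - L
S = np.where(np.abs(residual) > 3 * residual.std(), residual, 0.0)   # entry-wise threshold
print(int(np.argmax(np.abs(S).sum(axis=0))))           # column 450: the "planet" stays in the sparse term
```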

  2. Communication: Active space decomposition with multiple sites: Density matrix renormalization group algorithm

    SciTech Connect

    Parker, Shane M.; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μEh or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  3. Optimum and Heuristic Algorithms for Finite State Machine Decomposition and Partitioning

    DTIC Science & Technology

    1989-09-01

    Ashar, Pranav; Devadas, Srinivas; Newton, A. Richard

  4. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalizer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtimes about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.

  5. Fast Domain Decomposition Algorithm for Continuum Solvation Models: Energy and First Derivatives.

    PubMed

    Lipparini, Filippo; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Mennucci, Benedetta

    2013-08-13

    In this contribution, an efficient, parallel, linear scaling implementation of the conductor-like screening model (COSMO) is presented, following the domain decomposition (dd) algorithm recently proposed by three of us. The implementation is detailed and its linear scaling properties, both in computational cost and memory requirements, are demonstrated. Such behavior is also confirmed by several numerical examples on linear and globular large-sized systems, for which the calculation of the energy and of the forces is achieved with timings compatible with the use of polarizable continuum solvation for molecular dynamics simulations.

  6. Communication: Active space decomposition with multiple sites: density matrix renormalization group algorithm.

    PubMed

    Parker, Shane M; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μE(h) or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  7. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and concurrently simplest models of both the corresponding sub-systems and the system as a whole. In recent works two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for the linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows constructing nonlinear dynamic modes, but neglects delays of correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls essentially more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled mode-selection error that increases with the mode time scale. In the report we combine these two methods in such a way that the developed algorithm allows constructing nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of different methods of decomposition and discuss the abilities of nonlinear spatio-temporal modes for the construction of adequate and concurrently simplest ("optimal") models of climate systems

  8. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  9. Efficient detection and recognition algorithm of reference points in photogrammetry

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Liu, Gang; Zhu, Lichun; Li, Xiaofeng; Zhang, Yuhai; Shan, Siyu

    2016-04-01

    In photogrammetry, an approach for the automatic detection and recognition of reference points has been proposed to meet the requirements on detection and matching of reference points. The reference points used here are CCTs (circular coded targets), which consist of two parts: a round target point in the central region and a circular encoding band in the surrounding region. Firstly, the contours of the image are extracted, after which noise and disturbances in the image are filtered out by means of a series of criteria, such as the area of the contours, the correlation coefficient between two regions of contours, etc. Secondly, cubic spline interpolation is adopted to process the central contour region of the CCT. The contours of the interpolated image are extracted again, and then least squares ellipse fitting is performed to calculate the center coordinates of the CCT. Finally, the encoded value is obtained from the angle information of the circular encoding band of the CCT. The experimental results show that the presented algorithm achieves sub-pixel location precision for the CCT. Meanwhile, the recognition accuracy is high even when the background of the image is complex and full of disturbances, and the algorithm is robust. Furthermore, the runtime of the algorithm is fast.

  10. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Askok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomena is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
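    Of the four techniques listed, NMF is the easiest to sketch compactly; the example below uses Lee-Seung multiplicative updates to factor a stack of non-negative synthetic "spectra" into basis spectra W and activations H. It is not the SDA algorithm of the record, and the synthetic data are invented.

```python
# Non-negative matrix factorization with multiplicative (Lee-Seung) updates.
import numpy as np

def nmf(X, rank, iterations=300, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    for _ in range(iterations):
        H *= (W.T @ X) / (W.T @ W @ H + eps)       # multiplicative update keeps H >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)       # and likewise for W
    return W, H

# synthetic "spectra": 50 channels, 200 observations, 4 latent emission components
rng = np.random.default_rng(5)
W_true = rng.random((50, 4))
H_true = rng.random((4, 200))
X = W_true @ H_true
W, H = nmf(X, rank=4)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # small relative reconstruction error
```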

  11. An improved algorithm for balanced proper orthogonal decomposition using analytic tails

    NASA Astrophysics Data System (ADS)

    Tu, Jonathan; Rowley, Clarence

    2012-11-01

    Balanced proper orthogonal decomposition (BPOD) can be used in flow control applications to identify coherent structures of interest and to form reduced-order models. Doing so involves simulating impulse responses of the direct and adjoint systems, in order to compute factorizations of the empirical Gramians. We present a new variant of the BPOD algorithm that simultaneously reduces its computational cost and increases its accuracy. Dynamic mode decomposition (DMD) is used to identify the slow eigenvectors that dominate the long-time behavior of the impulse responses, and the contribution of these eigenvectors to the empirical Gramians is then accounted for analytically. This procedure greatly reduces the error inherent in truncating the impulse responses after a finite time. We demonstrate the effectiveness of this algorithm by applying it to the flow past a two-dimensional cylinder, at a Reynolds number of 100. Reduced-order models are computed for the restriction of the wake dynamics to the stable subspace. Models generated using the analytic tail method yield the same accuracy as those computed using traditional BPOD, with a 70% reduction in computation time. Supported by AFOSR grant FA9550-09-1-0257, NSF GRFP.

  12. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse exactly would require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation

  13. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
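
    For context, this type of point-source calculation usually reduces to the textbook CCD signal-to-noise expression. The sketch below uses that standard formula with illustrative parameter values; it is not taken from the cited report.

      import numpy as np

      def point_source_snr(signal_rate, sky_rate, dark_rate, read_noise, n_pix, t):
          """Textbook CCD SNR for a point source summed over n_pix pixels.

          signal_rate : source electrons/s collected in the aperture
          sky_rate    : background electrons/s per pixel
          dark_rate   : dark current electrons/s per pixel
          read_noise  : read noise (rms electrons) per pixel
          t           : exposure time in seconds
          """
          signal = signal_rate * t
          noise = np.sqrt(signal + n_pix * (sky_rate * t + dark_rate * t + read_noise**2))
          return signal / noise

      # Illustrative numbers only (not from the report)
      for t in (10, 100, 1000):
          print(t, point_source_snr(50.0, 2.0, 0.01, 5.0, 25, t))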

  14. A Novel Approach to Multiple Sequence Alignment Using Multiobjective Evolutionary Algorithm Based on Decomposition.

    PubMed

    Zhu, Huazheng; He, Zhongshi; Jia, Yuanyuan

    2016-03-01

    Multiple sequence alignment (MSA) is a fundamental and key step for implementing other tasks in bioinformatics, such as phylogenetic analyses, identification of conserved motifs and domains, structure prediction, etc. Although many methods exist for implementing MSA, no biologically perfect alignment approach has been found to date. This paper proposes a novel idea to perform MSA, where MSA is treated as a multiobjective optimization problem. A well-known multiobjective evolutionary algorithm framework based on decomposition is applied to solve MSA; the resulting algorithm is named MOMSA. In the MOMSA algorithm, we develop a new population initialization method and a novel mutation operator. We compare the performance of MOMSA with several alignment methods based on evolutionary algorithms, including VDGA, GAPAM, and IMSA, and also with state-of-the-art progressive alignment approaches, such as MSAprobs, Probalign, MAFFT, Procons, Clustal omega, T-Coffee, Kalign2, MUSCLE, FSA, Dialign, PRANK, and CLUSTALW. These alignment algorithms are tested on the benchmark datasets BAliBASE 2.0 and BAliBASE 3.0. Experimental results show that, according to statistical analyses, MOMSA obtains significantly better alignments than VDGA and GAPAM on most of the test cases, produces better alignments than IMSA in terms of TC scores, and is comparable with the leading progressive alignment approaches in terms of alignment quality.
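
    The decomposition idea behind such frameworks is to convert the multiobjective problem into many scalar subproblems. The sketch below shows the weighted Tchebycheff scalarization commonly used for this purpose; the objective values, weight vector and reference point are illustrative and are not tied to the actual MSA objectives used in MOMSA.

      import numpy as np

      def tchebycheff(f, weights, z_star):
          """Weighted Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
          return np.max(weights * np.abs(f - z_star), axis=-1)

      # A small population of candidate solutions scored on two objectives (to be minimized)
      f_pop = np.array([[0.30, 0.70],
                        [0.45, 0.40],
                        [0.80, 0.10]])
      z_star = f_pop.min(axis=0)                      # current ideal point
      weights = np.array([0.5, 0.5])                  # one subproblem's weight vector

      g = tchebycheff(f_pop, weights, z_star)
      best = f_pop[np.argmin(g)]                      # candidate kept for this subproblem
      print(g, best)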

  15. Electrocardiogram Signal Denoising Using Extreme-Point Symmetric Mode Decomposition and Nonlocal Means

    PubMed Central

    Tian, Xiaoying; Li, Yongshuai; Zhou, Huan; Li, Xiang; Chen, Lisha; Zhang, Xuming

    2016-01-01

    Electrocardiogram (ECG) signals contain a great deal of essential information which can be utilized by physicians for the diagnosis of heart diseases. Unfortunately, ECG signals are inevitably corrupted by noise which will severely affect the accuracy of cardiovascular disease diagnosis. Existing ECG signal denoising methods based on wavelet shrinkage, empirical mode decomposition and nonlocal means (NLM) cannot provide sufficient noise reduction or adequate detail preservation, especially under heavy noise corruption. To address this problem, we have proposed a hybrid ECG signal denoising scheme by combining extreme-point symmetric mode decomposition (ESMD) with NLM. In the proposed method, the noisy ECG signals are first decomposed into several intrinsic mode functions (IMFs) and an adaptive global mean using ESMD. Then, the first several IMFs are filtered by the NLM method according to their frequency content, with the QRS complex detected from these IMFs as the dominant feature of the ECG signal, while the remaining IMFs are left unprocessed. The denoised IMFs and unprocessed IMFs are combined to produce the final denoised ECG signals. Experiments on both simulated ECG signals and real ECG signals from the MIT-BIH database demonstrate that the proposed method can suppress noise in ECG signals effectively while preserving the details very well, and it outperforms several state-of-the-art ECG signal denoising methods in terms of signal-to-noise ratio (SNR), root mean squared error (RMSE), percent root mean square difference (PRD) and mean opinion score (MOS) error index. PMID:27681729
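
    The NLM stage applied to the noisy low-order IMFs can be illustrated with a minimal one-dimensional nonlocal-means filter. This is a generic sketch with assumed patch size, search window, bandwidth and noise level; it is not the authors' ESMD+NLM pipeline.

      import numpy as np

      def nlm_1d(x, patch=5, search=20, h=0.3, sigma=0.2):
          """Minimal 1-D nonlocal means: each sample becomes a weighted average of samples
          whose surrounding patches look similar (Gaussian weights on patch distance)."""
          n, half = len(x), patch // 2
          xp = np.pad(x, half, mode="reflect")
          out = np.empty(n)
          for i in range(n):
              p_i = xp[i:i + patch]
              lo, hi = max(0, i - search), min(n, i + search + 1)
              w = np.empty(hi - lo)
              for k, j in enumerate(range(lo, hi)):
                  d2 = np.mean((p_i - xp[j:j + patch]) ** 2)
                  w[k] = np.exp(-max(d2 - 2 * sigma**2, 0.0) / (h * h))
              out[i] = np.dot(w, x[lo:hi]) / w.sum()
          return out

      # Noisy synthetic component standing in for a decomposed ECG mode
      t = np.linspace(0, 1, 500)
      clean = np.sin(2 * np.pi * 5 * t)
      noisy = clean + 0.2 * np.random.default_rng(1).standard_normal(t.size)
      denoised = nlm_1d(noisy)
      print(np.sqrt(np.mean((noisy - clean) ** 2)),      # RMSE vs. the clean signal, before
            np.sqrt(np.mean((denoised - clean) ** 2)))   # and after filtering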

  16. Research on Loran-C Sky Wave Delay Estimation Using Eigen-decomposition Algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Hu, Y. H.; Liang, Q.

    2009-04-01

    A novel signal processing technique using the Eigenvector algorithm for estimating sky wave delays in a Loran-C receiver is presented in this paper. This provides the basis on which to design a Loran-C receiver capable of adjusting its sampling point adaptively to the optimal value. The effect of the sky wave delay on the estimation accuracy of the algorithm is studied and compared with the IFFT technique. Simulation results show that this algorithm clearly provides better resolution and sharper peaks than the IFFT. Finally, experimental results using off-air data confirm these conclusions.
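
    A subspace (eigendecomposition) delay estimator of this general kind can be sketched as follows: form a covariance matrix across frequency bins, split signal and noise subspaces, and scan a delay-dependent steering vector against the noise subspace. The simulated two-path model and all parameter values below are illustrative stand-ins for the Loran-C setting, not the paper's receiver processing.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(2)
      M, N, P = 64, 200, 2                              # frequency bins, snapshots, paths
      freqs = np.arange(M) * 1e4                        # assumed 10 kHz bin spacing
      true_delays = np.array([30e-6, 65e-6])            # two path delays (illustrative)

      def steering(tau):
          return np.exp(-2j * np.pi * freqs * tau)

      # Simulated snapshots: random complex path amplitudes plus noise
      A = np.column_stack([steering(t) for t in true_delays])
      S = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
      Y = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

      R = Y @ Y.conj().T / N                            # sample covariance
      w, V = np.linalg.eigh(R)                          # ascending eigenvalues
      En = V[:, : M - P]                                # noise subspace

      taus = np.linspace(0, 100e-6, 2000)
      pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in taus])
      peaks, _ = find_peaks(pseudo)
      top = peaks[np.argsort(pseudo[peaks])[-P:]]
      print(np.sort(taus[top]) * 1e6)                   # estimated delays in microseconds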

  17. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features

    PubMed Central

    Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-01-01

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without requiring a proper initial value. PMID:28800096

  18. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    PubMed

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without requiring a proper initial value.
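
    As background to GF-ICP, a bare-bones point-to-point ICP iteration (nearest-neighbour correspondences plus an SVD-based rigid alignment) looks like the sketch below. The geometric-feature weighting that distinguishes GF-ICP is not included, and the synthetic point clouds are illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                  # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(src, dst, iters=30):
          tree = cKDTree(dst)
          cur = src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)              # nearest-neighbour correspondences
              R, t = best_rigid_transform(cur, dst[idx])
              cur = cur @ R.T + t
          return cur

      def mean_nn_dist(a, b):
          d, _ = cKDTree(b).query(a)
          return d.mean()

      rng = np.random.default_rng(3)
      dst = rng.standard_normal((500, 3))
      theta = np.deg2rad(10.0)
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0, 0, 1]])
      src = (dst - 0.2) @ R_true.T                  # rotated and shifted copy
      aligned = icp(src, dst)
      # Mean nearest-neighbour distance before and after ICP (should decrease)
      print(mean_nn_dist(src, dst), "->", mean_nn_dist(aligned, dst))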

  19. Multimode algorithm for detection and tracking of point targets

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, Ronda; Er, Meng H.; Deshpande, Suyog D.; Chan, Philip

    1999-07-01

    This paper deals with the problem of detection and tracking of point-targets from a sequence of IR images against slowly moving clouds as well as structural background. Many algorithms are reported in the literature for tracking sizeable targets with good results. However, the difficulties in tracking point-targets arise from the fact that they are not easily discernible from point-like clutter. Though the point-targets are moving, it is very difficult to detect and track them with reduced false alarm rates, because of the non-stationarity of the IR clutter, changing target statistics and sensor motion. The focus of research in this area is to reduce the false alarm rate to an acceptable level. In certain situations not detecting a true target is acceptable, but declaring a false target as a true one may not be acceptable. Although there are many approaches to tackle this problem, no single method works well in all situations. In this paper, we present a multi-mode algorithm involving scene stabilization using image registration, 2D spatial filtering based on the continuous wavelet transform, adaptive thresholding, accumulation of the thresholded frames and processing of the accumulated frame to obtain the final target trajectories. It is assumed that most of the targets occupy a couple of pixels. Head-on moving and maneuvering targets are not considered. The algorithm has been tested successfully with the available database and the results are presented.

  1. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.

  2. Face pose tracking using the four-point algorithm

    NASA Astrophysics Data System (ADS)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.

  3. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DEMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DEMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  4. An improved infrared image enhancement algorithm based on multi-scale decomposition

    NASA Astrophysics Data System (ADS)

    Zhang, Honghui; Luo, Haibo; Yu, Xin-rong; Ding, Qing-hai

    2014-11-01

    Due to the limitations of infrared imaging components and atmospheric radiation, infrared images suffer from poor contrast, blurring and heavy noise. To address these problems, a multi-scale image enhancement algorithm is proposed. The main principle is as follows: firstly, on the basis of the multi-scale image decomposition, an edge-preserving spatial filter is used instead of the Gaussian filter of the original version, and the scale-dependent factor is adjusted with weighted information. Secondly, contrast is equalized by applying nonlinear amplification. Thirdly, each subband image is formed as the weighted sum of the sampled subband image and the same subband image subsampled and then upsampled by a factor of two. Finally, image reconstruction is applied. Experimental results show that the proposed method can enhance the original infrared image effectively and improve its contrast; moreover, it can also preserve the details and edges of the image well.

  5. New Advances in the Study of the Proximal Point Algorithm

    NASA Astrophysics Data System (ADS)

    Moroşanu, Gheorghe

    2010-09-01

    Consider in a real Hilbert space H the inexact, Halpern-type, proximal point algorithm xn+1 = αn u + (1 - αn) Jβn xn + en, n = 0, 1, …, (H-PPA), where u, x0 ∈ H are given points, Jβn = (I + βn A)^(-1) is the resolvent of a given maximal monotone operator A, and (en) is the error sequence, under new assumptions on αn ∈ (0,1) and βn ∈ (0,1). Several strong convergence results for the H-PPA are presented under the general condition that the error sequence converges strongly to zero, thus improving Rockafellar's classical summability condition on (‖en‖) that has been extensively used so far for different versions of the proximal point algorithm. Our results extend and improve some recent ones. These results can be applied to approximate minimizers of convex functionals. Convergence rate estimates are established for a sequence approximating the minimum value of such a functional.
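
    As a concrete illustration, when A is the subdifferential of a convex function the resolvent Jβn becomes a proximal operator, and the H-PPA can be run directly. The sketch below applies it to f(x) = |x|, whose prox is soft-thresholding, with simple illustrative choices of αn and βn and the errors en set to zero; these choices are not the assumptions analysed in the paper.

      import numpy as np

      def prox_abs(x, beta):
          """Proximal operator (resolvent of the subdifferential) of f(x) = |x|:
          soft-thresholding with threshold beta."""
          return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

      def halpern_ppa(x0, u, n_iter=200):
          """Halpern-type proximal point iteration x_{n+1} = a_n*u + (1 - a_n)*J_{b_n}(x_n)."""
          x = x0
          for n in range(n_iter):
              alpha = 1.0 / (n + 2)          # alpha_n -> 0, illustrative choice
              beta = 0.5                     # constant resolvent parameter, illustrative
              x = alpha * u + (1.0 - alpha) * prox_abs(x, beta)
          return x

      print(halpern_ppa(x0=5.0, u=0.0, n_iter=500))   # approaches the minimizer x = 0 of |x|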

  6. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.

  7. Parallel algorithm for computing points on a computation front hyperplane

    NASA Astrophysics Data System (ADS)

    Krasnov, M. M.

    2015-01-01

    A parallel algorithm for computing points on a computation front hyperplane is described. This task arises in the computation of a quantity defined on a multidimensional rectangular domain. Three-dimensional domains are usually discussed, but the material is given in the general form for any number of dimensions of at least two. When the values of a quantity at different points are internally independent (which is frequently the case), the corresponding computations are independent as well and can be performed in parallel. However, if there are internal dependences (as, for example, in the Gauss-Seidel method for systems of linear equations), then the order of scanning points of the domain is an important issue. A conventional approach in this case is to form a computation front hyperplane (a usual plane in the three-dimensional case and a line in the two-dimensional case) that moves linearly across the domain at a certain angle. At every step in the course of motion of this hyperplane, its intersection points with the domain can be treated independently and, hence, in parallel, but the steps themselves are executed sequentially. At different steps, the intersection of the hyperplane with the entire domain can have a rather complex geometry and the search for all points of the domain lying on the hyperplane at a given step is a nontrivial problem. This problem (i.e., the computation of the coordinates of points lying in the intersection of the domain with the hyperplane at a given step in the course of hyperplane motion) is addressed below. The computations over the points of the hyperplane can be executed in parallel.
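
    In the three-dimensional case with unit dependence steps, the points on the front at step s are simply the grid points (i, j, k) with i + j + k = s. The sketch below enumerates them and sweeps a trivial recurrence in wavefront order; it is a serial illustration of the ordering, not the paper's parallel algorithm.

      import numpy as np

      def wavefront_points(step, shape):
          """All grid points (i, j, k) inside `shape` lying on the hyperplane i + j + k = step."""
          ni, nj, nk = shape
          pts = []
          for i in range(max(0, step - (nj - 1) - (nk - 1)), min(ni - 1, step) + 1):
              for j in range(max(0, step - i - (nk - 1)), min(nj - 1, step - i) + 1):
                  pts.append((i, j, step - i - j))
          return pts

      # Toy recurrence swept in wavefront order: each value depends only on "earlier" neighbours,
      # so all points on one front could be processed in parallel.
      shape = (4, 5, 6)
      u = np.zeros(shape)
      u[0, 0, 0] = 1.0
      for s in range(1, sum(d - 1 for d in shape) + 1):
          for i, j, k in wavefront_points(s, shape):
              u[i, j, k] = max(u[i - 1, j, k] if i else 0.0,
                               u[i, j - 1, k] if j else 0.0,
                               u[i, j, k - 1] if k else 0.0)
      print(u[-1, -1, -1])    # the value propagated from the origin reaches the far corner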

  8. A novel algorithm for generating libration point orbits about the collinear points

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Shan, Jinjun

    2014-09-01

    This paper presents a numerical algorithm that can generate long-term libration point orbits (LPOs) and the transfer orbits from the parking orbits to the LPOs in the circular-restricted three-body problem (CR3BP) and the full solar system model without initial guesses. The families of the quasi-periodic LPOs in the CR3BP can also be constructed with this algorithm. By using the dynamical behavior of the LPO, the transfer orbit from the parking orbit to the LPO is generated using a bisection method. At the same time, a short segment of the target LPO connected with the transfer orbit is obtained; the short segment of the LPO is then extended by correcting the state towards its adjacent point on the stable manifold of the target LPO with a differential evolution algorithm. By implementing the correction strategy repeatedly, the LPOs can be extended to any length as needed. Moreover, combined with a continuation procedure, this algorithm can be used to generate the families of the quasi-periodic LPOs in the CR3BP.

  9. A Three-level BDDC algorithm for saddle point problems

    SciTech Connect

    Tu, X.

    2008-12-10

    BDDC algorithms have previously been extended to the saddle point problems arising from mixed formulations of elliptic and incompressible Stokes problems. In these two-level BDDC algorithms, all iterates are required to be in a benign space, a subspace in which the preconditioned operators are positive definite. This requirement can lead to large coarse problems, which have to be generated and factored by a direct solver at the beginning of the computation and can ultimately become a bottleneck. An additional level is introduced in this paper to solve the coarse problem approximately and to remove this difficulty. This three-level BDDC algorithm keeps all iterates in the benign space and conjugate gradient methods can therefore be used to accelerate the convergence. This work is an extension of the three-level BDDC methods for standard finite element discretization of elliptic problems and the same rate of convergence is obtained for the mixed formulation of the same problems. An estimate of the condition number for this three-level BDDC method is provided and numerical experiments are discussed.

  10. Algorithm for detecting important changes in lidar point clouds

    NASA Astrophysics Data System (ADS)

    Korchev, Dmitriy; Owechko, Yuri

    2014-06-01

    Protection of installations in hostile environments is a very critical part of military and civilian operations that requires a significant amount of security personnel to be deployed around the clock. Any electronic change detection system for detection of threats must have a high probability of detection and low false alarm rates to be useful in the presence of natural motion of trees and vegetation due to wind. We propose a 3D change detection system based on a LIDAR sensor that can reliably and robustly detect threats and intrusions in different environments including surrounding trees, vegetation, and other natural landscape features. Our LIDAR processing algorithm finds human activity and human-caused changes not only in open spaces but also in heavily vegetated areas hidden from direct observation by 2D imaging sensors. The algorithm processes a sequence of point clouds called frames. Every 3D frame is mapped into a 2D horizontal rectangular grid. Each cell of this grid is processed to calculate the distribution of the points mapped into it. The spatial differences are detected by analyzing the differences in distributions of the corresponding cells that belong to different frames. Several heuristic filters are considered to reduce false detections caused by natural changes in the environment.
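
    The per-cell comparison of point distributions can be sketched compactly: map each frame's 3D points onto a 2D grid, summarize the points in each cell (here just count and mean height), and flag cells whose summaries differ between frames. The grid size, thresholds and synthetic frames below are assumptions, and the heuristic false-alarm filters described in the abstract are omitted.

      import numpy as np

      def cell_stats(points, origin, cell, shape):
          """Map 3-D points onto a 2-D grid; return per-cell point count and mean height."""
          ij = np.floor((points[:, :2] - origin) / cell).astype(int)
          ok = (ij >= 0).all(axis=1) & (ij[:, 0] < shape[0]) & (ij[:, 1] < shape[1])
          ij, z = ij[ok], points[ok, 2]
          flat = ij[:, 0] * shape[1] + ij[:, 1]
          count = np.bincount(flat, minlength=shape[0] * shape[1]).astype(float)
          zsum = np.bincount(flat, weights=z, minlength=shape[0] * shape[1])
          mean_z = np.divide(zsum, count, out=np.zeros_like(zsum), where=count > 0)
          return count.reshape(shape), mean_z.reshape(shape)

      rng = np.random.default_rng(4)
      frame_a = np.c_[rng.uniform(0, 50, (20000, 2)), rng.uniform(0, 0.3, 20000)]  # bare ground
      frame_b = frame_a.copy()
      frame_b[:200, 2] += 1.7                                  # a person-sized change
      frame_b[:200, :2] = rng.uniform(10, 12, (200, 2))        # concentrated in one area

      origin, cell, shape = np.array([0.0, 0.0]), 1.0, (50, 50)
      ca, za = cell_stats(frame_a, origin, cell, shape)
      cb, zb = cell_stats(frame_b, origin, cell, shape)
      changed = (np.abs(zb - za) > 0.5) | (np.abs(cb - ca) > 20)
      print(np.argwhere(changed))                              # grid cells flagged as changed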

  11. New point matching algorithm for panoramic reflectance images

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong; Zlatanova, Sisi

    2007-11-01

    Much attention is paid to the registration of terrestrial point clouds nowadays. Research is carried out towards improved efficiency and automation of the registration process. The most important part of registration is finding correspondence. The panoramic reflectance images are generated according to the angular coordinates and reflectance value of each 3D point of 360° full scans. Since such an image is similar to a black and white photo, it is possible to implement image matching on this kind of image. Therefore, this paper reports a new corresponding point matching algorithm for panoramic reflectance images. Firstly, the SIFT (Scale Invariant Feature Transform) method is employed for extracting distinctive invariant features from panoramic images that can be used to perform reliable matching between different views of an object or scene. The correspondences are then identified by finding, for each keypoint from the first image, its nearest neighbors among those in the second image. The rigid geometric invariance derived from the point cloud is used to prune false correspondences. Finally, an iterative process is employed to include new matches in the computation of the transformation parameters until the accuracy reaches a predefined threshold. The approach is tested with panoramic reflectance images (indoor and outdoor scenes) acquired by the laser scanner FARO LS 880.
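
    The SIFT extraction and nearest-neighbour matching stage is available off the shelf in OpenCV; the snippet below is a generic sketch of that stage (detector plus ratio-test matching), assuming a recent OpenCV build with SIFT available, and does not include the paper's geometric-invariance pruning or iterative refinement. The file names are placeholders.

      import cv2

      # Placeholder file names for two panoramic reflectance images
      img1 = cv2.imread("scan_a_reflectance.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("scan_b_reflectance.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # Nearest-neighbour matching with Lowe's ratio test to reject ambiguous matches
      matcher = cv2.BFMatcher(cv2.NORM_L2)
      matches = matcher.knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      pts1 = [kp1[m.queryIdx].pt for m in good]
      pts2 = [kp2[m.trainIdx].pt for m in good]
      print(len(good), "tentative correspondences")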

  12. High-stability algorithm for the three-pattern decomposition of global atmospheric circulation

    NASA Astrophysics Data System (ADS)

    Cheng, Jianbo; Gao, Chenbin; Hu, Shujuan; Feng, Guolin

    2017-07-01

    In order to study the atmospheric circulation from a global perspective, the three-pattern decomposition of global atmospheric circulation (TPDGAC) has been proposed in our previous studies. In this work, to easily and accurately apply the TPDGAC in the diagnostic analysis of atmospheric circulation, a high-stability algorithm for the TPDGAC is presented. By using the TPDGAC, the global atmospheric circulation is decomposed into the three-dimensional (3D) horizontal, meridional, and zonal circulations (three-pattern circulations). In particular, the global zonal mean meridional circulation is essentially the three-cell meridional circulation. To demonstrate the rationality and correctness of the proposed numerical algorithm, the climatology of the three-pattern circulations and the evolution characteristics of the strength and meridional width of the Hadley circulation during 1979-2015 have been investigated using five reanalysis datasets. Our findings reveal that the three-pattern circulations capture the main features of the Rossby, Hadley, and Walker circulations. The Hadley circulation shows a significant intensification during boreal winter in the Northern Hemisphere and shifts significantly poleward during boreal (austral) summer and autumn in the Northern (Southern) Hemisphere.

  13. Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.

    PubMed

    Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay

    2017-02-01

    Heart sound (HS) signals always interfere during the recording of lung sound (LS) signals. This obscures the features of LS signals and creates confusion about the pathological state, if any, of the lungs. In this work, a new method is proposed for the reduction of heart sound interference which is based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, first the mixed signal is split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values of the gap thus produced are predicted by a new fast Fourier transform (FFT) based prediction algorithm, and the time domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. The experiments have been conducted on simulated and recorded HS-corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method is evaluated by qualitative and quantitative analysis of the results. It is found that the proposed method is superior to the baseline method in terms of quantitative and qualitative measurement. The developed method gives better results compared to the baseline method for different SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, signal to deviation ratio (SDR) of 9.8262, and normalized maximum amplitude error (NMAE) of 26.94 for a 0 dB SNR value. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained by this wavelet transform. Secondly, the LiveWire shortest path is calculated using a control-point-set direction search that exploits the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest path values, thus reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantage of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, with the advantage of the optimal path searching method based on the control-point-set direction search, which reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction and reflects the boundary information of the image more comprehensively. All of the methods mentioned above play a significant role in improving the execution efficiency and robustness of the algorithm.

  15. A maximum power point tracking algorithm for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage and current characteristic of a photovoltaic (PV) cell is highly nonlinear and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP has not been published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage current characteristic and the constant power contours are plotted on the same voltage current plane, the point of tangency between the device characteristic and the constant power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive in nature and is supposed to have fast convergence and the least misadjustment. There are two parts in its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show the results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking in the case of a commercially available solar panel MSX-60. The power electronics circuit is simulated by PSIM software.
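
    The tangency condition described here amounts to finding the voltage at which dP/dV = 0 on the PV characteristic. The sketch below solves that condition by bisection for a single-diode model with made-up parameters (not the MSX-60 values used in the paper).

      import numpy as np

      # Illustrative single-diode PV model: I(V) = Iph - I0*(exp(V/(n*Ns*Vt)) - 1)
      Iph, I0, n, Ns, Vt = 3.8, 1e-9, 1.3, 36, 0.02585    # assumed parameters

      def current(V):
          return Iph - I0 * (np.exp(V / (n * Ns * Vt)) - 1.0)

      def dPdV(V, h=1e-4):
          """Numerical derivative of P(V) = V * I(V)."""
          return ((V + h) * current(V + h) - (V - h) * current(V - h)) / (2 * h)

      # Bisection for the root of dP/dV between 0 V and the open-circuit voltage
      lo, hi = 1e-3, n * Ns * Vt * np.log(Iph / I0 + 1.0)   # Voc from the diode equation
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          if dPdV(lo) * dPdV(mid) <= 0:
              hi = mid
          else:
              lo = mid

      V_mpp = 0.5 * (lo + hi)
      print(V_mpp, V_mpp * current(V_mpp))     # MPP voltage and power for this model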

  16. A fast algorithm based on the domain decomposition method for scattering analysis of electrically large objects

    NASA Astrophysics Data System (ADS)

    Yin, Lei; Hong, Wei

    2002-01-01

    By combining the finite difference (FD) method with the domain decomposition method (DDM), a fast and rigorous algorithm is presented in this paper for the scattering analysis of extremely large objects. Unlike conventional methods, such as the method of moments (MOM) and the FD method, the new algorithm decomposes an original large domain into small subdomains and chooses the most efficient method to solve the electromagnetic (EM) equations on each subdomain individually. Therefore the computational complexity and scale are substantially reduced. The iterative procedure of the algorithm and the implementation of virtual boundary conditions are discussed in detail. During scattering analysis of an electrically large cylinder, the conformal band computational domain along the circumference of the cylinder is decomposed into sections, which results in a series of band matrices with very narrow bands. Compared with the traditional FD method, it decreases the consumption of computer memory and CPU time from O(N²) to O(N/m) and O(N), respectively, where m is the number of subdomains and N is the number of nodes or unknowns. Furthermore, this method can be easily applied to the analysis of arbitrarily shaped cylinders because the subdomains can be divided into any possible form. On the other hand, increasing the number of subdomains will hardly increase the computing time, which makes it possible to analyze the EM scattering problems of extremely large cylinders using only a PC. The EM scattering by two-dimensional cylinders with a maximum perimeter of 100,000 wavelengths is analyzed. Moreover, this method is very suitable for parallel computation, which can further promote the computational efficiency.

  17. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    PubMed

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).

  18. A PARALIND Decomposition-Based Coherent Two-Dimensional Direction of Arrival Estimation Algorithm for Acoustic Vector-Sensor Arrays

    PubMed Central

    Zhang, Xiaofei; Zhou, Min; Li, Jianfeng

    2013-01-01

    In this paper, we combine the acoustic vector-sensor array parameter estimation problem with the parallel profiles with linear dependencies (PARALIND) model, which was originally applied in biology and chemistry. Exploiting the PARALIND decomposition approach, we propose a blind coherent two-dimensional direction of arrival (2D-DOA) estimation algorithm for arbitrarily spaced acoustic vector-sensor arrays subject to unknown locations. The proposed algorithm works well to achieve automatically paired azimuth and elevation angles for coherent and incoherent angle estimation of acoustic vector-sensor arrays, as well as the paired correlated matrix of the sources. Our algorithm, in contrast with conventional coherent angle estimation algorithms such as the forward backward spatial smoothing (FBSS) estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, not only has much better angle estimation performance, even for closely spaced sources, but is also applicable to arbitrary arrays. Simulation results verify the effectiveness of our algorithm. PMID:23604030

  19. Nondyadic decomposition algorithm with Meyer's wavelet packets: an application to EEG signal

    NASA Astrophysics Data System (ADS)

    Carre, Philippe; Richard, Noel; Fernandez-Maloigne, Christine; Paquereau, Joel

    1999-10-01

    In this paper, we propose an original decomposition scheme based on Meyer's wavelets. In contrast to the classical wavelet packet analysis technique, the decomposition is an adaptive segmentation of the frequency axis which does not use a filter bank. This permits greater flexibility in the definition of the frequency bands. The decomposition computes all possible partitions of a sequential space, not only those that come from a dyadic decomposition. Our technique is applied to the electroencephalogram signal; here the purpose is to extract a best basis for the frequency decomposition. This study is part of a multimodal functional cerebral imagery project.

  20. Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter J.

    2016-10-01

    Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes, able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of (Demmel et al. in SIAM J Sci Comput 34(1):206-239, 2012), which is further used as a basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimum required communication makes it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
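
    The core of a TSQR-based SVD can be shown with a serial sketch: factor row blocks of the tall-and-skinny matrix independently, combine the small R factors with one more QR, and recover the full SVD from the final R. The block layout below is serial and illustrative; the cited algorithm performs the same reduction across MPI ranks.

      import numpy as np

      def tsqr_svd(A, n_blocks=4):
          """SVD of a tall-and-skinny matrix via a (serial) TSQR reduction."""
          k = A.shape[1]
          blocks = np.array_split(A, n_blocks, axis=0)          # row blocks (one per "rank")
          qs, rs = zip(*(np.linalg.qr(b) for b in blocks))      # independent local QRs
          Q2, R = np.linalg.qr(np.vstack(rs))                   # combine the small R factors
          Ur, s, Vt = np.linalg.svd(R)                          # k x k SVD of the final R
          # Recover the tall left singular vectors: U = diag(Q_i) * Q2 * Ur
          U = np.vstack([q @ Q2[i * k:(i + 1) * k] for i, q in enumerate(qs)]) @ Ur
          return U, s, Vt

      rng = np.random.default_rng(5)
      A = rng.standard_normal((10000, 20))                      # tall-and-skinny data matrix
      U, s, Vt = tsqr_svd(A)
      print(np.allclose(U @ np.diag(s) @ Vt, A),                # reconstruction check
            np.allclose(s, np.linalg.svd(A, compute_uv=False)))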

  1. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description will contain no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.

  2. Experimental Design for Groundwater Pumping Estimation Using a Genetic Algorithm (GA) and Proper Orthogonal Decomposition (POD)

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Cheng, W.; Yeh, W. W.

    2010-12-01

    This study optimizes observation well locations and sampling frequencies for the purpose of estimating unknown groundwater extraction in an aquifer system. Proper orthogonal decomposition (POD) is used to reduce the groundwater flow model, thus reducing the computation burden and data storage space associated with solving this problem for heavily discretized models. This reduced model can store a significant amount of system information in a much smaller reduced state vector. Along with the sensitivity equation method, the proposed approach can efficiently compute the Jacobian matrix that forms the information matrix associated with the experimental design. The criterion adopted for experimental design is the maximization of the trace of the weighted information matrix. Under certain conditions, this is equivalent to the classical A-optimality criterion established in experimental design. A genetic algorithm (GA) is used to optimize the observation well locations and sampling frequencies for maximizing the collected information from the hydraulic head sampling at the observation wells. We applied the proposed approach to a hypothetical 30,000-node groundwater aquifer system. We studied the relationship among the number of observation wells, observation well locations, sampling frequencies, and the collected information for estimating unknown groundwater extraction.

  3. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated selection of acupuncture points for EHF therapy is proposed. The prescription formed by the algorithm for the automated selection of points for acupunctural actions is advisory in character. Clinical investigations showed that application of the developed algorithm in EHF therapy makes it possible to normalize the energetic state of the meridians and to effectively address many problems of organism functioning.

  4. Asymptotic behavior of two algorithms for solving common fixed point problems

    NASA Astrophysics Data System (ADS)

    Zaslavski, Alexander J.

    2017-04-01

    The common fixed point problem is to find a common fixed point of a finite family of mappings. In the present paper our goal is to obtain its approximate solution using two perturbed algorithms. The first algorithm is an iterative method for problems in a metric space, while the second one is a dynamic string-averaging algorithm for problems in a Hilbert space.

  5. The implement of Talmud property allocation algorithm based on graphic point-segment way

    NASA Astrophysics Data System (ADS)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, this paper analyzes the implementation process of the algorithm from the perspective of a graphic point-segment representation and designs a point-segment Talmud property allocation algorithm. The core of the allocation algorithm is then implemented in Java, and an Android application is built to provide a visual interface.

  6. A patch-based tensor decomposition algorithm for M-FISH image classification.

    PubMed

    Wang, Min; Huang, Ting-Zhu; Li, Jingyao; Wang, Yu-Ping

    2017-06-01

    Multiplex-fluorescence in situ hybridization (M-FISH) is a chromosome imaging technique which can be used to detect chromosomal abnormalities such as translocations, deletions, duplications, and inversions. Chromosome classification from M-FISH imaging data is a key step to implement the technique. In the classified M-FISH image, each pixel in a chromosome is labeled with a class index and drawn with a pseudo-color so that geneticists can easily conduct diagnosis, for example, identifying chromosomal translocations by examining color changes between chromosomes. However, the information of pixels in a neighborhood is often overlooked by existing approaches. In this work, we assume that the pixels in a patch belong to the same class and use the patch to represent the center pixel's class information, by which we can use the correlations of neighboring pixels and the structural information across different spectral channels for the classification. On the basis of this assumption, we propose a patch-based classification algorithm by using higher order singular value decomposition (HOSVD). The developed method has been tested on a comprehensive M-FISH database that we established, demonstrating improved performance. When compared with other pixel-wise M-FISH image classifiers such as fuzzy c-means clustering (FCM), adaptive fuzzy c-means clustering (AFCM), improved adaptive fuzzy c-means clustering (IAFCM), and sparse representation classification (SparseRC) methods, the proposed method gave the highest correct classification ratio (CCR), which can translate into improved diagnosis of genetic diseases and cancers. © 2016 International Society for Advancement of Cytometry.
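
    The HOSVD underlying the patch-based representation can be sketched in a few lines of NumPy: compute the left singular vectors of each mode unfolding, then form the core tensor by multiplying with the transposed factor matrices. The patch tensor below is synthetic, and the classification step built on top of the decomposition is not shown.

      import numpy as np

      def unfold(T, mode):
          """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
          return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

      def mode_multiply(T, M, mode):
          """Multiply tensor T by matrix M along `mode` (n-mode product)."""
          Tm = M @ unfold(T, mode)
          new_shape = (M.shape[0],) + tuple(np.delete(T.shape, mode))
          return np.moveaxis(Tm.reshape(new_shape), 0, mode)

      def hosvd(T, ranks):
          """Truncated higher-order SVD: factor matrices plus core tensor."""
          factors = []
          for mode, r in enumerate(ranks):
              U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
              factors.append(U[:, :r])
          core = T
          for mode, U in enumerate(factors):
              core = mode_multiply(core, U.T, mode)
          return core, factors

      # Synthetic stand-in for a patch tensor: (patch_h, patch_w, spectral_channels)
      rng = np.random.default_rng(6)
      patch = rng.standard_normal((7, 7, 5))
      core, factors = hosvd(patch, ranks=(3, 3, 2))
      approx = core
      for mode, U in enumerate(factors):
          approx = mode_multiply(approx, U, mode)
      print(patch.shape, core.shape, np.linalg.norm(patch - approx) / np.linalg.norm(patch))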

  7. Validation of the pulse decomposition analysis algorithm using central arterial blood pressure

    PubMed Central

    2014-01-01

    Background There is a significant need for continuous noninvasive blood pressure (cNIBP) monitoring, especially for anesthetized surgery and ICU recovery. cNIBP systems could lower costs and expand the use of continuous blood pressure monitoring, lowering risk and improving outcomes. The test system examined here is the CareTaker® and a pulse contour analysis algorithm, Pulse Decomposition Analysis (PDA). PDA’s premise is that the peripheral arterial pressure pulse is a superposition of five individual component pressure pulses that are due to the left ventricular ejection and reflections and re-reflections from only two reflection sites within the central arteries. The hypothesis examined here is that the model’s principal parameters P2P1 and T13 can be correlated with, respectively, systolic and pulse pressures. Methods Central arterial blood pressures of patients (38 m/25 f, mean age: 62.7 y, SD: 11.5 y, mean height: 172.3 cm, SD: 9.7 cm, mean weight: 86.8 kg, SD: 20.1 kg) undergoing cardiac catheterization were monitored using central line catheters while the PDA parameters were extracted from the arterial pulse signal obtained non-invasively using the CareTaker system. Results Qualitative validation of the model was achieved with the direct observation of the five component pressure pulses in the central arteries using central line catheters. Statistically significant correlations between P2P1 and systole and T13 and pulse pressure were established (systole: R square: 0.92 (p < 0.0001), diastole: R square: 0.78 (p < 0.0001)). Bland-Altman comparisons between blood pressures obtained through the conversion of PDA parameters to blood pressures of non-invasively obtained pulse signatures with catheter-obtained blood pressures fell within the trend guidelines of the Association for the Advancement of Medical Instrumentation SP-10 standard (standard deviation: 8 mmHg (systole: 5.87 mmHg, diastole: 5.69 mmHg)). Conclusions The results indicate that arterial blood pressure can be accurately measured and tracked.

  8. Validation of the pulse decomposition analysis algorithm using central arterial blood pressure.

    PubMed

    Baruch, Martin C; Kalantari, Kambiz; Gerdt, David W; Adkins, Charles M

    2014-07-08

    There is a significant need for continuous noninvasive blood pressure (cNIBP) monitoring, especially for anesthetized surgery and ICU recovery. cNIBP systems could lower costs and expand the use of continuous blood pressure monitoring, lowering risk and improving outcomes.The test system examined here is the CareTaker® and a pulse contour analysis algorithm, Pulse Decomposition Analysis (PDA). PDA's premise is that the peripheral arterial pressure pulse is a superposition of five individual component pressure pulses that are due to the left ventricular ejection and reflections and re-reflections from only two reflection sites within the central arteries.The hypothesis examined here is that the model's principal parameters P2P1 and T13 can be correlated with, respectively, systolic and pulse pressures. Central arterial blood pressures of patients (38 m/25 f, mean age: 62.7 y, SD: 11.5 y, mean height: 172.3 cm, SD: 9.7 cm, mean weight: 86.8 kg, SD: 20.1 kg) undergoing cardiac catheterization were monitored using central line catheters while the PDA parameters were extracted from the arterial pulse signal obtained non-invasively using CareTaker system. Qualitative validation of the model was achieved with the direct observation of the five component pressure pulses in the central arteries using central line catheters. Statistically significant correlations between P2P1 and systole and T13 and pulse pressure were established (systole: R square: 0.92 (p < 0.0001), diastole: R square: 0.78 (p < 0.0001). Bland-Altman comparisons between blood pressures obtained through the conversion of PDA parameters to blood pressures of non-invasively obtained pulse signatures with catheter-obtained blood pressures fell within the trend guidelines of the Association for the Advancement of Medical Instrumentation SP-10 standard (standard deviation: 8 mmHg(systole: 5.87 mmHg, diastole: 5.69 mmHg)). The results indicate that arterial blood pressure can be accurately measured and tracked

  9. LIFT: a nested decomposition algorithm for solving lower block triangular linear programs. Report AMD-859. [In PL/I for IBM 370

    SciTech Connect

    Ament, D; Ho, J; Loute, E; Remmelswaal, M

    1980-06-01

    Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)

  10. Application of spectral decomposition algorithm for mapping water quality in a turbid lake (Lake Kasumigaura, Japan) from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio

    The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA), based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g. using single band, band ratio, or arithmetic calculation of band) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, for mapping chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained by a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra: i.e., spectral end-members) derived from bio-optical modeling, we can directly apply the model to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key point for successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.
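
    The spectral linear mixture model at the heart of an SDA-type approach expresses each observed spectrum as a non-negative combination of end-member spectra (the standard reflectance spectra). A minimal unmixing step using non-negative least squares is sketched below; the end-member matrix and observed spectrum are synthetic placeholders rather than the tank-experiment SRSs.

      import numpy as np
      from scipy.optimize import nnls

      # Columns are end-member spectra sampled at the sensor bands (synthetic placeholders
      # standing in for the SRSs of phytoplankton, sediment, CDOM and water).
      bands = 6
      rng = np.random.default_rng(7)
      endmembers = np.abs(rng.standard_normal((bands, 4)))

      true_abundance = np.array([0.5, 0.3, 0.1, 0.1])
      observed = endmembers @ true_abundance + 0.01 * rng.standard_normal(bands)

      # Non-negative least squares: observed ~= endmembers @ abundance, abundance >= 0
      abundance, residual = nnls(endmembers, observed)
      abundance /= abundance.sum()                 # optional sum-to-one normalization
      print(abundance, residual)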

  11. An efficient floating-point to fixed-point conversion process for biometric algorithm on DaVinci DSP architecture

    NASA Astrophysics Data System (ADS)

    Konvalinka, Ira; Quddus, Azhar; Asraf, Daniel

    2009-05-01

    Today there is no direct path for the conversion of a floating-point algorithm implementation to an optimized fixed-point implementation. This paper proposes a novel and efficient methodology for Floating-point to Fixed-point Conversion (FFC) of a biometric Fingerprint Algorithm Library (FAL) on the fixed-point DaVinci processor. A general FFC research task is streamlined into smaller tasks which can be accomplished with lower effort and higher certainty. Formally specified in this paper is the optimization target in FFC: to preserve floating-point accuracy and to reduce execution time, while preserving the majority of the algorithm code base. A comprehensive eight-point strategy is formulated to achieve that target. Both a local optimization flow (focused on the most time consuming routines) and a global optimization flow (to optimize across multiple routines) are used. Characteristic phases in the FFC activity are presented using data from applying the proposed FFC methodology to FAL, starting with the target optimization specification, moving through speed optimization breakthroughs, and ending with validation of FAL accuracy after the execution time optimization. The FAL implementation resulted in a biometric verification time reduction by over a factor of 5, with negligible impact on accuracy. Any algorithm developer facing the task of implementing a floating-point algorithm on the DaVinci DSP is expected to benefit from this presentation.
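
    A miniature version of the conversion-and-validation loop can be illustrated as follows: pick a Q format, replace the floating-point arithmetic of a routine by integer multiplies and shifts, and measure the error against the floating-point reference. The Q15 format and the dot-product routine below are illustrative stand-ins, not part of the FAL library.

      import numpy as np

      FRAC = 15                                    # assumed Q15 format
      SCALE = 1 << FRAC

      def to_q15(x):
          return np.clip(np.round(np.asarray(x) * SCALE), -SCALE, SCALE - 1).astype(np.int32)

      def dot_float(a, b):
          return float(np.dot(a, b))

      def dot_q15(a_fx, b_fx):
          """Fixed-point dot product: accumulate in 64-bit, then shift back to Q15."""
          acc = np.sum(a_fx.astype(np.int64) * b_fx.astype(np.int64))
          return int(acc >> FRAC)

      rng = np.random.default_rng(8)
      a = rng.uniform(-0.9, 0.9, 64)
      b = rng.uniform(-0.9, 0.9, 64)

      ref = dot_float(a, b)
      fx = dot_q15(to_q15(a), to_q15(b)) / SCALE   # back to float only for the comparison
      print(ref, fx, abs(ref - fx))                # validation step of the conversion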

  12. Technical Note: MRI only prostate radiotherapy planning using the statistical decomposition algorithm

    SciTech Connect

    Siversson, Carl; Nordström, Fredrik; Nilsson, Terese; Nyholm, Tufve; Jonsson, Joakim; Gunnlaugsson, Adalsteinn; Olsson, Lars E.

    2015-10-15

    Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations, directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2 weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to assure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean average error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. Average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning. The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties

  13. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
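
    The core construction above can be written down in a few lines. The following dense toy example forms the reordered 2 x 2 block system and the exact Schur complement; it is only a sketch and does not reflect the sparse, approximate, parallel implementation the paper actually develops.

```python
# Dense toy sketch of the Schur complement S = A_ii - A_is A_ss^{-1} A_si
# coupling the interface unknowns; sizes and matrix entries are arbitrary.
import numpy as np

n_sub, n_int = 6, 2                                   # subdomain / interface unknowns
rng = np.random.default_rng(0)
A = rng.standard_normal((n_sub + n_int,) * 2) + 8 * np.eye(n_sub + n_int)

A_ss = A[:n_sub, :n_sub]          # subdomain-subdomain block
A_si = A[:n_sub, n_sub:]          # subdomain-interface coupling
A_is = A[n_sub:, :n_sub]
A_ii = A[n_sub:, n_sub:]          # interface-interface block

S = A_ii - A_is @ np.linalg.solve(A_ss, A_si)         # exact Schur complement
print(S)
```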

  14. Spatiospectral decomposition of multi-subject EEG: evaluating blind source separation algorithms on real and realistic simulated data

    PubMed Central

    Bridwell, David A.; Rachakonda, Srinivas; Rogers, F. Silva; Pearlson, Godfrey D.; Calhoun, Vince D.

    2016-01-01

    Electroencephalographic (EEG) oscillations predominantly appear with periods between 1 second (1 Hz) and 20 ms (50 Hz), and are subdivided into distinct frequency bands which appear to correspond to distinct cognitive processes. A variety of blind source separation (BSS) approaches have been developed and implemented within the past few decades, providing an improved isolation of these distinct processes. Within the present study, we demonstrate the feasibility of multi-subject BSS for deriving distinct EEG spatiospectral maps. Multi-subject spatiospectral EEG decompositions were implemented using the EEGIFT toolbox (http:/mialab.mrn.org/software/eegift/) with real and realistic simulated datasets (the simulation code is available at http://mialab.mrn.org/software/simeeg). Twelve different decomposition algorithms were evaluated. Within the simulated data, WASOBI and COMBI appeared to be the best performing algorithms, as they decomposed the four sources across a range of component numbers and noise levels. RADICAL ICA, ERBM, INFOMAX ICA, ICA EBM, FAST ICA, and JADE OPAC decomposed a subset of sources within a smaller range of component numbers and noise levels. INFOMAX ICA, FAST ICA, WASOBI, and COMBI generated the largest number of stable sources within the real dataset and provided partially distinct views of underlying spatiospectral maps. We recommend the multi-subject BSS approach and the selected algorithms for further studies examining distinct spatiospectral networks within healthy and clinical populations. PMID:26909688

  16. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  17. Formulation and error analysis for a generalized image point correspondence algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  18. A hardware-oriented algorithm for floating-point function generation

    NASA Technical Reports Server (NTRS)

    O'Grady, E. Pearse; Young, Baek-Kyu

    1991-01-01

    An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
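
    A software analogue of the two key operations named above (breakpoint-interval identification followed by segment evaluation) can be sketched as follows; numpy's searchsorted stands in for the memory-based interval lookup of the hardware design, and the breakpoints and linear segments are invented for illustration.

```python
# Illustrative piecewise-linear function generation over arbitrary breakpoints.
import numpy as np

breakpoints = np.array([0.0, 0.5, 1.2, 2.0, 3.5])   # arbitrary, ascending
values = np.sin(breakpoints)                        # stored function samples

def piecewise_eval(x: float) -> float:
    i = np.searchsorted(breakpoints, x) - 1         # breakpoint interval containing x
    i = int(np.clip(i, 0, len(breakpoints) - 2))
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return (1 - t) * values[i] + t * values[i + 1]  # linear segment evaluation

print(piecewise_eval(1.0), "vs", np.sin(1.0))
```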

  20. A single-point model from SO(3) decomposition of the axisymmetric mean-flow coupled two-point equations

    NASA Astrophysics Data System (ADS)

    Clark, Timothy; Rubinstein, Robert; Kurien, Susan

    2016-11-01

    The fluctuating-pressure-strain correlations present a significant challenge for engineering turbulence models. For incompressible flow, the pressure is an intrinsically two-point quantity (represented as Green's function, integrated over the field), and therefore representing the implied scale-dependence in a one-point model is difficult. The pioneering work of Launder, Reece and Rodi (1975) presented a model that satisfied the tensor symmetries and dimensional consistency with the underlying Green's function solution, and described the assumptions embedded in their one-point model. Among the constraints of such a model is its inability to capture scale-dependent anisotropic flow development. Restricting our attention to the case of axisymmetric mean-field strains, we present a one-point model of the mean-flow couplings, including the pressure-strain terms, starting from a directional (tensorially isotropic) and polarization (tensorially anisotropic and trace-free) representation of the two-point correlation equations, truncated to the lowest order terms. The model results are then compared to simulations performed using arbitrary orders of spherical harmonic functions from which the exact solution may be obtained to desired accuracy.

  1. Design method and algorithms for directed self-assembly aware via layout decomposition in sub-7 nm circuits

    NASA Astrophysics Data System (ADS)

    Karageorgos, Ioannis; Ryckaert, Julien; Gronheid, Roel; Tung, Maryann C.; Wong, H.-S. Philip; Karageorgos, Evangelos; Croes, Kris; Bekaert, Joost; Vandenberghe, Geert; Stucchi, Michele; Dehaene, Wim

    2016-10-01

    Major advancements in the directed self-assembly (DSA) of block copolymers have shown the technique's strong potential for via layer patterning in advanced technology nodes. Molecular scale pattern precision along with low cost processing promotes DSA technology as a great candidate for complementing conventional photolithography. Our studies show that decomposition of via layers with 193-nm immersion lithography in realistic circuits below the 7-nm node would require a prohibitive number of multiple patterning steps. The grouping of vias through templated DSA can resolve local conflicts in high density areas, limiting the number of required masks, and thus cutting a great deal of the associated costs. A design method for DSA via patterning in sub-7-nm nodes is discussed. We present options to expand the list of usable DSA templates and we formulate cost functions and algorithms for the optimal DSA-aware via layout decomposition. The proposed method works a posteriori, after place-and-route, allowing for fast practical implementation. We tested this method on a fully routed 32-bit processor designed for sub-7 nm technology nodes. Our results demonstrate a reduction of up to four lithography masks when compared to conventional non-DSA-aware decomposition.

  2. An Efficient Exact Quantum Algorithm for the Integer Square-free Decomposition Problem.

    PubMed

    Li, Jun; Peng, Xinhua; Du, Jiangfeng; Suter, Dieter

    2012-01-01

    Quantum computers are known to be qualitatively more powerful than classical computers, but so far only a small number of different algorithms have been discovered that actually use this potential. It would therefore be highly desirable to develop other types of quantum algorithms that widen the range of possible applications. Here we propose an efficient and exact quantum algorithm for finding the square-free part of a large integer - a problem for which no efficient classical algorithm exists. The algorithm relies on properties of Gauss sums and uses the quantum Fourier transform. We give an explicit quantum network for the algorithm. Our algorithm introduces new concepts and methods that have not been used in quantum information processing so far and may be applicable to a wider class of problems.

  3. Linearly convergent inexact proximal point algorithm for minimization. Revision 1

    SciTech Connect

    Zhu, C.

    1993-08-01

    In this paper, we propose a linearly convergent inexact PPA for minimization, where the inner loop stops when the relative reduction of the residue (defined as the objective value minus the optimal value) of the inner loop subproblem meets some preassigned constant. This inner loop stopping criterion can be achieved in a fixed number of iterations if the inner loop algorithm has a linear rate on the regularized subproblems. Therefore the algorithm is able to avoid the computationally expensive process of solving the inner loop subproblems exactly or asymptotically accurately; a process required by most of the other linearly convergent PPAs. As applications of this inexact PPA, we develop linearly convergent iteration schemes for minimizing functions with singular Hessian matrices, and for solving hemiquadratic extended linear-quadratic programming problems. We also prove that Correa-Lemaréchal's "implementable form" of PPA converges linearly under mild conditions.
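
    The flavour of the inner-loop stopping rule can be seen in the toy iteration below, which regularizes a function with a singular Hessian and stops each inner loop once a preassigned relative reduction is reached. The gradient norm of the subproblem is used here as a stand-in for the residue defined in the abstract, and all step sizes and tolerances are illustrative assumptions, not the paper's scheme.

```python
# Toy inexact proximal point iteration; not the paper's algorithm or analysis.
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 0.0]])        # singular Hessian example
b = np.array([1.0, 0.0])
grad = lambda x: A @ x - b                    # gradient of f(x) = 0.5 x'Ax - b'x

def inexact_ppa(x, lam=1.0, sigma=0.1, outer=50, inner_max=100):
    for _ in range(outer):
        xk = x.copy()
        gphi = lambda z: grad(z) + (z - xk) / lam      # regularized subproblem gradient
        r0 = np.linalg.norm(gphi(x))
        for _ in range(inner_max):                     # inexact inner loop
            x = x - 0.2 * gphi(x)                      # plain gradient step
            if np.linalg.norm(gphi(x)) <= sigma * r0:  # relative reduction reached
                break
    return x

print(inexact_ppa(np.array([5.0, 3.0])))      # approaches [0.5, 3.0]
```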

  4. Comparison between one-point calibration and two-point calibration approaches in a continuous glucose monitoring algorithm.

    PubMed

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl; Hejlesen, Ole

    2014-07-01

    The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative differences [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. The sensor readings calibrated with the 1-point calibration approach were found to have higher accuracy than those calibrated with the 2-point calibration approach.
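
    The difference between the two calibration forms can be illustrated with a simple linear sensor model, where glucose is recovered from sensor current through a sensitivity and a background term. The numbers and the linear model below are invented for illustration and are not the SCGM1 calibration used in the study.

```python
# Hedged sketch: 2-point calibration fits sensitivity and background current,
# 1-point calibration assumes zero background and fits sensitivity only.
i1, g1 = 12.0, 5.0        # paired (sensor current [nA], reference glucose [mmol/L])
i2, g2 = 30.0, 14.0

# two-point: current = i0 + sensitivity * glucose
sens_2pt = (i2 - i1) / (g2 - g1)
i0_2pt = i1 - sens_2pt * g1

# one-point: background current assumed zero
sens_1pt = i1 / g1

current = 20.0
print("2-point estimate:", (current - i0_2pt) / sens_2pt)
print("1-point estimate:", current / sens_1pt)
```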

  5. Preservation of quadrature Doppler signals from bidirectional slow blood flow close to the vessel wall using an adaptive decomposition algorithm.

    PubMed

    Zhang, Yufeng; Shi, Xinling; Zhang, Kexin; Chen, Jianhua

    2009-03-01

    A novel approach based on the phasing-filter (PF) technique and the empirical mode decomposition (EMD) algorithm is proposed to preserve quadrature Doppler signal components from bidirectional slow blood flow close to the vessel wall. Bidirectional mixed Doppler ultrasound signals, which were echoed from the forward and reverse moving blood and vessel wall, were initially separated to avoid the phase distortion of quadrature Doppler signals (which is induced from direct decomposition by the nonlinear EMD processing). Separated unidirectional mixed Doppler signals were decomposed into intrinsic mode functions (IMFs) using the EMD algorithm and the relevant IMFs that contribute to blood flow components were identified and summed to give the blood flow signals, whereby only the components from the bidirectional slow blood flow close to the vessel wall were retained independently. The complex quadrature Doppler blood flow signal was reconstructed from a combination of the extracted unidirectional Doppler blood flow signals. The proposed approach was applied to simulated and clinical Doppler signals. It is concluded from the experimental results that this approach is practical for the preservation of quadrature Doppler signal components from the bidirectional slow blood flow close to the vessel wall, and may provide more diagnostic information for the diagnosis and treatment of vascular diseases.

  6. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  7. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    NASA Astrophysics Data System (ADS)

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-09-01

    The field of complex network clustering has been gaining considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly into different membrane structures. The evolutionary algorithm is carried out within the membrane structures. The population is eliminated by the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are to be minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising.

  8. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    PubMed Central

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-01-01

    The field of complex network clustering has been gaining considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly into different membrane structures. The evolutionary algorithm is carried out within the membrane structures. The population is eliminated by the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are to be minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising. PMID:27670156

  9. The QR-Decomposition Based Least-Squares Lattice Algorithm for Adaptive Filtering

    DTIC Science & Technology

    1990-07-01

    Copyright © Controller HMSO London 1990. ... brings together the work of Lewis [9] and McWhirter [14]. Lewis began with the standard (covariance domain) multi-channel least squares lattice equations ... decomposition. As the bulk of the calculation is exactly the computation of the reflection coefficients, Lewis proceeded no further with this re-formulation ...

  10. Multidirectional hybrid algorithm for the split common fixed point problem and application to the split common null point problem.

    PubMed

    Li, Xia; Guo, Meifang; Su, Yongfu

    2016-01-01

    In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. The application of the result is to consider the split common null point problem of maximal monotone operators in Banach spaces. Strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.

  11. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-03-01

    A laser point cloud contains only intensity information, so for visual interpretation it is necessary to obtain color information from another sensor. Cameras can provide texture, color, and other information about the corresponding object. Points colored from the corresponding pixels in digital images can be used to generate a color point-cloud, which benefits the visualization, classification and modeling of point-clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), so the principles and processes for generating a color point-cloud differ between systems. The most prominent feature of panoramic images is their 360-degree field of view in the horizontal direction, which captures as much image information around the camera as possible. In this paper, we introduce a method to generate a color point-cloud from a panoramic image and a laser point-cloud, and derive the equations relating points in panoramic images to points in laser point-clouds. The fusion of the panoramic image and the laser point-cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae in this paper are correct.
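
    The collinearity idea described above amounts to mapping each laser point onto the panoramic image sphere and reading off a pixel. The sketch below does this for an equirectangular panorama; the image layout, frame transform and panorama size are assumptions for illustration rather than the calibration of any particular MMS.

```python
# Project a laser point into assumed equirectangular panorama coordinates.
import numpy as np

W, H = 4096, 2048          # assumed panorama size
R = np.eye(3)              # laser-to-camera rotation (assumed known from calibration)
t = np.zeros(3)            # laser-to-camera translation (assumed known)

def project_to_panorama(p_laser):
    p = R @ p_laser + t                    # point in the panoramic camera frame
    d = p / np.linalg.norm(p)              # direction on the unit sphere (collinearity)
    lon = np.arctan2(d[0], d[2])           # azimuth
    lat = np.arcsin(d[1])                  # elevation
    u = (lon / (2 * np.pi) + 0.5) * W      # pixel column
    v = (0.5 - lat / np.pi) * H            # pixel row
    return u, v

print(project_to_panorama(np.array([2.0, -0.5, 5.0])))
```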

  12. An efficient, robust, domain-decomposition algorithm for particle Monte Carlo

    NASA Astrophysics Data System (ADS)

    Brunner, Thomas A.; Brantley, Patrick S.

    2009-06-01

    A previously described algorithm [T.A. Brunner, T.J. Urbatsch, T.M. Evans, N.A. Gentile, Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo, Journal of Computational Physics 212 (2) (2006) 527-539] for doing domain decomposed particle Monte Carlo calculations in the context of thermal radiation transport has been improved. It has been extended to support cases where the number of particles in a time step are unknown at the beginning of the time step. This situation arises when various physical processes, such as neutron transport, can generate additional particles during the time step, or when particle splitting is used for variance reduction. Additionally, several race conditions that existed in the previous algorithm and could cause code hangs have been fixed. This new algorithm is believed to be robust against all race conditions. The parallel scalability of the new algorithm remains excellent.

  13. An efficient, robust, domain-decomposition algorithm for particle Monte Carlo

    SciTech Connect

    Brunner, Thomas A. Brantley, Patrick S.

    2009-06-01

    A previously described algorithm [T.A. Brunner, T.J. Urbatsch, T.M. Evans, N.A. Gentile, Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo, Journal of Computational Physics 212 (2) (2006) 527-539] for doing domain decomposed particle Monte Carlo calculations in the context of thermal radiation transport has been improved. It has been extended to support cases where the number of particles in a time step are unknown at the beginning of the time step. This situation arises when various physical processes, such as neutron transport, can generate additional particles during the time step, or when particle splitting is used for variance reduction. Additionally, several race conditions that existed in the previous algorithm and could cause code hangs have been fixed. This new algorithm is believed to be robust against all race conditions. The parallel scalability of the new algorithm remains excellent.

  14. Parallel algorithms for separation of two sets of points and recognition of digital convex polygons

    SciTech Connect

    Sarkar, D. ); Stojmenovic, I. )

    1992-04-01

    Given two finite sets of points in a plane, the polygon separation problem is to construct a separating convex k-gon with the smallest k. In this paper, we present a parallel algorithm for the polygon separation problem. The algorithm runs in O(log n) time on a CREW PRAM with n processors, where n is the number of points in the two given sets. The algorithm is cost-optimal, since Ω(n log n) is a lower bound on the time needed by any sequential algorithm. We apply this algorithm to the problem of recognizing whether a digital region is a convex polygon. The algorithm in this paper constructs one such polygon with possibly two more edges than the minimal one.

  15. An ISAR imaging algorithm for the space satellite based on empirical mode decomposition theory

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Dong, Chun-zhu

    2014-11-01

    Currently, high-resolution imaging of space satellites is a popular topic in the field of radar technology. In contrast with regular targets, a satellite target moves along its trajectory while its solar panel substrate simultaneously changes its orientation toward the sun to obtain energy. To address this imaging problem, a signal separation and imaging approach based on empirical mode decomposition (EMD) theory is proposed; the approach separates the signals of the two parts of the satellite target, the main body and the solar panel substrate, and forms an image of the target. Simulation experiments demonstrate the validity of the proposed method.

  16. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges which are oriented in the vertical, horizontal, and two diagonal directions. It was incorrectly detecting points on edges which are not in the four main directions (vertical, horizontal, and two diagonals). The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as

  17. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration

    PubMed Central

    Huang, Chih-Hung

    2017-01-01

    With the increasing popularity of RGB-depth (RGB-D) sensors, research on the use of RGB-D sensors to reconstruct three-dimensional (3D) indoor scenes has gained more and more attention. In this paper, an automatic point cloud registration algorithm is proposed to efficiently handle the task of 3D indoor scene reconstruction using pan-tilt platforms at a fixed position. The proposed algorithm aims to align multiple point clouds using extrinsic parameters of the RGB-D camera obtained from every preset pan-tilt control point. A computationally efficient global registration method is proposed based on transformation matrices formed by the offline calibrated extrinsic parameters. Then, a local registration method, which is an optional operation in the proposed algorithm, is employed to refine the preliminary alignment result. Experimental results validate the quality and computational efficiency of the proposed point cloud alignment algorithm by comparing it with two state-of-the-art methods. PMID:28809787
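
    The global-registration step described above reduces to applying each control point's calibrated extrinsic matrix to its cloud before merging. A minimal sketch, with dummy matrices standing in for the offline-calibrated extrinsics, is given below; the optional local refinement stage is not shown.

```python
# Merge per-view point clouds into a common frame using 4x4 extrinsics.
import numpy as np

def global_register(clouds, extrinsics):
    """clouds: list of (N_i, 3) arrays; extrinsics: list of 4x4 camera-to-world matrices."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homog = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
        merged.append((homog @ T.T)[:, :3])
    return np.vstack(merged)

cloud_a, cloud_b = np.random.rand(100, 3), np.random.rand(100, 3)
T_a, T_b = np.eye(4), np.eye(4)            # placeholders for calibrated extrinsics
print(global_register([cloud_a, cloud_b], [T_a, T_b]).shape)
```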

  18. Bayesian Nonnegative CP Decomposition-Based Feature Extraction Algorithm for Drowsiness Detection.

    PubMed

    Qian, Dong; Wang, Bei; Qing, Xiangyun; Zhang, Tao; Zhang, Yu; Wang, Xingyu; Nakamura, Masatoshi

    2017-08-01

    Daytime short naps involve physiological processes, such as alertness, drowsiness and sleep. The study of the relationship between drowsiness and napping based on physiological signals is a great way to gain a better understanding of the periodic rhythms of physiological states. A model of Bayesian nonnegative CP decomposition (BNCPD) was proposed to extract common multiway features from group-level electroencephalogram (EEG) signals. As an extension of the nonnegative CP decomposition, the BNCPD model involves prior distributions of factor matrices, while the underlying CP rank can be determined automatically based on a Bayesian nonparametric approach. In terms of computational speed, variational inference was applied to approximate the posterior distributions of unknowns. Extensive simulations on synthetic data illustrated the capability of our model to recover the true CP rank. As a real-world application, the performance of drowsiness detection during a daytime short nap using BNCPD-based features was compared with that of other traditional feature extraction methods. Experimental results indicated that the BNCPD model outperformed other methods for feature extraction in terms of two evaluation metrics, as well as across different parameter settings. Our approach is likely to be a useful tool for automatic CP rank determination and for offering plausible multiway physiological information about individual states.

  19. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation.

    PubMed

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-11-30

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model.

  20. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  1. A BVMF-B algorithm for nonconvex nonlinear regularized decomposition of spectral x-ray projection images

    NASA Astrophysics Data System (ADS)

    Pham, Mai Quyen; Ducros, Nicolas; Nicolas, Barbara

    2017-03-01

    Spectral computed tomography (CT) exploits the measurements obtained by a photon counting detector to reconstruct the chemical composition of an object. In particular, spectral CT has shown a very good ability to image K-edge contrast agents. Spectral CT is an inverse problem that can be addressed by solving two subproblems, namely the basis material decomposition (BMD) problem and the tomographic reconstruction problem. In this work, we focus on the BMD problem, which is ill-posed and nonlinear. The BMD problem is classically either linearized, which enables reconstruction based on compressed sensing methods, or nonlinearly solved with no explicit regularization scheme. In a previous communication, we proposed a nonlinear regularized Gauss-Newton (GN) algorithm [1]. However, this algorithm can only be applied to convex regularization functionals. In particular, the ℓp (p < 1) norm or the ℓ0 quasi-norm, which are known to provide sparse solutions, cannot be considered. In order to better promote the sparsity of contrast agent images, we propose a nonlinear reconstruction framework that can handle nonconvex regularization terms. In particular, the ℓ1/ℓ2 norm ratio is considered [2]. The problem is solved iteratively using the block variable metric forward-backward (BVMF-B) algorithm [3], which can also enforce the positivity of the material images. The proposed method is validated on numerical data simulated in a thorax phantom made of soft tissue, bone and gadolinium, which is scanned with a 90-kV x-ray tube and a 3-bin photon counting detector.

  2. Improved scaling of time-evolving block-decimation algorithm through reduced-rank randomized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Tamascelli, D.; Rosenbach, R.; Plenio, M. B.

    2015-06-01

    When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, make use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency with some real-world examples to which TEBD can be successfully applied and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
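
    The ingredient that produces the speed-up is the randomized range finder: project onto a small random subspace, orthonormalize, and run a deterministic SVD on the much smaller projected matrix. The sketch below is a bare-bones Halko-style version without power iterations or error control, not the RRSVD routine used in the paper.

```python
# Bare-bones randomized SVD sketch (rank k with a little oversampling).
import numpy as np

def rrsvd(A, k, oversample=10):
    m, n = A.shape
    Omega = np.random.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                          # approximate range of A
    B = Q.T @ A                                             # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((400, 20)) @ rng.standard_normal((20, 300))  # rank-20 matrix
U, s, Vt = rrsvd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))          # ~0 for rank-20 input
```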

  3. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700

  4. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes.
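
    For context, the closest-point baseline that IMLP generalizes can be written very compactly. The sketch below is plain point-to-point ICP with an SVD-based rigid fit; it does not include the anisotropic noise model, the PD-tree search, or the GTLS registration step that distinguish IMLP.

```python
# Plain ICP baseline (nearest-neighbour correspondences + Kabsch rigid fit).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_fit(P, Q):
    """Least-squares R, t aligning matched rows of P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)              # closest-point correspondences
        R, t = best_rigid_fit(src, target[idx])
        src = src @ R.T + t                   # apply the incremental registration
    return src

target = np.random.rand(200, 3)
source = target + np.array([0.10, -0.05, 0.20])     # translated copy of the target
print(np.abs(icp(source, target) - target).max())   # should be close to zero
```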

  5. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to the traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  6. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to the direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of the HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11×11 to 19×19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.

  7. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT.

    PubMed

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K

    2016-02-07

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to the direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of the HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11×11 to 19×19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
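
    The direct matrix-inversion baseline that the comparison above starts from can be sketched in a few lines: each pixel's low/high-kVp HU pair is mapped to two basis-material images by inverting a 2 x 2 calibration matrix. The calibration values below are invented placeholders, and the sketch contains none of the HYPR-LR filtering or non-local-mean regularization that defines HYPR-NLM.

```python
# Per-pixel two-material decomposition by direct 2x2 matrix inversion.
import numpy as np

# rows: low/high kVp, columns: enhancement of the two basis materials (made up)
M = np.array([[900.0, 60.0],
              [450.0, 50.0]])
M_inv = np.linalg.inv(M)

low_kvp = np.random.normal(200.0, 20.0, size=(64, 64))    # toy HU images
high_kvp = np.random.normal(120.0, 20.0, size=(64, 64))

hu = np.stack([low_kvp, high_kvp], axis=-1)               # (H, W, 2)
fractions = hu @ M_inv.T                                  # per-pixel material fractions
iodine_map, water_map = fractions[..., 0], fractions[..., 1]
print(iodine_map.mean(), water_map.mean())
```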

  8. Parameter Space of Fixed Points of the Damped Driven Pendulum Susceptible to Control of Chaos Algorithms

    NASA Astrophysics Data System (ADS)

    Dittmore, Andrew; Trail, Collin; Olsen, Thomas; Wiener, Richard J.

    2003-11-01

    We have previously demonstrated the experimental control of chaos in a Modified Taylor-Couette system with hourglass geometry (Richard J. Wiener et al., Phys. Rev. Lett. 83, 2340 (1999)). Identifying fixed points susceptible to algorithms for the control of chaos is key. We seek to learn about this process in the accessible numerical model of the damped, driven pendulum. Following Baker (Gregory L. Baker, Am. J. Phys. 63, 832 (1995)), we seek points susceptible to the OGY (E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990)) algorithm. We automate the search for fixed points that are candidates for control. We present comparisons of the space of candidate fixed points with the bifurcation diagrams and Poincare sections of the system. We demonstrate control at fixed points which do not appear on the attractor. We also show that the control algorithm may be employed to shift the system between non-communicating branches of the attractor.

  9. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogeneous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are on the order of half a second on a desktop computer. The precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature.
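
    For readers who want to try a reliability-guided 2D unwrapper of this general family, scikit-image (assumed to be installed) ships one as skimage.restoration.unwrap_phase; the usage sketch below is illustrative and is not the authors' own implementation.

```python
# Wrap a smooth synthetic phase surface and unwrap it again.
import numpy as np
from skimage.restoration import unwrap_phase

yy, xx = np.mgrid[:512, :512]
true_phase = 8e-5 * (xx - 256) ** 2 + 5e-5 * (yy - 256) ** 2   # smooth surface
wrapped = np.angle(np.exp(1j * true_phase))                    # wrap into (-pi, pi]

recovered = unwrap_phase(wrapped)
# unwrapping recovers the surface up to a constant multiple of 2*pi
print(np.ptp(recovered - true_phase))
```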

  10. Double-patterning decomposition, design compliance, and verification algorithms at 32nm hp

    NASA Astrophysics Data System (ADS)

    Tritchkov, Alexander; Glotov, Petr; Komirenko, Sergiy; Sahouria, Emile; Torres, Andres; Seoud, Ahmed; Wiaux, Vincent

    2008-10-01

    Double patterning (DP) technology is one of the main candidates for RET of critical layers at 32nm hp. DP technology is a strong RET technique that must be considered throughout the IC design and post-tapeout flows. We present a complete DP technology strategy including a DRC/DFM component, physical synthesis support and mask synthesis. In particular, the methodology contains: - DRC-like layout DP compliance and design verification functions; - A parameterization scheme that codifies manufacturing knowledge and capability; - Judicious use of physical effect simulation to improve double-patterning quality; - An efficient, high capacity mask synthesis function for post-tapeout processing; - A verification function to determine the correctness and quality of a DP solution. Double patterning technology requires decomposition of the design to relax the pitch and effectively allows processing with k1 factors smaller than the theoretical Rayleigh limit of 0.25. The traditional DP process, Litho-Etch-Litho-Etch (LELE) [1], requires an additional develop and etch step, which eliminates the resolution degradation that occurs in multiple-exposure processes in the same resist layer. The theoretical k1 for a double-patterning technology applied to a 32nm half-pitch design using a 1.35NA 193nm imaging system is 0.44, whereas the k1 for single patterning of this same design would be 0.22 [2], which is sub-resolution. This paper demonstrates the methods developed at Mentor Graphics for double patterning design compliance and decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. It also demonstrates verification solution implementation in the chip design flow and post-tapeout flow.
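
    The k1 figures quoted above follow directly from the Rayleigh relation, with double patterning doubling the effective printed half-pitch; the snippet below simply reproduces that arithmetic.

```python
# k1 = half_pitch * NA / wavelength; double patterning relaxes the pitch by 2x.
na, wavelength_nm, hp_nm = 1.35, 193.0, 32.0
print("single patterning k1:", hp_nm * na / wavelength_nm)        # ~0.22
print("double patterning k1:", 2 * hp_nm * na / wavelength_nm)    # ~0.45 (quoted as 0.44)
```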

  11. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  12. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  13. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision, resp. on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed up. In the case of a convex polygon in E2, a simple Point-in-Polygon test is of O(N) complexity and the optimal algorithm is of O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon and line clipping, can be solved in a similar way.
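
    For comparison with the O(1) approach proposed above, the standard O(N) convex-polygon membership test walks the edges and checks the sign of a cross product; the preprocessing-based subdivision itself is not reproduced here.

```python
# Baseline O(N) point-in-convex-polygon test via edge cross products.
import numpy as np

def inside_convex(point, poly):
    """poly: (N, 2) array of vertices in counter-clockwise order."""
    px, py = point
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # z-component of (b - a) x (p - a); negative means p lies right of edge a->b
        if (bx - ax) * (py - ay) - (by - ay) * (px - ax) < 0:
            return False
    return True

square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
print(inside_convex((1.0, 1.0), square), inside_convex((3.0, 1.0), square))
```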

  14. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    SciTech Connect

    Skala, Vaclav

    2016-06-08

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision, resp. on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed up. In the case of a convex polygon in E2, a simple Point-in-Polygon test is of O(N) complexity and the optimal algorithm is of O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon and line clipping, can be solved in a similar way.

  15. Dominant feature selection for the fault diagnosis of rotary machines using modified genetic algorithm and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; de Silva, Clarence W.

    2015-05-01

    This paper develops a novel dominant feature selection method using a genetic algorithm with a dynamic searching strategy. It is applied in the search for the most representative features in rotary mechanical fault diagnosis, and is shown to improve classification performance with fewer features. First, empirical mode decomposition (EMD) is employed to decompose a vibration signal into intrinsic mode functions (IMFs), which represent the signal characteristics as simple oscillatory modes. Then, a modified genetic algorithm with variable-range encoding and a dynamic searching strategy is used to establish relationships between optimized feature subsets and classification performance. Next, a statistical model based on the receiver operating characteristic (ROC) is developed to select dominant features. Finally, a support vector machine (SVM) is used to classify different fault patterns. Two real-world problems, rotor-unbalance vibration and bearing corrosion, are employed to evaluate the proposed feature selection scheme and fault diagnosis system. Statistical results obtained by analyzing the two problems, together with comparative studies against five well-known feature selection techniques, demonstrate that the method developed in this paper achieves improved identification accuracy with lower feature dimensionality. In addition, the results indicate that the proposed method is a promising tool for selecting dominant features in rotary machinery fault diagnosis.

  16. A multiple wavelength algorithm in color image analysis and its applications in stain decomposition in microscopy images.

    PubMed

    Zhou, R; Hammond, E H; Parker, D L

    1996-12-01

    Stains have been used in optical microscopy to visualize the distribution and intensity of substances to which they are attached. Quantitative measures of optical density in the microscopic images can in principle be used to determine the amount of the stain. When multiple dyes are used to simultaneously visualize several substances to which they are specifically attached, quantification of each stain cannot be made using any single wavelength because attenuation from the several stain components contributes to the total optical density. Although various dyes used as optical stains are perceived as specific colors, they, in fact, have complex attenuation spectra. In this paper, we present a technique for multiple wavelength image acquisition and spectral decomposition based upon the Lambert-Beer absorption law. This algorithm is implemented based on the different spectral properties of the various stain components. By using images captured at N wavelengths, N components with different colors can be separated. This algorithm is applied to microscopy images of doubly and triply labeled prostate tissue sections. Possible applications are discussed.
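
    As a sketch of the core computation, under the assumption of N wavelengths and N stains with known reference attenuation spectra, the optical densities at each pixel can be unmixed by a single linear solve; the array names and the clipping constant below are illustrative.

      import numpy as np

      def unmix_stains(images, blank, attenuation):
          """images: (N, H, W) stack captured at N wavelengths; blank: matching blank-field stack;
          attenuation: (N, N) matrix of reference attenuation spectra, one column per stain."""
          od = -np.log(np.clip(images / blank, 1e-6, None))       # Lambert-Beer: optical density per wavelength
          N, H, W = od.shape
          conc = np.linalg.solve(attenuation, od.reshape(N, -1))  # per-pixel stain amounts
          return conc.reshape(N, H, W)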

  17. Infrared point target detection based on exponentially weighted RLS algorithm and dual solution improvement

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Fan, Xiang; Ma, Dong-hui; Cheng, Zheng-dong

    2009-07-01

    The desire to maximize target detection range focuses attention on algorithms for detecting and tracking point targets. However, point target detection and tracking is a challenging task for two reasons: first, targets occupy only a few pixels or less amid complex noise and background clutter; second, real-time applications impose strict limits on computational load. Temporal signal processing algorithms offer superior clutter rejection compared with standard spatial processing approaches. In this paper, the traditional single-frame algorithm based on background prediction is extended to a consecutive multi-frame exponentially weighted recursive least squares (EWRLS) algorithm. Further, the dual solution of EWRLS (DEWLS) is derived to reduce the computational burden. The DEWLS algorithm uses only the inner products of point pairs in the training set, and the prediction is obtained directly without computing any intermediate variables. Experimental results show that the RLS filter can greatly increase the signal-to-noise ratio (SNR) of the images; it achieves the best detection performance among the algorithms considered, and moving targets can be detected within 2 or 3 frames with a low false alarm rate. Moreover, with the dual solution improvement, the computational efficiency is enhanced by over 41% relative to the EWRLS algorithm.
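
    A per-pixel sketch of the exponentially weighted RLS prediction idea (primal form only; the dual-solution speed-up described above is not shown), with illustrative parameter values:

      import numpy as np

      def ewrls_residuals(series, order=3, lam=0.95, delta=100.0):
          """series: 1-D pixel intensity over consecutive frames.
          Returns prediction residuals; a point target shows up as a large residual."""
          w = np.zeros(order)                 # regression weights on the last `order` frames
          P = np.eye(order) * delta           # inverse correlation matrix estimate
          resid = np.zeros(len(series))
          for t in range(order, len(series)):
              x = series[t - order:t]
              e = series[t] - w @ x           # prediction error (background removed)
              Px = P @ x
              k = Px / (lam + x @ Px)         # exponentially weighted RLS gain
              w = w + k * e
              P = (P - np.outer(k, Px)) / lam
              resid[t] = e
          return resid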

  18. The removal of wall components in Doppler ultrasound signals by using the empirical mode decomposition algorithm.

    PubMed

    Zhang, Yufeng; Gao, Yali; Wang, Le; Chen, Jianhua; Shi, Xinling

    2007-09-01

    Doppler ultrasound systems, used for the noninvasive detection of vascular diseases, normally employ a high-pass filter (HPF) to remove the large, low-frequency components originating from the vessel wall from the blood flow signal. Unfortunately, the filter also removes the low-frequency Doppler signals arising from slow-moving blood. In this paper, we propose to use a novel technique, the empirical mode decomposition (EMD), to remove the wall components from the mixed signals. The EMD first decomposes a signal into a finite and usually small number of individual components named intrinsic mode functions (IMFs). Then a strategy based on the ratios between two adjacent values of the wall-to-blood signal ratio (WBSR) is applied to automatically identify and remove the IMFs that contribute to the wall components. The method is applied to simulated and clinical Doppler ultrasound signals. Compared with results based on the traditional high-pass filter, the new approach removes wall components from the mixed signals more effectively and objectively, and preserves the low-velocity blood flow signal more accurately.

  19. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

    This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement, which is often referred to as noise. Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to noise perturbations despite the ill-posedness. The illustrated results show that the TGSVD has advantages such as higher precision, better adaptability and noise immunity compared with the TDM. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving the identification accuracy and handling the ill-posedness when the method is used to identify moving forces on a bridge.
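
    A hedged sketch of the truncation idea for the special case L = I, where the truncated GSVD reduces to an ordinary truncated SVD of A; the full generalized decomposition with an arbitrary regularization matrix L is not shown.

      import numpy as np

      def tsvd_solve(A, b, k):
          """Solve A x = b keeping only the k largest singular components,
          which filters the noise-dominated directions of the ill-posed problem."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          coeff = (U.T @ b)[:k] / s[:k]
          return Vt[:k].T @ coeff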

  20. A classification algorithm based on Cloude decomposition model for fully polarimetric SAR image

    NASA Astrophysics Data System (ADS)

    Xiang, Hongmao; Liu, Shanwei; Zhuang, Ziqi; Zhang, Naixin

    2016-11-01

    Remote sensing is an important technology for monitoring coastal zones, but it is difficult to obtain useful optical data in cloudy or rainy weather. SAR is an important data source for coastal zone monitoring because it can acquire data in all weather conditions. Fully polarimetric SAR data carry more information than single-polarization and multi-polarization SAR data. The experiment used a fully polarimetric Radarsat-2 SAR image covering the Yellow River Estuary. In view of the features of the study area, we carried out H/α unsupervised classification and H/α-Wishart unsupervised classification based on the results of the Cloude decomposition. A new classification method is proposed that applies Wishart supervised classification to the result of the H/α-Wishart unsupervised classification. The experimental results showed that the new method effectively overcomes the shortcomings of unsupervised classification and significantly improves the classification accuracy. They also showed that the SAR classification result has a precision similar to that of a Landsat-7 image classified with the same method; the SAR image gives better precision for water classes owing to its sensitivity to water, while the Landsat-7 image gives better precision for vegetation types.

  1. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance when applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes K-means clustering to group the landmarks according to the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform, and the results demonstrate a significant speed-up over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design; therefore the parallel algorithm can be extended to other computing platforms, as well as to other point matching related applications. PMID:24308014
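
    A hedged sketch of the non-regular partition idea on a generic multicore CPU (not the Cell/B.E. implementation): landmarks are grouped with K-means into one cluster per core and each worker matches its own cluster; match_cluster is an illustrative nearest-neighbour stand-in for the actual matching kernel.

      import numpy as np
      from multiprocessing import Pool
      from sklearn.cluster import KMeans

      def match_cluster(args):
          src, dst = args
          d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)  # pairwise distances
          return d.argmin(axis=1)                                        # index of closest target point

      def parallel_match(src_points, dst_points, n_cores=4):
          labels = KMeans(n_clusters=n_cores, n_init=10).fit_predict(src_points)
          jobs = [(src_points[labels == c], dst_points) for c in range(n_cores)]
          with Pool(n_cores) as pool:
              return pool.map(match_cluster, jobs)       # one correspondence list per cluster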

  2. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance when applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes K-means clustering to group the landmarks according to the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform, and the results demonstrate a significant speed-up over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design; therefore the parallel algorithm can be extended to other computing platforms, as well as to other point matching related applications.

  3. Performance Evaluation of Different Ground Filtering Algorithms for Uav-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Serifoglu, C.; Gungor, O.; Yilmaz, V.

    2016-06-01

    Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, various ground filtering algorithms have been reported in the literature to filter LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered with five different ground filtering algorithms: Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance, while the MLS and ETEW algorithms were the least successful. It was concluded that point clouds generated from UAVs can be a good alternative to LiDAR data.

  4. A new algorithm for computing multivariate Gauss-like quadrature points.

    SciTech Connect

    Taylor, Mark A.; Bos, Len P.; Wingate, Beth A.

    2004-06-01

    The diagonal-mass-matrix spectral element method has proven very successful in geophysical applications dominated by wave propagation. For these problems, the ability to run fully explicit time stepping schemes at relatively high order makes the method more competitive than finite element methods, which require the inversion of a mass matrix. The method relies on Gauss-Lobatto points to be successful, since the grid points used are required to produce well-conditioned polynomial interpolants and to be high quality 'Gauss-like' quadrature points that exactly integrate a space of polynomials of higher dimension than the number of quadrature points. These two requirements have traditionally limited the diagonal-mass-matrix spectral element method to square or quadrilateral elements, where tensor products of Gauss-Lobatto points can be used. In non-tensor-product domains such as the triangle, both optimal interpolation points and Gauss-like quadrature points are difficult to construct and there are few analytic results. To extend the diagonal-mass-matrix spectral element method to (for example) triangular elements, one must find appropriate points numerically. One successful approach has been to perform numerical searches for high quality interpolation points, as measured by the Lebesgue constant (such as minimum energy electrostatic points and Fekete points). However, these points typically do not have any Gauss-like quadrature properties. In this work, we describe a new numerical method to look for Gauss-like quadrature points in the triangle, based on a previous algorithm for computing Fekete points. Performing a brute force search for such points is extremely difficult. A common strategy to increase the numerical efficiency of these searches is to reduce the number of unknowns by imposing symmetry conditions on the quadrature points. Motivated by spectral element methods, we propose a different way to reduce the number of unknowns: We look for quadrature formula

  5. Parallel Decomposition of the Fictitious Lagrangian Algorithm and its Accuracy for Molecular Dynamics Simulations of Semiconductors.

    NASA Astrophysics Data System (ADS)

    Yeh, Mei-Ling

    We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression onto a hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code should perform realistic simulations of very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the conservation of the total energy of the ions. We find that the accuracy of the fictitious Lagrangian scheme in small silicon cluster and very large silicon system simulations remains good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only at the beginning of the run. The kinetic energy of the electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on average in the true quantum ground states. We also tie up some odds and ends regarding a few remaining questions about the fictitious Lagrangian method, such as the difference between the results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once compared with periodic quenching.

  6. An optimized structure on FPGA of key point description in SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Chenyu; Peng, Jinlong; Zhu, En; Zou, Yuxin

    2015-12-01

    The SIFT algorithm is one of the most significant and effective algorithms for describing image features in the field of image matching. Implementing the SIFT algorithm in a hardware environment is clearly valuable but difficult. In this paper, we mainly discuss the realization of the key point description stage of the SIFT algorithm, along with the matching process. In key point description, we propose a new method of generating histograms that avoids the rotation of adjacent regions while ensuring rotational invariance. In matching, we replace the conventional Euclidean distance with the Hamming distance. The experimental results show that the proposed structure is real-time, accurate, and efficient. Future work is still needed to improve its performance under harsher conditions.
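
    A small sketch of the matching step with the Hamming distance, assuming the descriptors have been binarised and packed into uint8 arrays (an assumption; the paper's FPGA datapath is not reproduced):

      import numpy as np

      def hamming_match(desc_a, desc_b):
          """desc_a: (Na, B) and desc_b: (Nb, B) packed binary descriptors (uint8)."""
          x = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])   # differing bits
          dist = np.unpackbits(x, axis=2).sum(axis=2)                  # popcount = Hamming distance
          return dist.argmin(axis=1), dist.min(axis=1)                 # best match and its distance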

  7. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.

    PubMed

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and to execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository.

  8. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and to execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  9. An Improved Progressive Triangulation Algorithm for Vehicle-Borne Laser Point Cloud

    NASA Astrophysics Data System (ADS)

    Wei, Z.; Ma, H.; Chen, X.; Liu, L.

    2017-09-01

    The classical progressive triangulation filter algorithm has been applied very successfully to airborne point clouds; however, vehicle-borne laser point clouds differ greatly from airborne point clouds in spatial distribution, density and other aspects. In this paper, extensive experiments are carried out to improve the filter algorithm for vehicle-borne laser point clouds, with the following improvements: (1) establish a grid index (e.g. with 0.1 m cells) and retain only the lowest point in each cell, which greatly reduces the number of candidate ground points and significantly improves filtering efficiency (a sketch of this step is given below); (2) using the vehicle height and the trajectory, roughly determine the road surface points, and then use a convolution operation to confirm the true road points, which are also ground points; this avoids relaxing the filter parameters (which would admit more non-ground points) and preserves the integrity of the road boundary; (3) a method called "get more and remove some" is proposed to resolve the filtering faults at the tail of each point segment caused by the inclined scanning face. After these three steps, both the filtering quality and the processing speed are clearly improved.
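
    As noted above, step (1) can be sketched as follows; the cell size and names are illustrative:

      import numpy as np

      def lowest_per_cell(points, cell=0.1):
          """points: (N, 3) x, y, z; keep only the lowest point of every occupied grid cell."""
          keys = np.floor(points[:, :2] / cell).astype(np.int64)
          _, inv = np.unique(keys, axis=0, return_inverse=True)   # cell id for every point
          best = {}
          for idx in np.argsort(points[:, 2]):                    # visit points from lowest to highest
              best.setdefault(inv[idx], idx)                      # first hit per cell is its lowest point
          return points[np.fromiter(best.values(), dtype=np.int64)]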

  10. Construction of point process adaptive filter algorithms for neural systems using sequential Monte Carlo methods.

    PubMed

    Ergün, Ayla; Barbieri, Riccardo; Eden, Uri T; Wilson, Matthew A; Brown, Emery N

    2007-03-01

    The stochastic state point process filter (SSPPF) and steepest descent point process filter (SDPPF) are adaptive filter algorithms for state estimation from point process observations that have been used to track neural receptive field plasticity and to decode the representations of biological signals in ensemble neural spiking activity. The SSPPF and SDPPF are constructed using, respectively, Gaussian and steepest descent approximations to the standard Bayes and Chapman-Kolmogorov (BCK) system of filter equations. To extend these approaches for constructing point process adaptive filters, we develop sequential Monte Carlo (SMC) approximations to the BCK equations in which the SSPPF and SDPPF serve as the proposal densities. We term the two new SMC point process filters SMC-PPFS and SMC-PPFD, respectively. We illustrate the new filter algorithms by decoding the wind stimulus magnitude from simulated neural spiking activity in the cricket cercal system. The SMC-PPFS and SMC-PPFD provide more accurate state estimates with a low number of particles than a conventional bootstrap SMC filter algorithm in which the state transition probability density is the proposal density. We also use the SMC-PPFS algorithm to track the temporal evolution of a spatial receptive field of a rat hippocampal neuron recorded while the animal foraged in an open environment. Our results suggest an approach for constructing point process adaptive filters using SMC methods.

  11. Change Detection from differential airborne LiDAR using a weighted Anisotropic Iterative Closest Point Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Kusari, A.; Glennie, C. L.; Oskin, M. E.; Hinojosa-Corona, A.; Borsa, A. A.; Arrowsmith, R.

    2013-12-01

    Differential LiDAR (Light Detection and Ranging) from repeated surveys has recently emerged as an effective tool to measure three-dimensional (3D) change for applications such as quantifying slip and spatially distributed warping associated with earthquake ruptures, and examining the spatial distribution of beach erosion after hurricane impact. Currently, the primary method for determining 3D change is the iterative closest point (ICP) algorithm and its variants. However, all current studies using ICP have assumed that all LiDAR points in the compared point clouds have uniform accuracy. This assumption is simplistic given that the error for each LiDAR point is variable and dependent upon highly variable factors such as target range, angle of incidence, and aircraft trajectory accuracy. Therefore, to rigorously determine spatial change, it would be ideal to model the random error for every LiDAR observation in the differential point cloud and use these error estimates as a priori weights in the ICP algorithm. To test this approach, we implemented a rigorous LiDAR observation error propagation method to generate an estimated random error for each point in a LiDAR point cloud, and then determined 3D displacements between two point clouds using an anisotropically weighted ICP algorithm. The algorithm was evaluated by qualitatively and quantitatively comparing post-earthquake slip estimates from the 2010 El Mayor-Cucapah Earthquake between a uniformly weighted and an anisotropically weighted ICP algorithm, using pre-event LiDAR collected in 2006 by Instituto Nacional de Estadística y Geografía (INEGI) and post-event LiDAR collected by The National Center for Airborne Laser Mapping (NCALM).

  12. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, and from it a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to separate terrain points from other points, followed by a procedure that interpolates the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and large computing resources, which is the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as a comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.

  13. An empirical evaluation of infrared clutter for point target detection algorithms

    NASA Astrophysics Data System (ADS)

    McKenzie, Mark; Wong, Sebastien; Gibbins, Danny

    2013-05-01

    This paper describes a study into the impact of local environmental conditions on the detection of point targets using wide field of view infrared sensors on airborne platforms. A survey of the common complexity metrics for measuring IR clutter, and common point target detection algorithms was conducted. A quantitative evaluation was performed using 20 hours of infrared imagery collected over a three month period from helicopter flights in a variety of clutter environments. The research method, samples of the IR data sets, and results of the correlation between environmental conditions, scene complexity metrics and point target detection algorithms are presented. The key findings of this work are that variations in IR detection performance can be attributed to a combination of environmental factors (but no single factor is sufficient to describe performance variations), and that historical clutter metrics are insufficient to describe the performance of modern detection algorithms.

  14. Dynamics of G-band bright points derived using two fully automated algorithms

    NASA Astrophysics Data System (ADS)

    Bodnárová, M.; Utz, D.; Rybák, J.; Hanslmeier, A.

    Small-scale magnetic field concentrations (˜ 1 kG) in the solar photosphere can be identified in the G-band of the solar spectrum as bright points. Studying the dynamics of G-band bright points (GBPs) can help answer several questions related to the coronal heating problem. Here, a set of 142 G-band speckled images obtained with the Dutch Open Telescope (DOT) on October 19, 2005 is used to compare the identification of GBPs by two different fully automated identification algorithms: an algorithm developed by Utz et al. (2009a, 2009b) and an algorithm developed following the papers of Berger et al. (1995, 1998). Temporal and spatial tracking of the GBPs identified by both algorithms was performed, resulting in distributions of lifetimes, sizes and velocities of the GBPs. The results show that both algorithms give very similar values for the lifetime and velocity estimates of the GBPs, but they differ significantly in the estimation of GBP sizes. This difference is caused by the fact that we applied no additional exclusion criteria to the GBPs identified by the algorithm based on the work of Berger et al. (1995, 1998). Therefore we conclude that in future studies of GBP dynamics we will prefer to use Utz's algorithm to identify and track GBPs in G-band images.

  15. Peak load demand forecasting using two-level discrete wavelet decomposition and neural network algorithm

    NASA Astrophysics Data System (ADS)

    Bunnoon, Pituk; Chalermyanont, Kusumal; Limsakul, Chusak

    2010-02-01

    This paper proposes discrete wavelet transform and neural network algorithms to forecast monthly peak load demand in mid-term load forecasting. The Daubechies-2 (db2) mother wavelet is employed to decompose the original signal into high-pass and low-pass filtered components before a feed-forward back-propagation neural network is used to produce the forecast. Historical records for 1997-2007 from the Electricity Generating Authority of Thailand (EGAT) are used as the reference data. In this study, historical information on peak load demand (MW), mean temperature (Tmean), consumer price index (CPI), and industrial index (IDI) is used as the feature inputs of the network. The experimental results show that the Mean Absolute Percentage Error (MAPE) is approximately 4.32%. These forecasting results can be used for fuel planning and unit commitment of the power system in the future.
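
    A minimal sketch of the two-level db2 decomposition step using the PyWavelets package (the neural network stage is omitted); the function and variable names are illustrative:

      import pywt

      def decompose_load_series(monthly_peak_load):
          """Two-level discrete wavelet transform with the db2 mother wavelet.
          Returns the level-2 approximation and the two detail (high-pass) parts."""
          cA2, cD2, cD1 = pywt.wavedec(monthly_peak_load, 'db2', level=2)
          return cA2, cD2, cD1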

  16. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified airborne laser scanning (ALS) point cloud. For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider only the ALS points representing the last echo. For these points, RGB, intensity, the normal vectors, their mean values and the standard deviations are provided. Moreover, local and global height variations are taken into account as components of the feature vector. The feature vectors are calculated on the basis of the 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we obtained two subsets of ALS points, one of which represents points belonging to the road network. The classification evaluation showed an overall classification accuracy of around 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
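
    A hedged sketch of the classification stage with scikit-learn, assuming the per-point feature vectors described above have already been assembled into X (rows = last-echo points) with road/non-road labels y; the hyper-parameters are illustrative:

      from sklearn.ensemble import RandomForestClassifier

      def classify_road_points(X_train, y_train, X_all):
          clf = RandomForestClassifier(n_estimators=100, min_samples_leaf=5, n_jobs=-1)
          clf.fit(X_train, y_train)            # y: 1 = road surface, 0 = other
          return clf.predict(X_all) == 1       # boolean mask of road points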

  17. Experimental infrared point-source detection using an iterative generalized likelihood ratio test algorithm.

    PubMed

    Nichols, J M; Waterman, J R

    2017-03-01

    This work documents the performance of a recently proposed generalized likelihood ratio test (GLRT) algorithm in detecting thermal point-source targets against a sky background. A calibrated source is placed above the horizon at various ranges and then imaged using a mid-wave infrared camera. The proposed algorithm combines a so-called "shrinkage" estimator of the background covariance matrix and an iterative maximum likelihood estimator of the point-source parameters to produce the GLRT statistic. It is clearly shown that the proposed approach results in better detection performance than either standard energy detection or previous implementations of the GLRT detector.
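
    A hedged sketch of the two ingredients named in the abstract, a shrinkage covariance estimate plus a GLRT statistic, for the textbook case of a known point-source signature with unknown amplitude; the paper's iterative maximum likelihood estimation of the source parameters is not reproduced:

      import numpy as np
      from sklearn.covariance import LedoitWolf

      def glrt_statistic(background_patches, test_patch, signature):
          """background_patches: (M, d) vectorised clutter samples;
          test_patch, signature: (d,) vectorised image patch and expected target shape."""
          cov = LedoitWolf().fit(background_patches).covariance_   # shrinkage covariance estimate
          w = np.linalg.solve(cov, signature)                      # whitened matched filter
          return (w @ test_patch) ** 2 / (w @ signature)           # larger value => declare a target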

  18. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the policy that equalizes the algorithm complexity is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes; furthermore, we obtain excellent efficiency using MAS and the policy of algorithm complexity equalization.

  19. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration

    NASA Astrophysics Data System (ADS)

    Chen, Peijun; Huang, Jianguo; Zhang, Xiaoqun

    2013-02-01

    Recently, the minimization of a sum of two convex functions has received considerable interest in a variational image restoration model. In this paper, we propose a general algorithmic framework for solving a separable convex minimization problem from the point of view of fixed point algorithms based on proximity operators (Moreau 1962 C. R. Acad. Sci., Paris I 255 2897-99). Motivated by proximal forward-backward splitting proposed in Combettes and Wajs (2005 Multiscale Model. Simul. 4 1168-200) and fixed point algorithms based on the proximity operator (FP2O) for image denoising (Micchelli et al 2011 Inverse Problems 27 45009-38), we design a primal-dual fixed point algorithm based on the proximity operator (PDFP2Oκ for κ ∈ [0, 1)) and obtain a scheme with a closed-form solution for each iteration. Using the firmly nonexpansive properties of the proximity operator and with the help of a special norm over a product space, we achieve the convergence of the proposed PDFP2Oκ algorithm. Moreover, under some stronger assumptions, we can prove the global linear convergence of the proposed algorithm. We also give the connection of the proposed algorithm with other existing first-order methods. Finally, we illustrate the efficiency of PDFP2Oκ through some numerical examples on image super-resolution, computerized tomographic reconstruction and parallel magnetic resonance imaging. Generally speaking, our method PDFP2O (κ = 0) is comparable with other state-of-the-art methods in numerical performance, while it has some advantages on parameter selection in real applications.

  20. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    NASA Astrophysics Data System (ADS)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make fixed-point algorithms inadequate. Because the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and the use of an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in these types of devices, it is necessary to find a viable alternative. For these reasons, a VHDL floating-point module is presented in this paper. This proposal allows floating-point algorithms to be designed and executed with acceptable resource occupancy on FPGAs/ASICs qualified for space environments.

  1. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.

  2. Photoacoustic tomography from weak and noisy signals by using a pulse decomposition algorithm in the time-domain.

    PubMed

    Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun

    2015-10-19

    Photoacoustic tomography is a promising and rapidly developing biomedical imaging method. Reconstructing images from weak and noisy photoacoustic signals is an increasingly urgent problem, because doing so can extend the imaging depth and decrease the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with a low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain, and images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment are conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with some well-preserved pattern details. The proposed method demonstrates the imaging potential of photoacoustic tomography in expanding applications.
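
    A hedged sketch of the decomposition step: the recorded signal is modelled as a weighted sum of time-shifted copies of a reference pulse and the weights are recovered by least squares (the paper's exact pulse model and reconstruction pipeline are not reproduced):

      import numpy as np

      def pulse_weights(signal, pulse, step=1):
          """signal: 1-D photoacoustic trace; pulse: reference pulse shape."""
          n, m = len(signal), len(pulse)
          shifts = range(0, n - m + 1, step)
          D = np.zeros((n, len(shifts)))          # dictionary of time-shifted pulses
          for j, s in enumerate(shifts):
              D[s:s + m, j] = pulse
          w, *_ = np.linalg.lstsq(D, signal, rcond=None)
          return w                                # weights map to optical absorption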

  3. Stitching algorithm of the images acquired from different points of fixation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Pismenskova, M. M.

    2015-02-01

    Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible from the original images. Most existing algorithms are computationally complex and do not always produce good stitching results when the source images differ in scale, lighting, and viewpoint. In this paper we consider an algorithm which increases the processing speed when stitching high-resolution images. We reduce the computational complexity by using edge image analysis and a saliency map restricted to highly detailed areas. On the detected areas, the rotation angles, scaling factors, color-correction coefficients and the transformation matrix are determined. We detect key points using the SURF detector and discard false correspondences based on correlation analysis. The proposed algorithm can combine images taken from arbitrary points of view with different color balance, shutter time and scale. We perform a comparative study and show that, statistically, the new algorithm delivers good quality results compared to existing algorithms.

  4. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  5. A hybrid algorithm for multiple change-point detection in continuous measurements

    NASA Astrophysics Data System (ADS)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both the sequential techniques and the Cross-Entropy method to estimate the number of change points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.

  6. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, and it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
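
    A small sketch of the matching phase only, using SciPy's k-d tree, whose Minkowski parameter p switches between the Manhattan (p=1), Euclidean (p=2) and Chebyshev (p=inf) metrics; the full ICP loop is omitted:

      import numpy as np
      from scipy.spatial import cKDTree

      def match_points(source, target, p=1):
          """source: (N, 3), target: (M, 3); p = 1 (Manhattan), 2 (Euclidean) or np.inf (Chebyshev)."""
          tree = cKDTree(target)
          dist, idx = tree.query(source, p=p)     # nearest target point under the chosen metric
          return idx, dist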

  7. Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction

    NASA Technical Reports Server (NTRS)

    Velusamy, T.; Marsh, K. A.; Ware, B.

    2005-01-01

    TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.

  8. Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction

    NASA Technical Reports Server (NTRS)

    Velusamy, T.; Marsh, K. A.; Ware, B.

    2005-01-01

    TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.

  9. Redistricting in a GIS environment: An optimisation algorithm using switching-points

    NASA Astrophysics Data System (ADS)

    Macmillan, W.

    This paper gives details of an algorithm whose purpose is to partition a set of populated zones into contiguous regions in order to minimise the difference in population size between the regions. The algorithm, known as SARA, uses simulated annealing and a new method for checking the contiguity of regions. It is the latter which allows the algorithm to be used to tackle large problems with modest computing resources. The paper describes the new contiguity checking procedure, based on the concept of switching points, and compares it with the connectivity method developed by Openshaw and Rao [1]. It goes on to give a detailed description of the algorithm, then concludes with a brief discussion of possible extensions to accommodate additional zone-design criteria.

  10. Ancient architecture point cloud data segmentation based on modified fuzzy C-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghong; Li, Deren; Wang, Yanmin

    2008-12-01

    Segmentation of point cloud data is a key but difficult problem for architectural 3D reconstruction. Compared with reverse engineering data, ancient architecture point clouds contain more noise at edges because of mirror reflection, and the traditional hard (non-fuzzy) methods cannot represent the case of boundary points belonging to two regions, so it is difficult to satisfy the demands of segmenting ancient architecture point cloud data. Ancient architecture is mostly composed of columns, plinths, arches, girders and tiles assembled in a specific order. Each component's surface is regular and smooth, and the membership of boundary points is very fuzzy. Based on these characteristics, the authors propose a modified fuzzy C-means clustering (MFCM) algorithm that adds geometric information during clustering. In addition, the method improves the membership constraints to reduce the influence of noise on the segmentation result. The algorithm is used in the project "Digital surveying of ancient architecture --- Forbidden City". Experiments show that the method has good noise robustness, accuracy and adaptability, and that the degree of human intervention is greatly reduced. After segmentation, interior points and edge points can be distinguished according to the membership of every point, which facilitates the follow-up surface feature extraction and model identification, and provides effective support for the three-dimensional reconstruction of ancient buildings.

  11. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the Geosciences, with applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters have been added to the point cloud simulator, such as point spacing and acquisition window, in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and varying point of view on the Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments.

  12. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  13. Optimizing the Point-In-Box Search Algorithm for the Cray Y-MP(TM) Supercomputer

    SciTech Connect

    Attaway, S.W.; Davis, M.E.; Heinstein, M.W.; Swegle, J.S.

    1998-12-23

    Determining the subset of points (particles) in a problem domain that are contained within certain spatial regions of interest can be one of the most time-consuming parts of some computer simulations. Examples where this 'point-in-box' search can dominate the computation time include (1) finite element contact problems; (2) molecular dynamics simulations; and (3) interactions between particles in numerical methods, such as discrete particle methods or smooth particle hydrodynamics. This paper describes methods to optimize a point-in-box search algorithm developed by Swegle that make optimal use of the architectural features of the Cray Y-MP Supercomputer.
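
    For reference, the basic (unoptimized) point-in-box test being accelerated can be written as a vectorised mask; names are illustrative:

      import numpy as np

      def points_in_box(points, box_min, box_max):
          """points: (N, 3); box_min, box_max: (3,) lower and upper corners of the box."""
          inside = np.all((points >= box_min) & (points <= box_max), axis=1)
          return np.nonzero(inside)[0]            # indices of the contained points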

  14. An affine point-set and line invariant algorithm for photo-identification of gray whales

    NASA Astrophysics Data System (ADS)

    Chandan, Chandan; Kehtarnavaz, Nasser; Hillman, Gilbert; Wursig, Bernd

    2004-05-01

    This paper presents an affine point-set and line invariant algorithm within a statistical framework, and its application to photo-identification of gray whales (Eschrichtius robustus). White patches (blotches) appearing on a gray whale's left and right flukes (the flattened broad paddle-like tail) constitute unique identifying features and have been used here for individual identification. The fluke area is extracted from a fluke image via the live-wire edge detection algorithm, followed by optimal thresholding of the fluke area to obtain the blotches. Affine point-set and line invariants of the blotch points are extracted based on three reference points, namely the left and right tips and the middle notch-like point on the fluke. A set of statistics is derived from the invariant values and used as the feature vector representing a database image. The database images are then ranked depending on the degree of similarity between a query and database feature vectors. The results show that the use of this algorithm leads to a reduction in the amount of manual search that is normally done by marine biologists.

  15. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    PubMed Central

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-01-01

    In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177

  16. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information

    PubMed Central

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  17. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information.

    PubMed

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms.

  18. The MATPHOT Algorithm for Accurate and Precise Stellar Photometry and Astrometry Using Discrete Point Spread Functions

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2004-12-01

    I describe the key features of my MATPHOT algorithm for accurate and precise stellar photometry and astrometry using discrete Point Spread Functions. A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. The MATPHOT algorithm shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function and position partial derivatives are computed using a five-point numerical differentiation formula. The MATPHOT algorithm achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled 2, 3, or more times more finely than the observational data. I have written a C-language computer program called MPD which is based on the current implementation of the MATPHOT algorithm; all source code and documentation for MPD and support software are freely available at the following website: http://www.noao.edu/staff/mighell/matphot . I demonstrate the use of MPD and present a detailed MATPHOT analysis of simulated James Webb Space Telescope observations which demonstrates that millipixel relative astrometry and millimag photometric accuracy are achievable with very complicated space-based discrete PSFs. This work was supported by a grant from the National Aeronautics and Space Administration (NASA), Interagency Order No. S-13811-G, which was awarded by the Applied Information Systems Research (AISR) Program of NASA's Science Mission Directorate.
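
    The two numerical ingredients named in the abstract, sub-pixel PSF shifting with a damped sinc kernel and five-point numerical differentiation, can be illustrated in one dimension. The damping window below is Lanczos-style and the 21-tap width follows the abstract; the exact damping used in MATPHOT may differ, so treat this as a sketch rather than the MATPHOT implementation.

```python
import numpy as np

def damped_sinc_shift(f, dx, ntaps=21):
    """Shift a 1-D sampled profile f by a sub-pixel offset dx (in pixels)
    using a damped (here Lanczos-windowed) sinc interpolation kernel."""
    a = ntaps // 2                        # kernel half-width (10 for 21 taps)
    u = np.arange(-a, a + 1) - dx         # offsets of the shifted sample points
    kernel = np.sinc(u) * np.sinc(u / a)  # sinc damped by a wider sinc window
    kernel /= kernel.sum()                # keep total flux unchanged
    return np.convolve(f, kernel, mode="same")

def five_point_derivative(f, h=1.0):
    """Five-point central-difference estimate of df/dx on a uniform grid."""
    d = np.zeros_like(f, dtype=float)
    d[2:-2] = (f[:-4] - 8 * f[1:-3] + 8 * f[3:-1] - f[4:]) / (12.0 * h)
    return d
```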

  19. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.

  20. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100

  1. A rapid and robust iterative closest point algorithm for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Barbiere, Joseph; Hanley, Joseph

    2008-03-01

    Our work presents a rapid and robust process that can analytically evaluate and correct patient setup error for head and neck radiotherapy by comparing orthogonal megavoltage portal images (PI) with digitally reconstructed radiographs (DRR). For robust data, Photoshop is used to interactively segment images and register reference contours to the transformed PI. MATLAB is used for matrix computations and image analysis. The closest point distance for each PI point to a DRR point forms a set of homologous points. The translation that aligns the PI to the DRR is equal to the difference in centers of mass. The original PI points are transformed and the process repeated with an Iterative Closest Point algorithm until the transformation change becomes negligible. Using a 3.00 GHz processor the calculation of the 2500x1750 CPD matrix takes about 150 sec per iteration. Standard down sampling to about 1000 DRR and 250 PI points significantly reduces that time. We introduce a local neighborhood matrix consisting of a small subset of the DRR points in the vicinity of each PI point to further reduce the CPD matrix size. Our results demonstrate the effects of down sampling on accuracy. For validation, detailed analytical results are displayed as a histogram.
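
    The translation-only ICP loop described above (closest-point matching followed by a centre-of-mass shift, iterated until the update is negligible) can be sketched as follows; the k-d tree stands in for the full closest-point-distance matrix, and all names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_translation(pi_pts, drr_pts, max_iter=50, tol=1e-6):
    """Translation-only iterative closest point alignment of portal-image (PI)
    points to DRR points, following the centre-of-mass update described above.

    pi_pts, drr_pts : (N, 2) and (M, 2) arrays of 2-D points
    Returns the accumulated translation and the transformed PI points."""
    pi = np.asarray(pi_pts, dtype=float).copy()
    tree = cKDTree(np.asarray(drr_pts, dtype=float))
    total_shift = np.zeros(pi.shape[1])

    for _ in range(max_iter):
        _, nearest = tree.query(pi)                     # closest DRR point for each PI point
        matched = tree.data[nearest]
        shift = matched.mean(axis=0) - pi.mean(axis=0)  # difference of centres of mass
        pi += shift
        total_shift += shift
        if np.linalg.norm(shift) < tol:                 # transformation change negligible
            break
    return total_shift, pi
```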

  2. Fixed-point analysis and realization of a blind beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Fu, Dengwei; Willson, Alan N.

    1999-11-01

    We present the fixed-point analysis and realization of a blind beamforming algorithm. This maximum-power beamforming algorithm consists of the computation of a correlation matrix and its dominant eigenvector, and we propose that the latter be accomplished by the power method. After analyzing the numerical stability of the power method, we derive a division-free form of the algorithm. Based on a block-Toeplitz assumption, we design an FIR filter based system to realize both the correlation computation and the power method. Our ring processor, which is optimized to implement digital filters, is used as the core of the architecture. A special technique for dynamically switching filter inputs is shown to double the system throughput. Finally we discuss the issue of hardware/software hybrid realization.
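
    A plain floating-point power iteration for the dominant eigenvector of the correlation matrix, before the division-free and fixed-point refinements the paper develops, might look like the following sketch; the names and the random example are illustrative only.

```python
import numpy as np

def dominant_eigenvector(R, n_iter=100, tol=1e-10):
    """Power-method estimate of the dominant eigenvector of a correlation
    matrix R, as used to derive maximum-power beamforming weights."""
    n = R.shape[0]
    w = np.ones(n, dtype=R.dtype) / np.sqrt(n)  # arbitrary non-zero start vector
    for _ in range(n_iter):
        w_new = R @ w
        w_new /= np.linalg.norm(w_new)          # the normalisation the division-free form avoids
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w

# Example: sample correlation matrix of simulated array snapshots X (sensors x snapshots)
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
R = X @ X.conj().T / X.shape[1]
print(dominant_eigenvector(R))
```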

  3. An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1981-01-01

    An algorithm for minimum-cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that consume no coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.

  4. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.

  5. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
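
    For the case the abstract calls the general method, with known pre- and post-damage feature distributions, the sequential test reduces to a CUSUM of log-likelihood ratios. The one-dimensional Gaussian sketch below illustrates that baseline only, not the paper's online estimation of the post-damage distribution; the threshold and all names are assumptions of this illustration.

```python
import numpy as np
from scipy.stats import norm

def sequential_detect(x, pre_mean, pre_std, post_mean, post_std, threshold=5.0):
    """Sequential change detection (CUSUM of log-likelihood ratios) assuming
    known Gaussian pre- and post-damage feature distributions.

    Returns the index at which the cumulative statistic first exceeds the
    threshold, or None if no change is declared."""
    s = 0.0
    for t, xt in enumerate(np.asarray(x, dtype=float)):
        llr = norm.logpdf(xt, post_mean, post_std) - norm.logpdf(xt, pre_mean, pre_std)
        s = max(0.0, s + llr)          # Page's CUSUM recursion (reset at zero)
        if s > threshold:
            return t
    return None
```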

  6. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Sithole, George; Vosselman, George

    Over the past years, several filters have been developed to extract bare-Earth points from point clouds. ISPRS Working Group III/3 conducted a test to determine the performance of these filters and the influence of point density thereon, and to identify directions for future research. Twelve selected datasets have been processed by eight participants. In this paper, the test results are presented. The paper describes the characteristics of the provided datasets and the used filter approaches. The filter performance is analysed both qualitatively and quantitatively. All filters perform well in smooth rural landscapes, but all produce errors in complex urban areas and rough terrain with vegetation. In general, filters that estimate local surfaces are found to perform best. The influence of point density could not well be determined in this experiment. Future research should be directed towards the usage of additional data sources, segment-based classification, and self-diagnosis of filter algorithms.

  7. An Efficient Implementation of the Sign LMS Algorithm Using Block Floating Point Format

    NASA Astrophysics Data System (ADS)

    Chakraborty, Mrityunjoy; Shaik, Rafiahamed; Lee, Moon Ho

    2007-12-01

    An efficient scheme is presented for implementing the sign LMS algorithm in block floating point format, which permits processing of data over a wide dynamic range at a processor complexity and cost as low as that of a fixed point processor. The proposed scheme adopts appropriate formats for representing the filter coefficients and the data. It also employs a scaled representation for the step-size that has a time-varying mantissa and also a time-varying exponent. Using these and an upper bound on the step-size mantissa, update relations for the filter weight mantissas and exponent are developed, taking care so that neither overflow occurs, nor are quantities which are already very small multiplied directly. Separate update relations are also worked out for the step size mantissa. The proposed scheme employs mostly fixed-point-based operations, and thus achieves considerable speedup over its floating-point-based counterpart.
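
    For reference, the underlying sign-error LMS recursion that the block-floating-point scheme realises is shown below in plain floating point; the paper's contribution is the number format and the mantissa/exponent update bookkeeping, not this recursion itself, and all names here are illustrative.

```python
import numpy as np

def sign_lms(x, d, order=8, mu=1e-3):
    """Sign-error LMS adaptive filter (floating-point reference version).

    x : input signal, d : desired signal, order : number of filter taps."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(order)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-order+1]
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w += mu * np.sign(e[n]) * u        # sign-error update
    return w, y, e
```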

  8. The expectation maximization algorithm applied to the search of point sources of astroparticles

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan Antonio; Hernández-Rey, Juan José

    2008-03-01

    The expectation-maximization algorithm, widely employed in cluster and pattern recognition analysis, is proposed in this article for the search of point sources of astroparticles. We show how to adapt the method for the particular case in which a faint source signal over a large background is expected. In particular, the method is applied to the point source search in neutrino telescopes. A generic neutrino telescope of an area of 1 km2 located in the Mediterranean Sea has been simulated. Results in terms of the minimum detectable number of events are given, and the method compares favourably with a classical binning-based method.
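
    A minimal EM fit of a faint two-dimensional Gaussian source on top of a uniform background of events, in the spirit of the mixture model described above, could be sketched as follows; the initialisation, the isotropic-Gaussian parameterisation and all names are assumptions of this illustration.

```python
import numpy as np

def em_point_source(events, area, sigma0=1.0, n_iter=200):
    """EM fit of a faint point source (2-D Gaussian) over a uniform background.

    events : (N, 2) reconstructed event directions (e.g. local sky coordinates)
    area   : area of the search window (background density = 1/area)."""
    x = np.asarray(events, dtype=float)
    n = len(x)
    mu = x.mean(axis=0)
    sigma2 = sigma0 ** 2
    alpha = 0.1                                   # initial signal fraction
    bkg = 1.0 / area
    for _ in range(n_iter):
        # E-step: responsibility of the source component for each event
        d2 = np.sum((x - mu) ** 2, axis=1)
        src = alpha * np.exp(-0.5 * d2 / sigma2) / (2.0 * np.pi * sigma2)
        r = src / (src + (1.0 - alpha) * bkg)
        # M-step: weighted updates of source position, width and signal fraction
        w = r.sum()
        mu = (r[:, None] * x).sum(axis=0) / w
        sigma2 = (r * np.sum((x - mu) ** 2, axis=1)).sum() / (2.0 * w)
        alpha = w / n
    return mu, np.sqrt(sigma2), alpha
```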

  9. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.

  10. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  11. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.

  12. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  13. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  14. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  15. Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds

    NASA Astrophysics Data System (ADS)

    Thiele, S.; Grose, L.; Micklethwaite, S.

    2016-12-01

    UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
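
    The core idea, a least-cost path over a local neighbourhood graph between two user-picked end points, can be sketched with a k-nearest-neighbour graph and Dijkstra's algorithm; the per-point cost term standing in for "fracture-ness", the edge weighting, and all names are assumptions of this illustration rather than the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def trace_between(points, cost, start, end, k=12):
    """Follow a trace between two picked points by solving a least-cost path
    on a k-nearest-neighbour graph over the point cloud.

    points : (N, 3) cloud, cost : (N,) per-point cost (low along the feature),
    start, end : indices of the user-defined end points."""
    points = np.asarray(points, dtype=float)
    cost = np.asarray(cost, dtype=float)
    tree = cKDTree(points)
    dist, nbr = tree.query(points, k=k + 1)        # first neighbour is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = nbr[:, 1:].ravel()
    weights = dist[:, 1:].ravel() * cost[cols]     # distance scaled by destination cost
    graph = csr_matrix((weights, (rows, cols)), shape=(len(points),) * 2)

    _, pred = dijkstra(graph, directed=False, indices=start, return_predecessors=True)
    path, node = [], end
    while node != start and node >= 0:             # walk back through predecessors
        path.append(node)
        node = pred[node]
    path.append(start)
    return path[::-1]
```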

  16. An Error Analysis of the Phased Array Antenna Pointing Algorithm for STARS Flight Demonstration No. 2

    NASA Technical Reports Server (NTRS)

    Carney, Michael P.; Simpson, James C.

    2005-01-01

    STARS is a multicenter NASA project to determine the feasibility of using space-based assets, such as the Tracking and Data Relay Satellite System (TDRSS) and Global Positioning System (GPS), to increase flexibility (e.g. increase the number of possible launch locations and manage simultaneous operations) and to reduce operational costs by decreasing the need for ground-based range assets and infrastructure. The STARS project includes two major systems: the Range Safety and Range User systems. The latter system uses broadband communications (125 kbps to 500 kbps) for voice, video, and vehicle/payload data. Flight Demonstration #1 revealed the need to increase the data rate of the Range User system. During Flight Demo #2, a Ku-band antenna will generate a higher data rate and will be designed with an embedded pointing algorithm to guarantee that the antenna is pointed directly at TDRS. This algorithm will utilize the onboard position and attitude data to point the antenna to TDRS within a 2-degree full-angle beamwidth. This report investigates how errors in aircraft position and attitude, along with errors in satellite position, propagate into the overall pointing vector.

  17. Floating-Point Units and Algorithms for field-programmable gate arrays

    SciTech Connect

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and

  18. TU-F-18A-04: Use of An Image-Based Material-Decomposition Algorithm for Multi-Energy CT to Determine Basis Material Densities

    SciTech Connect

    Li, Z; Leng, S; Yu, L; McCollough, C

    2014-06-15

    Purpose: Published methods for image-based material decomposition with multi-energy CT images have required the assumption of volume conservation or accurate knowledge of the x-ray spectra and detector response. The purpose of this work was to develop an image-based material-decomposition algorithm that can overcome these limitations. Methods: An image-based material decomposition algorithm was developed that requires only mass conservation (rather than volume conservation). With this method, using multi-energy CT measurements made with n=4 energy bins, the mass density of each basis material and of the mixture can be determined without knowledge of the tube spectra and detector response. A digital phantom containing 12 samples of mixtures from water, calcium, iron, and iodine was used in the simulation (Siemens DRASIM). The calibration was performed by using pure materials at each energy bin. The accuracy of the technique was evaluated in noise-free and noisy data under the assumption of an ideal photon-counting detector. Results: Basis material densities can be estimated accurately by either theoretic calculation or calibration with known pure materials. The calibration approach requires no prior information about the spectra and detector response. Regression analysis of theoretical values versus estimated values results in excellent agreement for both noise-free and noisy data. For the calibration approach, the R-square values are 0.9960 ± 0.0025 and 0.9476 ± 0.0363 for noise-free and noisy data, respectively. Conclusion: From multi-energy CT images with n=4 energy bins, the developed image-based material decomposition method accurately estimated 4 basis material densities (3 without a k-edge and 1 with a k-edge in the range of the simulated energy bins) even without any prior information about spectra and detector response. This method is applicable to mixtures of solutions and dissolvable materials, where volume conservation assumptions do not apply. CHM receives

  19. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  20. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    SciTech Connect

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; Utke, Jean

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  1. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  2. Center Finding Algorithm on slit mask point source for IGRINS (Immersion GRating INfrared Spectrograph)

    NASA Astrophysics Data System (ADS)

    Lee, Hye-In; Pak, Soojong; Lee, Jae-Joon; Mace, Gregory N.; Jaffe, Daniel Thomas

    2017-06-01

    We developed observation control software for the IGRINS (Immersion Grating Infrared Spectrograph) slit-viewing camera module, which points the astronomical target onto the spectroscopy slit and sends tracking feedback to the telescope control system (TCS). The point spread function (PSF) image does not follow a symmetric Gaussian profile. In addition, bright targets are easily saturated and appear as a donut shape. It is not trivial to define and find the center of the asymmetric PSF, especially when most of the stellar PSF falls inside the slit. We devised a center balancing algorithm (CBA) which derives the expected center position along the slit-width axis by referencing the stray flux ratios on the upper and lower sides of the slit. We compared the accuracy of the CBA with that of two-dimensional Gaussian fitting (2DGA) through simulations in order to evaluate the center finding algorithms. These methods were then verified with observational data. In this poster, we present the results of our tests and suggest a new algorithm for centering targets in the slit image of a spectrograph.

  3. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  4. Ferromagnetic Mass Localization in Check Point Configuration Using a Levenberg Marquardt Algorithm

    PubMed Central

    Alimi, Roger; Geron, Nir; Weiss, Eyal; Ram-Cohen, Tsuriel

    2009-01-01

    A detection and tracking algorithm for ferromagnetic objects based on a two-stage Levenberg-Marquardt Algorithm (LMA) is presented. The procedure is applied to localization and magnetic moment estimation of ferromagnetic objects moving in the vicinity of an array of two to four 3-axis magnetometers arranged as a check point configuration. The algorithm's first stage provides estimates of the target trajectory and moment that are further refined using a second iteration where only the position vector is taken as unknown. The whole procedure is fast enough to provide satisfactory results within a few seconds after the target has been detected. Tests were conducted at Soreq NRC, assessing various check point scenarios and targets. The results obtained from this experiment show good localization performance and good tolerance of a “noisy” environment. Small targets can be localized with good accuracy using either a vertical “doorway” configuration of two to four sensors or a ground-level configuration of two to four sensors. The calculated trajectory was not affected by nearby magnetic interference such as moving vehicles or a combat soldier inspecting the gateway. PMID:22291540

  5. Automatic Detection and Extraction Algorithm of Inter-Granular Bright Points

    NASA Astrophysics Data System (ADS)

    Feng, Song; Ji, Kai-fan; Deng, Hui; Wang, Feng; Fu, Xiao-dong

    2012-12-01

    Inter-granular Bright Points (igBPs) are small-scale objects in the solar photosphere which can be seen within dark inter-granular lanes. We present a new algorithm to automatically detect and extract igBPs. A Laplacian and Morphological Dilation (LMD) technique is employed by the algorithm. It involves three basic processing steps: (1) obtaining candidate "seed" regions by the Laplacian; (2) determining the boundary and size of igBPs by morphological dilation; (3) discarding brighter granules by a probability criterion. For validating our algorithm, we used observed samples from the Dutch Open Telescope (DOT), collected on April 12, 2007. They contain 180 high-resolution images, and each has an 85 × 68 arcsec^{2} field of view (FOV). Two important results are obtained: first, the identification rate of igBPs reaches 95% and is higher than previous results; second, the diameter distribution is 220 ± 25 km, which is fully consistent with previously published data. We conclude that the presented algorithm can detect and extract igBPs automatically and effectively.
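
    A rough sketch of the three LMD steps (Laplacian seeds, morphological dilation, rejection of bright granules) is given below; the probability criterion of the paper is replaced here by a simple mean-brightness test, so this illustrates the shape of the pipeline rather than the published detector, and all thresholds and names are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_bright_points(image, seed_sigma=2.0, dilate_iter=2, bright_factor=1.1):
    """LMD-style bright-point detection sketch.

    1. Candidate seed pixels: strong negative Laplacian response (local peaks).
    2. Grow each seed by binary dilation to estimate the bright-point extent.
    3. Discard regions that look like bright granules (here: a simple
       mean-brightness test standing in for the paper's probability criterion)."""
    image = np.asarray(image, dtype=float)
    lap = ndimage.laplace(image)
    seeds = lap < -seed_sigma * lap.std()          # peaks give strong negative Laplacian
    grown = ndimage.binary_dilation(seeds, iterations=dilate_iter)
    labels, n = ndimage.label(grown)
    keep = np.zeros_like(grown, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        if image[region].mean() < bright_factor * image.mean():
            keep |= region                         # keep faint inter-granular features
    return keep
```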

  6. Analytical evaluation of algorithms for point cloud surface reconstruction using shape features

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Verbeek, Fons J.

    2013-10-01

    In computer vision and graphics, reconstruction of a three-dimensional surface from a point cloud is a well-studied research area. As the surface contains information that can be measured, surface reconstruction is potentially important for applications in bioimaging. In the past decade, a number of algorithms for surface reconstruction have been developed. Generally speaking, these algorithms can be separated into two categories: explicit representation and implicit approximation. Most of these algorithms have a sound basis in mathematical theory. However, so far, no analytical comparison of these algorithms has been presented; the straightforward method of evaluation has been visual inspection. Therefore, we design an analytical approach by selecting surface distance, surface area, and surface curvature as three major surface descriptors. We evaluate these features in varied conditions. Our ground truth values are obtained from analytical shapes: the sphere, the ellipsoid, and the oval. Through evaluation we search for a method that can preserve the surface characteristics best and which is robust in the presence of noise. The results obtained from our experiments indicate that the Poisson reconstruction method performs best. This outcome can now be used to produce reliable surface reconstruction of biological models.

  7. A new method for automatically measuring Vickers hardness based on region-point detection algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Yong; Shan, Yuekang; Ji, Yu; Zhang, Shibo

    2008-12-01

    This paper presents a new method, called the Region-Point detection algorithm, for automatically analyzing digital images of Vickers hardness indentations. This method effectively overcomes the error of vertex detection due to curving indentation edges. In the Region-Detection, to obtain four small regions where the four vertexes are located, the Sobel operator is applied to extract the edge points and a thick-line Hough transform is utilized to fit the edge lines; the four regions are then selected according to the four intersection points of the thick lines. In the Point-Detection, to get each vertex's accurate position in every small region, the thick-line Hough transform is used again to select useful edge points and the least squares method is utilized to accurately fit lines. The intersection point of the two lines in every region is a vertex of the indentation. Then the length of the diagonal and the Vickers hardness can be calculated. Experiments show that the measured values agreed well with the standard values.

  8. MO-FG-204-03: Using Edge-Preserving Algorithm for Significantly Improved Image-Domain Material Decomposition in Dual Energy CT

    SciTech Connect

    Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L

    2015-06-15

    Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images. For DECT, the composite image is the average of high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating the NLM filtering and the HYPR-LR framework, HYPR-NLM reduces the boosted material decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved for HYPR-NLM. Conclusion: HYPR

  9. [Determination of Virtual Surgery Mass Point Spring Model Parameters Based on Genetic Algorithms].

    PubMed

    Chen, Ying; Hu, Xuyi; Zhu, Qiguang

    2015-12-01

    Mass point-spring model is one of the commonly used models in virtual surgery. However, its model parameters have no clear physical meaning, and it is hard to set the parameter conveniently. We, therefore, proposed a method based on genetic algorithm to determine the mass-spring model parameters. Computer-aided tomography (CAT) data were used to determine the mass value of the particle, and stiffness and damping coefficient were obtained by genetic algorithm. We used the difference between the reference deformation and virtual deformation as the fitness function to get the approximate optimal solution of the model parameters. Experimental results showed that this method could obtain an approximate optimal solution of spring parameters with lower cost, and could accurately reproduce the effect of the actual deformation model as well.

  10. [Automatic segmentation of clustered breast cancer cells based on modified watershed algorithm and concavity points searching].

    PubMed

    Tong, Zhen; Pu, Lixin; Dong, Fangjie

    2013-08-01

    As a common malignant tumor, breast cancer has seriously affected women's physical and psychological health and even threatened their lives, and its incidence is rising in some parts of the world. As a common pathological assist-diagnosis technique, immunohistochemistry plays an important role in the diagnosis of breast cancer. Usually, pathologists isolate positive cells from the immunohistochemically stained specimen and calculate the ratio of positive cells, which is a core indicator in breast cancer diagnosis. In this paper, we present a new algorithm, based on a modified watershed algorithm and concavity-point searching, to identify the positive cells, segment the clustered cells automatically, and then realize automatic counting. Comparison of our experimental results with those of other methods shows that our method can exactly segment the clustered cells without losing any geometrical cell features and give the exact number of separated cells.

  11. A spectral collocation algorithm for two-point boundary value problem in fiber Raman amplifier equations

    NASA Astrophysics Data System (ADS)

    Tarman, Hakan I.; Berberoğlu, Halil

    2009-04-01

    A novel algorithm implementing Chebyshev spectral collocation (pseudospectral) method in combination with Newton's method is proposed for the nonlinear two-point boundary value problem (BVP) arising in solving propagation equations in fiber Raman amplifier. Moreover, an algorithm to train the known linear solution for use as a starting solution for the Newton iteration is proposed and successfully implemented. The exponential accuracy obtained by the proposed Chebyshev pseudospectral method is demonstrated on a case of the Raman propagation equations with strong nonlinearities. This is in contrast to algebraic accuracy obtained by typical solvers used in the literature. The resolving power and the efficiency of the underlying Chebyshev grid are demonstrated in comparison to a known BVP solver.
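
    The combination of Chebyshev collocation with Newton iteration can be illustrated on a much simpler model problem than the Raman propagation equations, for example the Bratu-type two-point BVP below. The differentiation-matrix construction follows the standard Chebyshev recipe; everything else (the model equation, parameter values and names) is an assumption of this sketch, not the paper's solver.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and collocation nodes x on [-1, 1]
    (standard construction, cf. Trefethen's cheb.m)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def solve_model_bvp(lam=0.5, n=32, tol=1e-12, max_iter=30):
    """Newton iteration on a Chebyshev collocation discretisation of the model
    nonlinear two-point BVP u'' + lam*exp(u) = 0 with u(-1) = u(1) = 0.
    lam is kept small enough that a solution exists and Newton converges
    from the trivial starting guess."""
    D, x = cheb(n)
    D2 = D @ D
    u = np.zeros(n + 1)                    # start from the trivial solution
    interior = np.arange(1, n)             # rows 0 and n are boundary nodes
    for _ in range(max_iter):
        F = D2 @ u + lam * np.exp(u)
        J = D2 + lam * np.diag(np.exp(u))
        du = np.zeros_like(u)
        du[interior] = np.linalg.solve(J[np.ix_(interior, interior)], -F[interior])
        u += du
        if np.linalg.norm(du) < tol:
            break
    return x, u
```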

  12. Bayesian inference of decomposition rate of soil organic carbon using a turnover model and a hybrid method of particle filter and MH algorithm

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Jomura, M.; Yonemura, S.; Iizumi, T.; Shirato, Y.; Yokozawa, M.

    2010-12-01

    The soils of terrestrial ecosystems accumulate large amounts of carbon, and the response of soil organic carbon (SOC) to global warming is of great concern in projections of future carbon cycling. While many theoretical and experimental studies have suggested that the decomposition rates of soil organic matter depend upon physical and chemical conditions, land management, and so on, there is not yet consensus on these dependencies. Most soil carbon turnover models describing SOC dynamics do not account for such differences in decomposition rates. The purpose of this study is to evaluate the decomposition rates of SOC based on a soil carbon turnover model, RothC, which describes SOC dynamics by dividing it into compartments with different decomposition rates. In this study, reflecting that the decomposition rate could change with time due to fertility management in arable land, we used time-dependent Bayesian inference methods to allow time variation of the parameters. Thus, we used a hybrid method of particle filtering and the Metropolis-Hastings (MH) algorithm. We applied this method to datasets obtained from three long-term experiments on time changes in total SOC at five sites across the Japanese mainland. For each dataset, three treatments were examined: no N applied, chemical fertilizer applied, and chemical fertilizer and farmyard manure applied. We estimated the parameters of the temperature- and water-dependent functions as well as the intrinsic decomposition rate for each compartment of RothC and for each treatment. As a result, it was shown that the temperature dependencies tended to decrease with the decomposability of the compartment, i.e., lower temperature dependency for the more recalcitrant compartments of the model. On the other hand, the water dependencies showed no clear relation to the SOC turnover rates of the compartments. Additionally, the intrinsic decomposition rates tended to increase with time, especially in the no-N-applied treatment. This result reflects

  13. Mathematical detection of aortic valve opening (B point) in impedance cardiography: A comparison of three popular algorithms.

    PubMed

    Árbol, Javier Rodríguez; Perakakis, Pandelis; Garrido, Alba; Mata, José Luis; Fernández-Santaella, M Carmen; Vila, Jaime

    2017-03-01

    The preejection period (PEP) is an index of left ventricle contractility widely used in psychophysiological research. Its computation requires detecting the moment when the aortic valve opens, which coincides with the B point in the first derivative of impedance cardiogram (ICG). Although this operation has been traditionally made via visual inspection, several algorithms based on derivative calculations have been developed to enable an automatic performance of the task. However, despite their popularity, data about their empirical validation are not always available. The present study analyzes the performance in the estimation of the aortic valve opening of three popular algorithms, by comparing their performance with the visual detection of the B point made by two independent scorers. Algorithm 1 is based on the first derivative of the ICG, Algorithm 2 on the second derivative, and Algorithm 3 on the third derivative. Algorithm 3 showed the highest accuracy rate (78.77%), followed by Algorithm 1 (24.57%) and Algorithm 2 (13.82%). In the automatic computation of PEP, Algorithm 2 resulted in significantly more missed cycles (48.57%) than Algorithm 1 (6.3%) and Algorithm 3 (3.5%). Algorithm 2 also estimated a significantly lower average PEP (70 ms), compared with the values obtained by Algorithm 1 (119 ms) and Algorithm 3 (113 ms). Our findings indicate that the algorithm based on the third derivative of the ICG performs significantly better. Nevertheless, a visual inspection of the signal proves indispensable, and this article provides a novel visual guide to facilitate the manual detection of the B point. © 2016 Society for Psychophysiological Research.

  14. Statistical analysis of the characteristics of high degree polynomial solving methods used in the five-point algorithm

    NASA Astrophysics Data System (ADS)

    Ovchinkin, Anton; Ershov, Egor

    2017-02-01

    The five-point algorithm is an efficient way of estimating camera motion parameters from five point pairs in two distinct views. However, the need to solve a tenth-degree polynomial arises during the computation. In this paper we investigate the statistical properties of the polynomial solvers used as part of the five-point algorithm. We outline the mathematical background of the problem and briefly study the four main polynomial-solving methods. Finally, we investigate the essential characteristics of the algorithms, such as the distribution of the error value, the failure rate, and the average computation time. To evaluate the solvers we conduct an experiment using synthetic data.

  15. [An Improved Empirical Mode Decomposition Algorithm for Phonocardiogram Signal De-noising and Its Application in S1/S2 Extraction].

    PubMed

    Gong, Jing; Nie, Shengdong; Wang, Yuanjun

    2015-10-01

    In this paper, an improved empirical mode decomposition (EMD) algorithm for phonocardiogram (PCG) signal de-noising is proposed. Based on PCG signal processing theory, the S1/S2 components can be extracted by combining the improved EMD-wavelet algorithm with a Shannon energy envelope algorithm. Firstly, by applying the EMD-wavelet algorithm for pre-processing, the PCG signal was well filtered. Then, the filtered PCG signal was saved and used in the following processing steps. Secondly, the time-domain features, frequency-domain features and energy envelope of each intrinsic mode function (IMF) were computed. Based on the time-frequency-domain features of the PCG's IMF components extracted by the EMD algorithm and the energy envelope of the PCG, the S1/S2 components were pinpointed accurately. Meanwhile, a detection-correction method based on time-domain processing was proposed to amend the detection results. Finally, to test the performance of the proposed algorithm, a series of experiments was conducted. Thirty samples were tested to validate the effectiveness of the new method. Results revealed that the accuracy for recognizing S1/S2 components was as high as 99.75%. Compared with a traditional algorithm, the detection accuracy was increased by 5.56%. The detection results showed that the algorithm described in this paper is effective and accurate. This work will be utilized in further studies on identity recognition.

  16. A parallel point cloud clustering algorithm for subset segmentation and outlier detection

    NASA Astrophysics Data System (ADS)

    Teutsch, Christian; Trostmann, Erik; Berndt, Dirk

    2011-07-01

    We present a fast point cloud clustering technique which is suitable for outlier detection, object segmentation and region labeling for large multi-dimensional data sets. The basis is a minimal data structure similar to a kd-tree which enables us to detect connected subsets very fast. The proposed algorithms utilizing this tree structure are parallelizable which further increases the computation speed for very large data sets. The procedures given are a vital part of the data preprocessing. They improve the input data properties for a more reliable computation of surface measures, polygonal meshes and other visualization techniques. In order to show the effectiveness of our techniques we evaluate sets of point clouds from different 3D scanning devices.
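
    A minimal sketch of the same idea, assuming SciPy is available: points are linked when they fall within a search radius found through a kd-tree, connected components give the subsets, and very small components are flagged as outliers. The tree structure, parallelization and labeling details of the actual algorithm are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_point_cloud(points, radius, min_size=10):
    """Label connected subsets: points closer than `radius` are linked, and
    connected components smaller than `min_size` are flagged as outliers."""
    tree = cKDTree(points)
    pairs = np.array(sorted(tree.query_pairs(radius)))
    n = len(points)
    if pairs.size == 0:
        labels = np.arange(n)
    else:
        adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
        _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)
    is_outlier = sizes[labels] < min_size
    return labels, is_outlier

rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 0.05, (500, 3)),      # dense object
                   rng.normal(1, 0.05, (500, 3)),      # second object
                   rng.uniform(-1, 2, (20, 3))])       # sparse outliers
labels, outliers = cluster_point_cloud(cloud, radius=0.1)
print("clusters:", len(np.unique(labels)), "outlier points:", int(outliers.sum()))
```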

  17. A linear programming based algorithm for determining corresponding point tuples in multiple vascular images

    NASA Astrophysics Data System (ADS)

    Singh, Vikas; Xu, Jinhui; Hoffmann, Kenneth R.; Noël, Peter B.; Walczak, Alan M.

    2006-03-01

    Multi-view imaging is the primary modality for high-spatial-resolution imaging of the vasculature. The 3D vascular structure can be reconstructed if the imaging geometries are determined using known corresponding point-pairs (or k-tuples) in two or more images. Because the accuracy improves with more input corresponding point-pairs, we propose a new technique to automatically determine corresponding point-pairs in multi-view (k-view) images, from 2D vessel image centerlines. We formulate the problem, first as a multi-partite graph-matching problem. Each 2D centerline point is a vertex; each individual graph contains all vessel-points (vertices) in an image. The weight ('cost') of the edges between vertices (in different graphs) is the shortest distance between the points' respective projection-lines. Using this construction, a universe of mappings (k-tuples) is created, each k-tuple having k vertices (one from each image). A k-tuple's weight is the sum of pair-wise 'costs' of its members. Ideally, a set of such mappings is desired that preserves the ordering of points along the vessel and minimizes an appropriate global cost function, such that all vertices (in all graphs) participate in at least one mapping. We formulate this problem as a special case of the well-studied Set-Cover problem with additional constraints. Then, the equivalent linear program is solved, and randomized-rounding techniques are used to yield a feasible set of mappings. Our algorithm is efficient and yields a theoretical quality guarantee. In simulations, the correct matching is achieved in ~98% cases, even with high input error. In clinical data, apparently correct matching is achieved in >90% cases. This method should provide the basis for improving the calculated 3D vasculature from multi-view data-sets.

  18. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.

  19. Comparison of dermatoscopic diagnostic algorithms based on calculation: The ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist and the CASH algorithm in dermatoscopic evaluation of melanocytic lesions.

    PubMed

    Unlu, Ezgi; Akay, Bengu N; Erdem, Cengizhan

    2014-07-01

    Dermatoscopic analysis of melanocytic lesions using the CASH algorithm has rarely been described in the literature. The purpose of this study was to compare the sensitivity, specificity, and diagnostic accuracy rates of the ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist, and the CASH algorithm in the diagnosis and dermatoscopic evaluation of melanocytic lesions on the hairy skin. One hundred and fifteen melanocytic lesions of 115 patients were examined retrospectively using dermatoscopic images and compared with the histopathologic diagnosis. Four dermatoscopic algorithms were carried out for all lesions. The ABCD rule of dermatoscopy showed sensitivity of 91.6%, specificity of 60.4%, and diagnostic accuracy of 66.9%. The seven-point checklist showed sensitivity, specificity, and diagnostic accuracy of 87.5, 65.9, and 70.4%, respectively; the three-point checklist 79.1, 62.6, 66%; and the CASH algorithm 91.6, 64.8, and 70.4%, respectively. To our knowledge, this is the first study that compares the sensitivity, specificity and diagnostic accuracy of the ABCD rule of dermatoscopy, the three-point checklist, the seven-point checklist, and the CASH algorithm for the diagnosis of melanocytic lesions on the hairy skin. In our study, the ABCD rule of dermatoscopy and the CASH algorithm showed the highest sensitivity for the diagnosis of melanoma.
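
    For reference, the three reported measures are simple functions of the 2x2 contingency table. The counts below are hypothetical values chosen only to be consistent with the ABCD-rule percentages quoted above; they are not the study's raw data.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and diagnostic accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)          # melanomas correctly called positive
    specificity = tn / (tn + fp)          # benign lesions correctly called negative
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# hypothetical counts for a 115-lesion series (illustrative only)
print([round(100 * v, 1) for v in diagnostic_metrics(tp=22, fn=2, tn=55, fp=36)])
```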

  20. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Compared with X-ray computed tomography, positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper this uncertainty, arising when detecting pore defect clusters with point reconstruction algorithms, is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (detectability zero) of smaller defects around some other slightly larger defect. In addition, the uncertainty is explored in connection to the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  1. Global Peak Alignment for Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry Using Point Matching Algorithms

    PubMed Central

    Deng, Beichuan; Kim, Seongho; Li, Hengguang; Heath, Elisabeth; Zhang, Xiang

    2016-01-01

    Comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has been used to analyze multiple samples in a metabolomics study. However, due to some uncontrollable experimental conditions, such as differences in temperature or pressure, matrix effects on samples, and stationary phase degradation, there is always a shift of retention times in the two GC columns between samples. In order to correct the retention time shifts in GC×GC-MS, peak alignment is a crucial data analysis step for recognizing the peaks generated by the same metabolite in different samples. Two approaches have been developed for GC×GC-MS data alignment: profile alignment and peak matching alignment. However, these existing alignment methods are all based on a local alignment, with the result that a peak may not be correctly aligned in a dense chromatographic region where many peaks are present in a small area. False alignment will result in false discovery in the downstream statistical analysis. We therefore develop a global comparison based peak alignment method using a point matching algorithm (PMA-PA) for both homogeneous and heterogeneous data. The developed algorithm PMA-PA first extracts feature points (peaks) in the chromatogram and then globally searches for the matching peaks in the consecutive chromatogram by adopting the projection of rigid and non-rigid transformations. PMA-PA is further applied to two real experimental data sets, showing that it is a promising peak alignment algorithm for both homogeneous and heterogeneous data in terms of F1 score, although it uses only peak location information. PMID:27650662

  2. The MATPHOT Algorithm for Digital Point Spread Function CCD Stellar Photometry

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth J.

    Most CCD stellar photometric reduction packages use analytical functions to represent the stellar Point Spread Function (PSF). These PSF-fitting programs generally compute all the major partial derivatives of the observational model by differentiating the volume integral of the PSF over a pixel. Real-world PSFs are frequently very complicated and may not be exactly representable with any combination of analytical functions. Deviations of the real-world PSF from the analytical PSF are then generally stored in a residual matrix. Diffraction rings and spikes can provide a great deal of information about the position of a star, yet information about such common observational effects generally resides only in the residual matrix. Such useful information is generally not used in the PSF-fitting process except for the final step involving the determination of the chi-square goodness-of-fit between the CCD observation and the model where the intensity-scaled residual matrix is added to the mathematical model of the observation just before the goodness-of-fit is computed. I describe some of the key features of my MATPHOT algorithm for digital PSF-fitting CCD stellar photometry where the PSF is represented by a matrix of numbers. The mathematics of determining the partial derivatives of the observational model with respect to the x and y direction vectors is exactly the same with analytical or digital PSFs. The implementation methodology, however, is quite different. In the case of digital PSFs, the partial derivatives can be determined using numerical differentiation techniques on the digital PSFs. I compare the advantages and disadvantages with respect to traditional PSF-fitting algorithms based on analytical representations of the PSF. The MATPHOT algorithm is an ideal candidate for parallel processing. Instead of operating in the traditional single-processor mode of analyzing one pixel at a time, the MATPHOT algorithm can be written to operate on an image-plane basis
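
    The central point, that partial derivatives of a digital PSF can be obtained by numerical differentiation of the PSF matrix itself, can be sketched in a few lines. The Gaussian PSF below is a toy stand-in; MATPHOT's actual fitting machinery is not reproduced.

```python
import numpy as np

# A digital PSF is just a matrix of sampled intensities; here a toy Gaussian.
y, x = np.mgrid[-10:11, -10:11]
psf = np.exp(-(x**2 + y**2) / (2 * 2.5**2))
psf /= psf.sum()                                   # normalize to unit volume

# Partial derivatives with respect to the x and y direction vectors,
# obtained by numerical differentiation of the sampled PSF itself.
dpsf_dy, dpsf_dx = np.gradient(psf)

# In a PSF-fitting step these derivative images would populate the design
# (Jacobian) matrix used to solve for sub-pixel position and flux corrections.
print("max |dP/dx| =", float(np.abs(dpsf_dx).max()))
```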

  3. An efficient dynamic point algorithm for line-based collision detection in real time surgery simulation involving haptics.

    PubMed

    Maciel, Anderson; De, Suvranu

    2008-01-01

    In this paper, we introduce a novel "dynamic point" algorithm for computing the interaction of a line-shaped haptic cursor and polygonal surface models which has a near constant complexity. The algorithm is applied in laparoscopic surgery simulation for interaction of surgical instruments with physics-based deformable organ models.

  4. a Novel Image Registration Algorithm for SAR and Optical Images Based on Virtual Points

    NASA Astrophysics Data System (ADS)

    Ai, C.; Feng, T.; Wang, J.; Zhang, S.

    2013-07-01

    Optical images are rich in spectral information, while SAR instruments can work both day and night and obtain images through fog and clouds. The combination of these two types of complementary images shows great advantages for image interpretation. Image registration is an inevitable and critical problem for applications of multi-source remote sensing images, such as image fusion, pattern recognition and change detection. However, the different characteristics of SAR and optical images, which are due to the difference in imaging mechanism and the speckle noise in SAR images, bring great challenges to multi-source image registration. Therefore, a novel image registration algorithm based on virtual points, derived from corresponding region features, is proposed in this paper. Firstly, image classification methods are adopted to extract closed regions from the SAR and optical images respectively. Secondly, corresponding region features are matched by constructing a cost function with rotation-invariant region descriptors such as area, perimeter, and the lengths of the major and minor axes. Thirdly, virtual points derived from corresponding region features, such as the centroids, endpoints and cross points of the major and minor axes, are used to calculate initial registration parameters. Finally, the parameters are corrected by an iterative calculation, which is terminated when the overlap of corresponding region features reaches its maximum. In the experiment, WorldView-2 and Radarsat-2 images, with 0.5 m and 4.7 m spatial resolution respectively, obtained in August 2010 in Suzhou, are used to test the registration method. It is shown that the multi-source image registration algorithm presented above is effective, and the accuracy of registration is at the pixel level.

  5. A deconvolution-based algorithm for crowded field photometry with unknown point spread function

    NASA Astrophysics Data System (ADS)

    Magain, P.; Courbin, F.; Gillon, M.; Sohy, S.; Letawe, G.; Chantry, V.; Letawe, Y.

    2007-01-01

    A new method is presented for determining the point spread function (PSF) of images that lack bright and isolated stars. It is based on the same principles as the MCS image deconvolution algorithm. It uses the information contained in all stellar images to achieve the double task of reconstructing the PSFs for single or multiple exposures of the same field and to extract the photometry of all point sources in the field of view. The use of the full information available allows us to construct an accurate PSF. The possibility to simultaneously consider several exposures makes it well suited to the measurement of the light curves of blended point sources from data that would be very difficult or even impossible to analyse with traditional PSF fitting techniques. The potential of the method for the analysis of ground-based and space-based data is tested on artificial images and illustrated by several examples, including HST/NICMOS images of a lensed quasar and VLT/ISAAC images of a faint blended Mira star in the halo of the giant elliptical galaxy NGC 5128 (Cen A).

  6. Using SDO and GONG as Calibration References for a New Telescope Pointing Algorithm

    NASA Astrophysics Data System (ADS)

    Staiger, J.

    2013-12-01

    Long duration observations are a basic requirement for most types of helioseismic measurements. Pointing stability and the quality of guiding is thus an important issue with respect to the spatio-temporal analysis of any velocity datasets. Existing pointing tools and correlation-tracking devices will help to remove most of the spatial deviations building up during an observation with time. Yet most ground- and space-based high-resolution solar telescopes may be subject to slow image-plane drift that cannot be compensated for by guiding and which may accumulate to displacements of 10″ or more during a 10-hour recording. We have developed a new pointing model for solar telescopes that may overcome these inherent guiding-limitations. We have tested the model at the Vacuum Tower Telescope (VTT), Tenerife. We are using SDO and GONG full-disk imaging as a calibration reference. We describe the algorithms developed and used during the tests. We present our first results. We describe possible future applications as to be implemented at the VTT. So far, improvements over classical limb-guider systems by a factor of 10 or more seem possible.

  7. A Full-Newton Step Infeasible Interior-Point Algorithm for Linear Programming Based on a Kernel Function

    SciTech Connect

    Liu, Zhongyi; Sun, Wenyu; Tian, Fangbao

    2009-10-15

    This paper proposes an infeasible interior-point algorithm with full-Newton step for linear programming, which is an extension of the work of Roos (SIAM J. Optim. 16(4):1110-1136, 2006). The main iteration of the algorithm consists of a feasibility step and several centrality steps. We introduce a kernel function in the algorithm to induce the feasibility step. For the parameter p ∈ [0, 1], polynomial complexity can be proved, and the result coincides with the best known result for infeasible interior-point methods, that is, O(n log(n/ε)).

  8. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined for representing the dissimilarity between a realization and the TI in GOSIM, which is minimized by a multi-scale EM-like iterative method that contains an E-step and M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in E-step. M-step updates the realization based on the most similar patterns found in E-step and matches the global statistics of TI. During categorical data simulation, k-means clustering is used for transforming the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison results of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain a good simulation quality. The study shows that big iteration numbers at coarser scales increase simulation quality and small iteration numbers at finer scales significantly save simulation time.

  9. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  10. Sunspots and Coronal Bright Points Tracking using a Hybrid Algorithm of PSO and Active Contour Model

    NASA Astrophysics Data System (ADS)

    Dorotovic, I.; Shahamatnia, E.; Lorenc, M.; Rybansky, M.; Ribeiro, R. A.; Fonseca, J. M.

    2014-02-01

    In the last decades there has been a steady increase of high-resolution data, from ground-based and space-borne solar instruments, and also of solar data volume. These huge image archives require efficient automatic image processing software tools capable of detecting and tracking various features in the solar atmosphere. Results of application of such tools are essential for studies of solar activity evolution, climate change understanding and space weather prediction. The follow up of interplanetary and near-Earth phenomena requires, among others, automatic tracking algorithms that can determine where a feature is located, on successive images taken along the period of observation. Full-disc solar images, obtained both with the ground-based solar telescopes and the instruments onboard the satellites, provide essential observational material for solar physicists and space weather researchers for better understanding the Sun, studying the evolution of various features in the solar atmosphere, and also investigating solar differential rotation by tracking such features along time. Here we demonstrate and discuss the suitability of applying a hybrid Particle Swarm Optimization (PSO) algorithm and Active Contour model for tracking and determining the differential rotation of sunspots and coronal bright points (CBPs) on a set of selected solar images. The results obtained confirm that the proposed approach constitutes a promising tool for investigating the evolution of solar activity and also for automating tracking features on massive solar image archives.

  11. Genetic algorithm optimization of point charges in force field development: challenges and insights.

    PubMed

    Ivanov, Maxim V; Talipov, Marat R; Timerghazin, Qadir K

    2015-02-26

    Evolutionary methods, such as genetic algorithms (GAs), provide powerful tools for optimization of the force field parameters, especially in the case of simultaneous fitting of the force field terms against extensive reference data. However, GA fitting of the nonbonded interaction parameters that includes point charges has not been explored in the literature, likely due to numerous difficulties with even a simpler problem of the least-squares fitting of the atomic point charges against a reference molecular electrostatic potential (MEP), which often demonstrates an unusually high variation of the fitted charges on buried atoms. Here, we examine the performance of the GA approach for the least-squares MEP point charge fitting, and show that the GA optimizations suffer from a magnified version of the classical buried atom effect, producing highly scattered yet correlated solutions. This effect can be understood in terms of the linearly independent, natural coordinates of the MEP fitting problem defined by the eigenvectors of the least-squares sum Hessian matrix, which are also equivalent to the eigenvectors of the covariance matrix evaluated for the scattered GA solutions. GAs quickly converge with respect to the high-curvature coordinates defined by the eigenvectors related to the leading terms of the multipole expansion, but have difficulty converging with respect to the low-curvature coordinates that mostly depend on the buried atom charges. The performance of the evolutionary techniques dramatically improves when the point charge optimization is performed using the Hessian or covariance matrix eigenvectors, an approach with a significant potential for the evolutionary optimization of the fixed-charge biomolecular force fields.
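
    A minimal sketch of the underlying least-squares MEP fit and its natural coordinates, assuming a toy four-atom geometry and no total-charge constraint: the design matrix holds the 1/r contributions of unit charges, and the eigenvectors of the least-squares Hessian (equivalently, of the covariance of scattered solutions) expose the ill-conditioned, buried-atom directions.

```python
import numpy as np

rng = np.random.default_rng(2)
atoms = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0],
                  [0.7, 1.2, 0.0], [0.7, 0.4, 0.1]])   # last atom is roughly "buried"
grid = rng.normal(0, 3.0, (2000, 3))
# keep grid points outside a 1.5-unit exclusion radius around every atom
grid = grid[np.min(np.linalg.norm(grid[:, None] - atoms[None], axis=2), axis=1) > 1.5]

# Least-squares design matrix: MEP of unit charges, V_i = sum_j q_j / |r_i - R_j|
A = 1.0 / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)
q_true = np.array([-0.4, 0.3, 0.2, -0.1])
V = A @ q_true + 1e-4 * rng.standard_normal(len(grid))  # synthetic reference MEP

q_fit, *_ = np.linalg.lstsq(A, V, rcond=None)

# Natural coordinates of the fit: eigenvectors of the least-squares Hessian 2*A^T*A.
evals, evecs = np.linalg.eigh(2 * A.T @ A)
print("fitted charges:", np.round(q_fit, 3))
print("Hessian eigenvalue spread (conditioning):", evals[-1] / evals[0])
```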

  12. A fast algorithm for finding point sources in the Fermi data stream: FermiFAST

    NASA Astrophysics Data System (ADS)

    Asvathaman, Asha; Omand, Conor; Barton, Alistair; Heyl, Jeremy S.

    2017-04-01

    We present a new and efficient algorithm for finding point sources in the photon event data stream from the Fermi Gamma-Ray Space Telescope, FermiFAST. The key advantage of FermiFAST is that it constructs a catalogue of potential sources very fast by arranging the photon data in a hierarchical data structure. Using this structure, FermiFAST rapidly finds the photons that could have originated from a potential gamma-ray source. It calculates a likelihood ratio for the contribution of the potential source using the angular distribution of the photons within the region of interest. It can find within a few minutes the most significant half of the Fermi Third Point Source catalogue (3FGL) with nearly 80 per cent purity from the 4 yr of data used to construct the catalogue. If a higher purity sample is desirable, one can achieve a sample that includes the most significant third of the Fermi 3FGL with only 5 per cent of the sources unassociated with Fermi sources. Outside the Galactic plane, all but eight of the 580 FermiFAST detections are associated with 3FGL sources. And of these eight, six yield significant detections of greater than 5σ when a further binned likelihood analysis is performed. This software allows for rapid exploration of the Fermi data, simulation of the source detection to calculate the selection function of various sources and the errors in the obtained parameters of the sources detected.

  13. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    The positive and negative swinging angles and the computation of the time window are analyzed and discussed, and many strategies to improve the efficiency of this model are also put forward. In order to solve the model, we introduce the concept of an activity sequence map. By using the activity sequence map, the activity choice and the start time of the activity can be separated. We also put forward three neighborhood operators to search the result space. The front movement remaining time and the back movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Lastly, an algorithm to solve the problem and model is put forward based on a genetic algorithm. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision decrease operator, a selection operator and a collision elimination operator are designed in the paper. Finally, the scheduling result and a simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is also analyzed for swinging angles of 0, 5, 10, 15 and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.

  14. Joint inversion of T1-T2 spectrum combining the iterative truncated singular value decomposition and the parallel particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui

    2016-01-01

    With more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transversal relaxation time (T2) spectra, a two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) is developed to discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by the existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines iterative truncated singular value decomposition (TSVD) and a parallel particle swarm optimization (PSO) algorithm to obtain fast computational speed and stable solutions. We reorganize the first-kind Fredholm integral equation with two kernels into a nonlinear optimization problem with non-negative constraints, and then solve the ill-conditioned problem by iterative TSVD. The truncating positions of the two diagonal matrices are obtained by the Akaike information criterion (AIC). With the initial values obtained by TSVD, we use a PSO with a parallel structure to obtain the global optimal solutions at a high computational speed. We use synthetic data with different signal-to-noise ratios (SNR) to test the performance of the proposed method. The results show that the new inversion algorithm can achieve favorable solutions for signals with SNR larger than 10, and that the inversion precision increases as the number of relaxation components in the porous rock decreases.
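
    The TSVD stage can be sketched on a toy first-kind Fredholm problem with a relaxation-type kernel (illustrative only; not the paper's 2D T1-T2 kernel, AIC truncation rule or parallel PSO refinement). Keeping only the k largest singular values trades residual fit for stability.

```python
import numpy as np

def tsvd_solve(K, b, k):
    """Truncated SVD solution of the ill-conditioned system K x = b,
    keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]
    return Vt.T @ (inv_s * (U.T @ b))

# toy ill-posed problem: a smooth relaxation kernel acting on a sparse spectrum
n = 80
t = np.linspace(0.01, 2.0, n)
T = np.logspace(-2, 1, n)
K = np.exp(-t[:, None] / T[None, :])            # first-kind Fredholm kernel
x_true = np.zeros(n); x_true[25] = 1.0; x_true[60] = 0.5
b = K @ x_true + 1e-3 * np.random.default_rng(3).standard_normal(n)

for k in (5, 15, 40):
    x = tsvd_solve(K, b, k)
    print("k=%2d  residual=%.3e  solution norm=%.2f"
          % (k, np.linalg.norm(K @ x - b), np.linalg.norm(x)))
```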

  15. Modular polynomial arithmetic in partial fraction decomposition

    NASA Technical Reports Server (NTRS)

    Abdali, S. K.; Caviness, B. F.; Pridor, A.

    1977-01-01

    Algorithms for general partial fraction decomposition are obtained by using modular polynomial arithmetic. An algorithm is presented to compute inverses modulo a power of a polynomial in terms of inverses modulo that polynomial. This algorithm is used to make an improvement in the Kung-Tong partial fraction decomposition algorithm.
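
    As a hedged illustration (using SymPy's general routines rather than the Kung-Tong algorithm itself), the sketch below shows a partial fraction decomposition and one Newton/Hensel-style lifting step that produces an inverse modulo p**2 from an inverse modulo p, which is the idea behind the improvement described above.

```python
from sympy import symbols, apart, invert, rem, expand

x = symbols('x')

# General partial fraction decomposition of a rational function.
print(apart((3*x + 5) / (x * (x + 1)**2), x))

# Inverse of q modulo p**2 obtained from the inverse modulo p by one
# Newton (Hensel-style) lifting step: u1 = u0*(2 - q*u0) mod p**2.
p = x**2 + 1
q = x + 3
u0 = invert(q, p)                       # q*u0 = 1 (mod p)
u1 = rem(expand(u0 * (2 - q * u0)), p**2)
print(rem(expand(q * u1 - 1), p**2))    # 0, so u1 is the inverse mod p**2
```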

  16. Nested Taylor decomposition in multivariate function decomposition

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2014-12-01

    The Fluctuationlessness approximation applied to the remainder term of a Taylor decomposition expressed in integral form has already been used in many articles. Some forms of multi-point Taylor expansion are also considered in some articles. This work is, in a sense, a combination of the two: the Taylor decomposition of a function is taken with the remainder expressed in integral form. The integrand is then decomposed into a Taylor series again, not necessarily around the same point as the first decomposition, and a second remainder is obtained. After the necessary change of variables and conversion of the integration limits to the universal [0,1] interval, a multiple-integration system formed by a multivariate function is obtained. The Fluctuationlessness approximation is then applied to each of these integrals one by one, yielding better results than the single-node Taylor decomposition to which the Fluctuationlessness approximation is applied.

  17. Using second-order calibration method based on trilinear decomposition algorithms coupled with high performance liquid chromatography with diode array detector for determination of quinolones in honey samples.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Shao, Sheng-Zhi; Kang, Chao; Zhao, Juan; Wang, Yu; Zhu, Shao-Hua; Yu, Ru-Qin

    2011-09-15

    A novel strategy that combines the second-order calibration method based on the trilinear decomposition algorithms with high performance liquid chromatography with diode array detector (HPLC-DAD) was developed to mathematically separate the overlapped peaks and to quantify quinolones in honey samples. The HPLC-DAD data were obtained within a short time in isocratic mode. The developed method could be applied to determine 12 quinolones at the same time even in the presence of uncalibrated interfering components in complex background. To access the performance of the proposed strategy for the determination of quinolones in honey samples, the figures of merit were employed. The limits of quantitation for all analytes were within the range 1.2-56.7 μg kg(-1). The work presented in this paper illustrated the suitability and interesting potential of combining second-order calibration method with second-order analytical instrument for multi-residue analysis in honey samples.

  18. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  19. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.
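
    A small, self-contained comparison in the same spirit: NumPy's complete SVD versus SciPy's partial (truncated) SVD on a random dense matrix. The matrix size, k and timings are illustrative and unrelated to the codes surveyed in the report.

```python
import time
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(4)
A = rng.standard_normal((2000, 400))

t0 = time.perf_counter()
U, s, Vt = np.linalg.svd(A, full_matrices=False)     # complete SVD
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
Uk, sk, Vtk = svds(A, k=10)                          # partial SVD (largest 10)
t_part = time.perf_counter() - t0

# svds returns its singular values in ascending order
print("top-10 singular values agree:", np.allclose(np.sort(sk), np.sort(s[:10])))
print("complete: %.3f s, partial (k=10): %.3f s" % (t_full, t_part))
```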

  20. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis

    NASA Astrophysics Data System (ADS)

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-01

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT trajectory. This point is called barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has a zero eigenvector which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate what is the optimal pulling direction and what is the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positively defined function (the so-called σ -function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.

  1. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of a vector sum of the RF field in 32 superconducting cavities. A dedicated, high performance photonics and electronics and software was built. To provide high system availability an appropriate test environment as well as diagnostics was designed. A real time simulation subsystem was designed which is based on dedicated electronics using FPGA technology and robust simulation models implemented in VHDL. The paper presents an architecture of the system framework which allows for easy and flexible conversion of MATLAB language structures directly into FPGA implementable grid of parameterized and simple DSP processors. The decomposition of MATLAB grammar was described as well as optimization process and FPGA implementation issues.

  2. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
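
    In the spirit of the method (though greatly simplified), the optimal reference for a subsection is a least-squares linear combination of the available reference images. The sketch below works on flattened 1-D "subsections" with synthetic speckle and a hypothetical planet pixel; real implementations optimize per subsection and guard against self-subtraction of the companion.

```python
import numpy as np

def optimal_reference(target, refs):
    """Least-squares coefficients c minimizing ||target - c.refs||; refs has
    shape (n_ref, n_pix) and target has shape (n_pix,)."""
    c, *_ = np.linalg.lstsq(refs.T, target, rcond=None)
    return c @ refs

rng = np.random.default_rng(5)
n_pix, n_ref = 500, 30
refs = rng.standard_normal((n_ref, n_pix))
speckle = 0.6 * refs[0] - 0.3 * refs[7] + 0.1 * refs[19]     # quasi-static pattern
planet = np.zeros(n_pix); planet[250] = 0.5                  # faint point source
target = speckle + planet + 0.01 * rng.standard_normal(n_pix)

residual = target - optimal_reference(target, refs)
print("planet pixel after subtraction: %.2f" % residual[250])
```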

  3. A real-time plane-wave decomposition algorithm for characterizing perforated liners damping at multiple mode frequencies.

    PubMed

    Zhao, Dan

    2011-03-01

    Perforated liners with a narrow frequency range are widely used as acoustic dampers to stabilize combustion systems. When the frequency of unstable modes present in the combustion system is within the effective frequency range, the liners can efficiently dissipate acoustic waves. The fraction of the incident waves being absorbed (known as power absorption coefficient) is generally used to characterize the liners damping. To estimate it, plane waves either side of the liners need to be decomposed and characterized. For this, a real-time algorithm is developed. Emphasis is being placed on its ability to online decompose plane waves at multiple mode frequencies. The performance of the algorithm is evaluated first in a numerical model with two unstable modes. It is then experimentally implemented in an acoustically driven pipe system with a lined section attached. The acoustic damping of perforated liners is continuously characterized in real-time. Comparison is then made between the results from the algorithm and those from the short-time fast Fourier transform (FFT)-based techniques, which are typically used in industry. It was found that the real-time algorithm allows faster tracking of the liners damping, even when the forcing frequency was suddenly changed.

  4. Melting point prediction employing k-nearest neighbor algorithms and genetic parameter optimization.

    PubMed

    Nigsch, Florian; Bender, Andreas; van Buuren, Bernd; Tissen, Jos; Nigsch, Eduard; Mitchell, John B O

    2006-01-01

    We have applied the k-nearest neighbor (kNN) modeling technique to the prediction of melting points. A data set of 4119 diverse organic molecules (data set 1) and an additional set of 277 drugs (data set 2) were used to compare performance in different regions of chemical space, and we investigated the influence of the number of nearest neighbors using different types of molecular descriptors. To compute the prediction on the basis of the melting temperatures of the nearest neighbors, we used four different methods (arithmetic and geometric average, inverse distance weighting, and exponential weighting), of which the exponential weighting scheme yielded the best results. We assessed our model via a 25-fold Monte Carlo cross-validation (with approximately 30% of the total data as a test set) and optimized it using a genetic algorithm. Predictions for drugs based on drugs (separate training and test sets each taken from data set 2) were found to be considerably better [root-mean-squared error (RMSE)=46.3 degrees C, r2=0.30] than those based on nondrugs (prediction of data set 2 based on the training set from data set 1, RMSE=50.3 degrees C, r2=0.20). The optimized model yields an average RMSE as low as 46.2 degrees C (r2=0.49) for data set 1, and an average RMSE of 42.2 degrees C (r2=0.42) for data set 2. It is shown that the kNN method inherently introduces a systematic error in melting point prediction. Much of the remaining error can be attributed to the lack of information about interactions in the liquid state, which are not well-captured by molecular descriptors.
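
    A minimal sketch of kNN regression with the exponential weighting scheme that performed best in the study, assuming toy descriptor vectors and toy melting points; descriptor generation, cross-validation and the genetic-algorithm optimization are omitted.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5):
    """kNN regression with exponential distance weighting,
    T_pred = sum_i w_i*T_i / sum_i w_i with w_i = exp(-d_i)."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx])
    return float(np.sum(w * y_train[idx]) / np.sum(w))

rng = np.random.default_rng(6)
X = rng.random((4119, 20))                                 # toy descriptor vectors
y = 50 + 200 * X[:, 0] + 5 * rng.standard_normal(4119)     # toy melting points (deg C)
print("predicted Tm: %.1f C" % knn_predict(X, y, X[0], k=5))
```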

  5. Current review and a simplified "five-point management algorithm" for keratoconus.

    PubMed

    Shetty, Rohit; Kaweri, Luci; Pahuja, Natasha; Nagaraja, Harsha; Wadia, Kareeshma; Jayadev, Chaitra; Nuijts, Rudy; Arora, Vishal

    2015-01-01

    Keratoconus is a slowly progressive, noninflammatory ectatic corneal disease characterized by changes in corneal collagen structure and organization. Though the etiology remains unknown, novel techniques are continuously emerging for the diagnosis and management of the disease. Demographical parameters are known to affect the rate of progression of the disease. Common methods of vision correction for keratoconus range from spectacles and rigid gas-permeable contact lenses to other specialized lenses such as piggyback, Rose-K or Boston scleral lenses. Corneal collagen cross-linking is effective in stabilizing the progression of the disease. Intra-corneal ring segments can improve vision by flattening the cornea in patients with mild to moderate keratoconus. Topography-guided custom ablation treatment betters the quality of vision by correcting the refractive error and improving the contact lens fit. In advanced keratoconus with corneal scarring, lamellar or full thickness penetrating keratoplasty will be the treatment of choice. With such a wide spectrum of alternatives available, it is necessary to choose the best possible treatment option for each patient. Based on a brief review of the literature and our own studies we have designed a five-point management algorithm for the treatment of keratoconus.

  6. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    NASA Technical Reports Server (NTRS)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has risen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles, should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed used to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
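
    For reference, once control points are available a Bezier segment is evaluated directly from the Bernstein form. The sketch below simply evaluates a cubic segment from four hypothetical 3-D control points; it does not implement the paper's control-point generation that enforces vehicle-dynamics limits.

```python
import numpy as np
from scipy.special import comb

def bezier(control_points, n_samples=100):
    """Evaluate a Bezier curve B(t) = sum_i C(n,i) (1-t)^(n-i) t^i P_i
    from its control points (shape: (n+1, dim))."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i] for i in range(n + 1))

# hypothetical waypoint-derived control points for a smooth 3-D segment
ctrl = [(0, 0, 10), (5, 2, 12), (9, 8, 11), (12, 12, 10)]
curve = bezier(ctrl)
print(curve[0], curve[-1])     # the curve starts and ends at the end control points
```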

  7. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    Analysis is presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs are following the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data shows the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives. During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.

  8. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.

    PubMed

    Karthikeyan, M; Raja, T Sree Ranga

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints that make it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, such as the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine them. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.
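
    The polynomial mutation operator inserted into the updating step can be sketched as below (the standard Deb-style formulation with distribution index eta; the exact DHSPM update rules are not reproduced, and the bounds shown are hypothetical).

```python
import numpy as np

def polynomial_mutation(x, lower, upper, eta=20.0, pm=0.1, rng=None):
    """Polynomial mutation: perturb each decision variable with probability pm
    using a polynomial probability distribution of index eta."""
    rng = rng or np.random.default_rng()
    x = np.array(x, dtype=float)
    for i in range(len(x)):
        if rng.random() < pm:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            x[i] = np.clip(x[i] + delta * (upper[i] - lower[i]), lower[i], upper[i])
    return x

# e.g. mutate a candidate dispatch vector for a 3-unit system (toy bounds, MW)
lower, upper = np.array([100.0, 50.0, 100.0]), np.array([600.0, 200.0, 400.0])
print(polynomial_mutation([300.0, 120.0, 250.0], lower, upper, pm=1.0))
```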

  9. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints that make it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, such as the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine them. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710

  10. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second order polynomial formula, which converges faster than the existing MPPT algorithm. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
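
    The quadratic step itself is compact: fit a parabola through the last three (duty cycle, power) samples and jump to its vertex. The PV curve below is a toy stand-in for a real panel; converter control, sampling and the safeguards of the actual system are omitted.

```python
import numpy as np

def quadratic_mppt_step(duties, powers):
    """Fit P(d) = a*d^2 + b*d + c through the last three (duty, power)
    samples and return the vertex d* = -b/(2a) as the next duty cycle."""
    a, b, c = np.polyfit(duties, powers, 2)
    if a >= 0:                      # degenerate fit: fall back to the best sample
        return duties[int(np.argmax(powers))]
    return float(np.clip(-b / (2.0 * a), 0.05, 0.95))

# toy PV power curve with a maximum near d = 0.62 (purely illustrative)
pv_power = lambda d: -120.0 * (d - 0.62) ** 2 + 60.0
d_hist = [0.40, 0.50, 0.60]
p_hist = [pv_power(d) for d in d_hist]
print("next duty cycle: %.3f" % quadratic_mppt_step(d_hist, p_hist))
```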

  11. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Clinical Evaluation.

    PubMed

    Hyodo, Tomoko; Yada, Norihisa; Hori, Masatoshi; Maenishi, Osamu; Lamb, Peter; Sasaki, Kosuke; Onoda, Minori; Kudo, Masatoshi; Mochizuki, Teruhito; Murakami, Takamichi

    2017-04-01

    Purpose To assess the clinical accuracy and reproducibility of liver fat quantification with the multimaterial decomposition (MMD) algorithm, comparing the performance of MMD with that of magnetic resonance (MR) spectroscopy by using liver biopsy as the reference standard. Materials and Methods This prospective study was approved by the institutional ethics committee, and patients provided written informed consent. Thirty-three patients suspected of having hepatic steatosis underwent non-contrast material-enhanced and triple-phase dynamic contrast-enhanced dual-energy computed tomography (CT) (80 and 140 kVp) and single-voxel proton MR spectroscopy within 30 days before liver biopsy. Percentage fat volume fraction (FVF) images were generated by using the MMD algorithm on dual-energy CT data to measure hepatic fat content. FVFs determined by using dual-energy CT and percentage fat fractions (FFs) determined by using MR spectroscopy were compared with histologic steatosis grade (0-3, as defined by the nonalcoholic fatty liver disease activity score system) by using Jonckheere-Terpstra trend tests and were compared with each other by using Bland-Altman analysis. Real non-contrast-enhanced FVFs were compared with triple-phase contrast-enhanced FVFs to determine the reproducibility of MMD by using Bland-Altman analyses. Results Both dual-energy CT FVF and MR spectroscopy FF increased with increasing histologic steatosis grade (trend test, P < .001 for each). The Bland-Altman plot of dual-energy CT FVF and MR spectroscopy FF revealed a proportional bias, as indicated by the significant positive slope of the line regressing the difference on the average (P < .001). The 95% limits of agreement for the differences between real non-contrast-enhanced and contrast-enhanced FVFs were not greater than about 2%. Conclusion The MMD algorithm quantifying hepatic fat in dual-energy CT images is accurate and reproducible across imaging phases. © RSNA, 2017

  12. A generalized version of a two point boundary value problem guidance algorithm

    NASA Astrophysics Data System (ADS)

    Kelly, W. D.

    An iterative guidance algorithm known as the minimum Hamiltonian method is used for performance analyses of launch vehicles in personal-computer trajectory simulations. Convergence in this application is rapid for a minimum-time-of-flight upper-stage solution. Examination of the coded algorithm resulted in a reformulation in which problem-specific portions of the code were separated from portions shared by problems in general. More generalized problem inputs were included so that the algorithm can operate with varied numbers of state variables, terminal constraints, and controls, preparing the basic ascent-guidance algorithm for other applications. In most cases, including entry, the compact form of the algorithm along with its capability to converge rapidly makes it a contender for autonomous guidance aboard a powered flight vehicle.

  13. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis

    NASA Astrophysics Data System (ADS)

    Portes, Leonardo L.; Aguirre, Luis A.

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011), 10.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.
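
    For orientation, the classical (unstructured) varimax rotation can itself be written as a short SVD iteration, as sketched below; the structured rotation (SVR) discussed above additionally constrains blocks of components belonging to the same subsystem, which this sketch does not implement.

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-8):
    """Kaiser's (unstructured) varimax rotation computed with SVD steps."""
    p, k = Phi.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        # gradient of the (ortho)varimax criterion with respect to the rotation
        grad = Phi.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        U, s, Vt = np.linalg.svd(grad)
        R = U @ Vt                      # nearest orthogonal matrix to the gradient
        d_new = np.sum(s)
        if d_new < d_old * (1.0 + tol):
            break
        d_old = d_new
    return Phi @ R, R

rng = np.random.default_rng(7)
eigvecs = np.linalg.qr(rng.standard_normal((60, 4)))[0]   # stand-in for M-SSA eigenvectors
rotated, R = varimax(eigvecs)
print("rotation is orthogonal:", np.allclose(R.T @ R, np.eye(4)))
```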

  14. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis.

    PubMed

    Portes, Leonardo L; Aguirre, Luis A

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011)PLEEE81539-375510.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.

  15. MODIS 250m burned area mapping based on an algorithm using change point detection and Markov random fields.

    NASA Astrophysics Data System (ADS)

    Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca

    2013-04-01

    Area burned in tropical savannas of Brazil was mapped using MODIS-AQUA daily 250m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm2, and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm treats each pixel as a time series and detects changes in the statistical properties of NIR reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments, and post-change NIR reflectance values, are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside of a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned/unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1km active fires and the 500m burned area products, taking into account differences in spatial resolution between the products.
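
    The core of the per-pixel step described above is PELT change-point detection on an NIR reflectance time series. A minimal sketch of that single step is given below, assuming the ruptures Python package; the synthetic series, penalty value and the screening of increasing-reflectance changes are illustrative, and the fire-likelihood scoring and Markov random field stages are not shown.

    ```python
    # Minimal sketch: PELT change-point detection on one pixel's NIR time series.
    import numpy as np
    import ruptures as rpt

    rng = np.random.default_rng(0)
    # synthetic NIR reflectance series with a drop (a hypothetical burn) at day 120
    nir = np.concatenate([rng.normal(0.30, 0.02, 120), rng.normal(0.18, 0.02, 80)])

    algo = rpt.Pelt(model="l2", min_size=5).fit(nir)
    breakpoints = algo.predict(pen=0.05)       # indices ending each segment, e.g. [120, 200]

    # keep only changes where the mean NIR decreases (candidate burn dates)
    segments = np.split(nir, breakpoints[:-1])
    means = [s.mean() for s in segments]
    burn_candidates = [bp for bp, m0, m1 in zip(breakpoints[:-1], means, means[1:]) if m1 < m0]
    print(burn_candidates)
    ```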

  16. Point matching under non-uniform distortions and protein side chain packing based on an efficient maximum clique algorithm.

    PubMed

    Dukka, Bahadur K C; Akutsu, Tatsuya; Tomita, Etsuji; Seki, Tomokazu; Fujiyama, Asao

    2002-01-01

    We developed maximum clique-based algorithms for spot matching in two-dimensional gel electrophoresis images, protein structure alignment and protein side-chain packing, problems that are known to be NP-hard. Algorithms based on direct reductions to the maximum clique problem can find optimal solutions for instances of size (the number of points or residues) up to 50-150 using a standard PC. We also developed pre-processing techniques to reduce the sizes of the graphs. Combined with some heuristics, these techniques allow many realistic instances to be solved approximately.
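
    A common way to reduce point (spot) matching to maximum clique is via an association graph: each candidate correspondence is a node, and edges join correspondences that preserve pairwise distances. The sketch below illustrates that reduction with networkx clique enumeration on toy 2D spots; it is not the authors' optimized solver, and the point sets and tolerance are assumptions.

    ```python
    # Minimal sketch: spot matching as maximum clique in an association graph.
    # Clique enumeration is exponential in general; fine for toy-sized inputs only.
    import itertools
    import numpy as np
    import networkx as nx

    A = np.array([[0, 0], [2, 0], [0, 1], [5, 5]], dtype=float)      # spots in image 1
    B = np.array([[10, 10], [12, 10], [10, 11], [0, 0]], dtype=float)  # spots in image 2
    tol = 0.1

    G = nx.Graph()
    G.add_nodes_from(itertools.product(range(len(A)), range(len(B))))
    for (i, j), (k, l) in itertools.combinations(G.nodes, 2):
        if i != k and j != l:
            # compatible if the pairwise distances agree within the tolerance
            if abs(np.linalg.norm(A[i] - A[k]) - np.linalg.norm(B[j] - B[l])) < tol:
                G.add_edge((i, j), (k, l))

    # largest set of mutually consistent correspondences = maximum clique
    matching = max(nx.find_cliques(G), key=len)
    print(sorted(matching))   # the three consistent pairs: [(0, 0), (1, 1), (2, 2)]
    ```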

  17. A New Control Points Based Geometric Correction Algorithm for Airborne Push Broom Scanner Images Without On-Board Data

    NASA Astrophysics Data System (ADS)

    Strakhov, P.; Badasen, E.; Shurygin, B.; Kondranin, T.

    2016-06-01

    Push broom scanners, such as video spectrometers (also called hyperspectral sensors), are widely used at present. Use of the scanned images requires accurate geometric correction, which becomes complicated when the imaging platform is airborne. This work contains a detailed description of a new algorithm developed for processing such images. The algorithm requires only user-provided control points and is able to correct distortions caused by yaw, flight speed and height changes. It was tested on two series of airborne images and yielded RMS error values on the order of 7 meters (3-6 source image pixels), as compared to 13 meters for polynomial-based correction.
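
    The 13 m figure quoted above refers to a conventional polynomial-based correction. The sketch below shows what such a baseline can look like: a second-order 2D polynomial mapping image coordinates to map coordinates, fitted to control points by least squares. All coordinates are invented for illustration, and this is not the authors' new algorithm.

    ```python
    # Minimal sketch of a polynomial-based geometric correction baseline.
    import numpy as np

    def design_matrix(c, r):
        # second-order bivariate polynomial terms in column/row image coordinates
        return np.column_stack([np.ones_like(c), c, r, c * r, c**2, r**2])

    # control points: image coordinates and corresponding ground coordinates (illustrative)
    img = np.array([[10, 12], [200, 15], [20, 180], [210, 190],
                    [110, 100], [60, 150], [150, 60], [40, 60]], float)
    gnd = np.array([[500010, 4000012], [500200, 4000018], [500022, 4000181], [500212, 4000195],
                    [500111, 4000102], [500063, 4000152], [500152, 4000060], [500041, 4000063]], float)

    A = design_matrix(img[:, 0], img[:, 1])
    coef_x, *_ = np.linalg.lstsq(A, gnd[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, gnd[:, 1], rcond=None)

    # RMS residual at the control points (the paper reports RMS error in metres)
    pred = np.column_stack([A @ coef_x, A @ coef_y])
    rms = np.sqrt(np.mean(np.sum((pred - gnd) ** 2, axis=1)))
    print(rms)
    ```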

  18. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, image processing, and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published results obtained manually by an expert.

  19. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  20. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    PubMed

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  1. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  2. Comparison of point target detection algorithms for space-based scanning infrared sensors

    NASA Astrophysics Data System (ADS)

    Namoos, Omar M.; Schulenburg, Nielson W.

    1995-09-01

    The tracking of resident space objects (RSO) by space-based sensors can lead to engagements that result in stressing backgrounds. These backgrounds, including hard earth, earth limb, and zodiacal, pose various difficulties for signal processing algorithms designed to detect and track the target with a minimum of false alarms. Simulated RSO engagements were generated using the Strategic Scene Generator Model and a sensor model to create focal plane scenes. Using these data, the performance of several detection algorithms has been quantified for space, earth limb and cluttered hard earth backgrounds. These algorithms consist of an adaptive spatial filter, a transversal (matched) filter, and a median variance (nonlinear) filter. Signal-to-clutter statistics of the filtered scenes are compared to those of the unfiltered scene. False alarm and detection results are included. Based on these findings, a processing software architecture design is suggested.

  3. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
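
    As a small illustration of the kind of algorithm compared above, the sketch below prices a European call by quasi-Monte Carlo with a scrambled Sobol sequence (scipy.stats.qmc) and, for comparison, with pseudorandom numbers; all market parameters are invented, and the energy and space measurements that are the point of the paper are of course not reproduced.

    ```python
    # Minimal sketch: European call pricing with a scrambled Sobol sequence vs. pseudorandom.
    import numpy as np
    from scipy.stats import qmc, norm

    S0, K, r, sigma, T, n = 100.0, 105.0, 0.03, 0.2, 1.0, 2**14   # illustrative parameters

    def price(u):
        z = norm.ppf(u)                                   # uniforms -> standard normals
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

    sobol = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()
    pseudo = np.random.default_rng(0).random(n)
    print("Sobol:", price(sobol), "pseudorandom:", price(pseudo))
    ```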

  4. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D.; Gurov, T. (E-mail: emanouil@parallel.bas.bg)

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  5. Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models

    PubMed Central

    Ba, Demba; Temereanca, Simona; Brown, Emery N.

    2014-01-01

    Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble. PMID:24575001

  6. Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models.

    PubMed

    Ba, Demba; Temereanca, Simona; Brown, Emery N

    2014-01-01

    Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.

  7. Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.

    2017-06-01

    A method for calculating off-axis phase-only holograms of three-dimensional (3D) object using accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitude of the object points on the z-axis in hologram plane is calculated using Fresnel diffraction formula, called principal complex amplitudes (PCAs). The complex amplitudes of those off-axis object points of the same depth can be obtained by 2D shifting of PCAs. In order to improve the calculating speed of the PB-FDA, the convolution operation based on fast Fourier transform (FFT) is used to calculate the holograms rather than using the point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in reconstructed images. The optimal recording distance of the PB-FDA is also analyzed to improve the quality of reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by SLM in optical reconstructed images.
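
    The speed-up described above comes from replacing point-by-point spatial shifting with an FFT-based convolution. The sketch below shows the generic FFT-convolution form of Fresnel propagation for a single off-axis point source; it illustrates only that building block, not the authors' PB-FDA with its PCA shifting and recording-distance analysis, and the wavelength, pixel pitch and distance are illustrative.

    ```python
    # Minimal sketch: Fresnel propagation of one point source by FFT-based convolution.
    import numpy as np

    wavelength = 532e-9          # m
    z = 0.05                     # propagation distance, m
    N, dx = 512, 8e-6            # grid size and pixel pitch, m
    k = 2 * np.pi / wavelength

    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)

    # object field: a single off-axis point source (delta) in the object plane
    u0 = np.zeros((N, N), dtype=complex)
    u0[N // 2 + 40, N // 2 - 25] = 1.0

    # Fresnel impulse response h(x, y) and convolution via FFT
    h = np.exp(1j * k * z) / (1j * wavelength * z) * np.exp(1j * k * (X**2 + Y**2) / (2 * z))
    U = np.fft.ifft2(np.fft.fft2(u0) * np.fft.fft2(np.fft.ifftshift(h))) * dx * dx

    phase_hologram = np.angle(U)   # keep only the phase for a phase-only SLM
    ```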

  8. From TLS Point Clouds to 3D Models of Trees: A Comparison of Existing Algorithms for 3D Tree Reconstruction

    NASA Astrophysics Data System (ADS)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees, with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we have acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and limitations of every reconstruction algorithm are highlighted. Overall, very satisfactory results can be reached for 3D reconstructions of tree topology as well as of tree volume.

  9. A uniform energy consumption algorithm for wireless sensor and actuator networks based on dynamic polling point selection.

    PubMed

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2013-12-19

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation.

  10. A Uniform Energy Consumption Algorithm for Wireless Sensor and Actuator Networks Based on Dynamic Polling Point Selection

    PubMed Central

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2014-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  11. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step design-point updating rule. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
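
    To make the importance-sampling stage concrete, the sketch below estimates a failure probability by sampling from a standard normal density re-centred at the design point; the limit-state function, reliability index and sample size are invented, and the subsequent response-surface (Bucher) stage of the proposed algorithm is not shown.

    ```python
    # Minimal sketch: importance sampling centred at the design point for a toy
    # linear limit-state function; the RSM (Bucher) refinement stage is not shown.
    import numpy as np
    from scipy.stats import multivariate_normal as mvn, norm

    def g(x):                                   # failure when g(x) <= 0
        return 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)

    beta = 3.0                                           # reliability index of this limit state
    x_star = beta * np.array([1.0, 1.0]) / np.sqrt(2.0)  # design point (closest failure point)

    rng = np.random.default_rng(1)
    n = 20_000
    f = mvn(mean=np.zeros(2))                    # original standard-normal density
    h = mvn(mean=x_star)                         # sampling density centred at the design point
    x = h.rvs(size=n, random_state=rng)

    w = f.pdf(x) / h.pdf(x)                      # importance weights
    pf = np.mean((g(x) <= 0) * w)
    print("IS estimate:", pf, "exact:", norm.cdf(-beta))
    ```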

  12. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  13. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  14. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  15. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  16. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    SciTech Connect

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill; Chand, Kyle

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
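
    For context, the sketch below shows the basic (batch) proper orthogonal decomposition that such snapshot-selection schemes feed into: an SVD of the snapshot matrix, truncated by an energy criterion. The snapshot data are synthetic, and the paper's single-pass incremental SVD and adaptive error-controlled selection are not reproduced.

    ```python
    # Minimal sketch: POD basis from a snapshot matrix via the batch SVD.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    # synthetic decaying-sine snapshots with a little noise, one column per time
    snapshots = np.column_stack([np.sin(np.pi * x) * np.exp(-0.5 * t)
                                 + 0.01 * rng.standard_normal(x.size)
                                 for t in np.linspace(0.0, 2.0, 40)])

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

    # keep enough modes to capture 99.99% of the snapshot "energy"
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1
    basis = U[:, :r]                       # POD / reduced-order basis
    print("retained modes:", r)

    # reconstruction error of the snapshots in the reduced basis
    recon = basis @ (basis.T @ snapshots)
    print("relative error:", np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots))
    ```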

  17. Design of a Maximum Power Point Tracker with Simulation, Analysis, and Comparison of Algorithms

    DTIC Science & Technology

    2012-12-01

    CHAPTER 1: INTRODUCTION It is a warm summer day. You feel the sun warm your skin and rejuvenate your motivation. The sun generates more... renewable, there has been an upsurge of interest in clean and renewable energy. While more than one option is available to fill that void, the most...solar array. When this algorithm is functioning correctly, it is said to be an MPPT. 1.2 Motivation Clean and renewable energy has greatly increased

  18. Algorithm-based arterial blood sampling recognition increasing safety in point-of-care diagnostics.

    PubMed

    Peter, Jörg; Klingert, Wilfried; Klingert, Kathrin; Thiel, Karolin; Wulff, Daniel; Königsrainer, Alfred; Rosenstiel, Wolfgang; Schenk, Martin

    2017-08-04

    To detect blood withdrawal for patients with arterial blood pressure monitoring to increase patient safety and provide better sample dating. Blood pressure information obtained from a patient monitor was fed as a real-time data stream to an experimental medical framework. This framework was connected to an analytical application which observes changes in systolic, diastolic and mean pressure to determine anomalies in the continuous data stream. Detection was based on an increased mean blood pressure caused by the closing of the withdrawal three-way tap and an absence of systolic and diastolic measurements during this manipulation. For evaluation of the proposed algorithm, measured data from animal studies in healthy pigs were used. Using this novel approach for processing real-time measurement data of arterial pressure monitoring, the exact time of blood withdrawal could be successfully detected retrospectively and in real-time. The algorithm was able to detect 422 of 434 (97%) blood withdrawals for blood gas analysis in the retrospective analysis of 7 study trials. Additionally, 64 sampling events for other procedures like laboratory and activated clotting time analyses were detected. The proposed algorithm achieved a sensitivity of 0.97, a precision of 0.96 and an F1 score of 0.97. Arterial blood pressure monitoring data can be used to perform an accurate identification of individual blood samplings in order to reduce sample mix-ups and thereby increase patient safety.

  19. Algorithm-based arterial blood sampling recognition increasing safety in point-of-care diagnostics

    PubMed Central

    Peter, Jörg; Klingert, Wilfried; Klingert, Kathrin; Thiel, Karolin; Wulff, Daniel; Königsrainer, Alfred; Rosenstiel, Wolfgang; Schenk, Martin

    2017-01-01

    AIM To detect blood withdrawal for patients with arterial blood pressure monitoring to increase patient safety and provide better sample dating. METHODS Blood pressure information obtained from a patient monitor was fed as a real-time data stream to an experimental medical framework. This framework was connected to an analytical application which observes changes in systolic, diastolic and mean pressure to determine anomalies in the continuous data stream. Detection was based on an increased mean blood pressure caused by the closing of the withdrawal three-way tap and an absence of systolic and diastolic measurements during this manipulation. For evaluation of the proposed algorithm, measured data from animal studies in healthy pigs were used. RESULTS Using this novel approach for processing real-time measurement data of arterial pressure monitoring, the exact time of blood withdrawal could be successfully detected retrospectively and in real-time. The algorithm was able to detect 422 of 434 (97%) blood withdrawals for blood gas analysis in the retrospective analysis of 7 study trials. Additionally, 64 sampling events for other procedures like laboratory and activated clotting time analyses were detected. The proposed algorithm achieved a sensitivity of 0.97, a precision of 0.96 and an F1 score of 0.97. CONCLUSION Arterial blood pressure monitoring data can be used to perform an accurate identification of individual blood samplings in order to reduce sample mix-ups and thereby increase patient safety. PMID:28828302

  20. Evaluation of glioblastomas and lymphomas with whole-brain CT perfusion: Comparison between a delay-invariant singular-value decomposition algorithm and a Patlak plot.

    PubMed

    Hiwatashi, Akio; Togao, Osamu; Yamashita, Koji; Kikuchi, Kazufumi; Yoshimoto, Koji; Mizoguchi, Masahiro; Suzuki, Satoshi O; Yoshiura, Takashi; Honda, Hiroshi

    2016-07-01

    Correction of contrast leakage is recommended in the perfusion analysis of enhancing lesions. The purpose of this study was to assess the diagnostic performance of computed tomography perfusion (CTP) with a delay-invariant singular-value decomposition algorithm (SVD+) and a Patlak plot in differentiating glioblastomas from lymphomas. This prospective study included 17 adult patients (12 men and 5 women) with pathologically proven glioblastomas (n=10) and lymphomas (n=7). CTP data were analyzed using SVD+ and a Patlak plot. The relative tumor blood volume and flow compared to contralateral normal-appearing gray matter (rCBV and rCBF derived from SVD+, and rBV and rFlow derived from the Patlak plot) were used to differentiate between glioblastomas and lymphomas. The Mann-Whitney U test and receiver operating characteristic (ROC) analyses were used for statistical analysis. Glioblastomas showed significantly higher rFlow (3.05±0.49, mean±standard deviation) than lymphomas (1.56±0.53; P<0.05). There were no statistically significant differences between glioblastomas and lymphomas in rBV (2.52±1.57 vs. 1.03±0.51; P>0.05), rCBF (1.38±0.41 vs. 1.29±0.47; P>0.05), or rCBV (1.78±0.47 vs. 1.87±0.66; P>0.05). ROC analysis showed the best diagnostic performance with rFlow (Az=0.871), followed by rBV (Az=0.771), rCBF (Az=0.614), and rCBV (Az=0.529). CTP analysis with a Patlak plot was helpful in differentiating between glioblastomas and lymphomas, but CTP analysis with SVD+ was not.
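
    For readers unfamiliar with the Patlak approach mentioned above, the sketch below shows the underlying linear fit on synthetic curves: the tissue-to-arterial concentration ratio is regressed against the normalised time-integral of the arterial input, with the slope giving the transfer (leakage) constant and the intercept the blood-volume term. The curves and parameter values are invented, and the SVD+ deconvolution branch of the study is not shown.

    ```python
    # Minimal sketch: Patlak-plot linear fit on synthetic concentration curves.
    import numpy as np

    t = np.linspace(0.0, 60.0, 61)                       # s
    ca = 6.0 * (t / 10.0) * np.exp(1 - t / 10.0)         # synthetic arterial input curve
    Ktrans, vp = 0.02, 0.05                              # assumed "true" values

    dt = t[1] - t[0]
    integral_ca = np.concatenate([[0.0], np.cumsum((ca[1:] + ca[:-1]) / 2.0) * dt])  # trapezoidal integral
    ct = vp * ca + Ktrans * integral_ca                  # tissue curve generated by the Patlak model

    mask = t > 20.0                                      # use the quasi-steady tail only
    x = integral_ca[mask] / ca[mask]
    y = ct[mask] / ca[mask]
    slope, intercept = np.polyfit(x, y, 1)
    print("fitted Ktrans ~", round(slope, 4), "fitted vp ~", round(intercept, 4))
    ```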

  1. Algorithm XXX: functions to support the IEEE standard for binary floating-point arithmetic.

    SciTech Connect

    Cody, W. J.; Mathematics and Computer Science

    1993-12-01

    This paper describes C programs for the support functions copysign(x,y), logb(x), scalb(x,n), nextafter(x,y), finite(x), and isnan(x) recommended in the Appendix to the IEEE Standard for Binary Floating-Point Arithmetic. In the case of logb, the modified definition given in the later IEEE Standard for Radix-Independent Floating-Point Arithmetic is followed. These programs should run without modification on most systems conforming to the binary standard.
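
    The functions listed above have close counterparts in standard Python and NumPy, which can serve as a quick reference when porting code that relies on the C versions; note that there is no direct logb in the math module, so the sketch below recovers it from frexp (their exponents differ by one).

    ```python
    # Rough Python/NumPy counterparts of the IEEE-754 support functions described
    # in the paper (the originals are C programs).
    import math
    import numpy as np

    x, y, n = 6.5, -2.0, 3

    print(math.copysign(x, y))        # copysign(x, y) -> -6.5
    print(math.ldexp(x, n))           # scalb(x, n) = x * 2**n -> 52.0
    print(np.nextafter(x, y))         # next representable value from x toward y
    print(math.isfinite(x))           # finite(x)
    print(math.isnan(float("nan")))   # isnan(x)

    mantissa, exponent = math.frexp(x)   # x = mantissa * 2**exponent, 0.5 <= |mantissa| < 1
    print(exponent - 1)                  # IEEE logb(x): unbiased exponent, here 2
    ```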

  2. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.

  3. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps, and it is consequently time-consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects, such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  4. Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points

    PubMed Central

    Zhang, Zimiao; Zhang, Shihai; Li, Qiu

    2016-01-01

    Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve the object pose. The analytical solutions generally take less computation time. However, the analytical solutions are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates. However, the non-linear optimization needs a good initial estimate of the true solution, otherwise these methods are more time consuming than analytical solutions. Moreover, the image processing error grows rapidly as the measurement range increases, which leads to pose estimation errors. All the reasons mentioned above will cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. Firstly, the coordinates of feature points are determined according to the linear constraints formed by the four points. The initial coordinates of feature points acquired through the linear method are then optimized through an iterative method. Finally, the coordinate system of object motion is established and a method is introduced to solve the object pose. The growing image processing error causes pose estimation errors as the measurement range increases. Through the coordinate system, the pose estimation errors could be decreased. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338

  5. Detection of uterine MMG contractions using a multiple change point estimator and the K-means cluster algorithm.

    PubMed

    La Rosa, Patricio S; Nehorai, Arye; Eswaran, Hari; Lowery, Curtis L; Preissl, Hubert

    2008-02-01

    We propose a single channel two-stage time-segment discriminator of uterine magnetomyogram (MMG) contractions during pregnancy. We assume that the preprocessed signals are piecewise stationary having distribution in a common family with a fixed number of parameters. Therefore, at the first stage, we propose a model-based segmentation procedure, which detects multiple change-points in the parameters of a piecewise constant time-varying autoregressive model using a robust formulation of the Schwarz information criterion (SIC) and a binary search approach. In particular, we propose a test statistic that depends on the SIC, derive its asymptotic distribution, and obtain closed-form optimal detection thresholds in the sense of the Neyman-Pearson criterion; therefore, we control the probability of false alarm and maximize the probability of change-point detection in each stage of the binary search algorithm. We compute and evaluate the relative energy variation [root mean squares (RMS)] and the dominant frequency component [first order zero crossing (FOZC)] in discriminating between time segments with and without contractions. The former consistently detects a time segment with contractions. Thus, at the second stage, we apply a nonsupervised K-means cluster algorithm to classify the detected time segments using the RMS values. We apply our detection algorithm to real MMG records obtained from ten patients admitted to the hospital for contractions with gestational ages between 31 and 40 weeks. We evaluate the performance of our detection algorithm in computing the detection and false alarm rate, respectively, using as a reference the patients' feedback. We also analyze the fusion of the decision signals from all the sensors as in the parallel distributed detection approach.
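
    The classification stage described above is simply an unsupervised two-class K-means on the per-segment RMS values. A minimal sketch of that stage, on synthetic RMS values rather than MMG data, is given below; the change-point detection and SIC-based thresholds of the first stage are not reproduced.

    ```python
    # Minimal sketch: K-means (k=2) on per-segment RMS values to flag contractions.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    rms = np.concatenate([rng.normal(0.2, 0.05, 30),    # quiescent segments
                          rng.normal(1.0, 0.20, 10)])   # segments with contractions

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rms.reshape(-1, 1))
    contraction_label = int(np.argmax(km.cluster_centers_.ravel()))  # higher-RMS cluster
    is_contraction = km.labels_ == contraction_label
    print(is_contraction.sum(), "of", rms.size, "segments flagged as contractions")
    ```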

  6. Parallelization of PANDA discrete ordinates code using spatial decomposition

    SciTech Connect

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete Ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal plane ordered sweep algorithm. The parallel efficiency of the method is improved by directions and octants pipelining. The implementation of the algorithm is straightforward using MPI blocking point to point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)

  7. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.

  8. An Evaluation of Vegetation Filtering Algorithms for Improved Snow Depth Estimation from Point Cloud Observations in Mountain Environments

    NASA Astrophysics Data System (ADS)

    Vanderjagt, B. J.; Durand, M. T.; Lucieer, A.; Wallace, L.

    2014-12-01

    High-resolution snow depth measurements are possible through bare-earth (BE) differencing of point cloud datasets obtained using LiDAR and photogrammetry during snow-free and snow-covered conditions. The accuracy and resolution of these snow depth measurements are desirable in mountain environments in which ground measurements are dangerous and difficult to perform, and other remote sensing techniques are often characterized by large errors and uncertainties due to variable topography, vegetation, and snow properties. BE ground filtering algorithms make different assumptions about ground characteristics to differentiate between ground and non-ground features. Because of this, ground surfaces may have unique characteristics that confound ground filters depending on the location and terrain conditions. These include low-lying shrubs (<1 m), areas with high topographic relief, and areas with high surface roughness. We evaluate several different algorithms, including lowest point, kriging, and more sophisticated splining techniques such as the Multiscale Curvature Classification (MCC), to resolve snow depths. Understanding how these factors affect BE surface models and thus snow depth measurements is a valuable contribution towards improving the processing protocols associated with these relatively new snow observation techniques. We test the different BE filtering algorithms using LiDAR and photogrammetric measurements taken from an Unmanned Aerial Vehicle (UAV) in Southwest Tasmania, Australia during the winter and spring of 2013. The study area is characterized by sloping, uneven terrain and different types of vegetation, including eucalyptus and conifer trees, as well as dense shrubs varying in height from 0.3 to 1.5 meters. Initial snow depth measurements using the unfiltered point cloud measurements are characterized by large errors (~20-90 cm) due to the dense vegetation. Using filtering techniques instead of raw differencing improves the estimation of snow depth in

  9. Convergence Behaviour of Some Iteration Procedures for Exterior Point Method of Centres Algorithms,

    DTIC Science & Technology

    1979-02-01

    in Reference 4, while Staha and Himmelblau [9] reported very favourably on their application of Newton's method to the exterior point method of centres...34. Acta Polytechnica Scandinavica, Trondheim, 13 (1966). 9. Staha, R. L., and Himmelblau, D. M., "Evaluation of Constrained Nonlinear Programming

  10. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  11. The generalized triangular decomposition

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Hager, William W.; Li, Jian

    2008-06-01

    Given a complex matrix $\mathbf{H}$, we consider the decomposition $\mathbf{H} = \mathbf{Q}\mathbf{R}\mathbf{P}^*$, where $\mathbf{R}$ is upper triangular and $\mathbf{Q}$ and $\mathbf{P}$ have orthonormal columns. Special instances of this decomposition include the singular value decomposition (SVD) and the Schur decomposition, where $\mathbf{R}$ is an upper triangular matrix with the eigenvalues of $\mathbf{H}$ on the diagonal. We show that any diagonal for $\mathbf{R}$ can be achieved that satisfies Weyl's multiplicative majorization conditions: $\prod_{i=1}^{k} |r_i| \le \prod_{i=1}^{k} \sigma_i$ for $1 \le k < K$, and $\prod_{i=1}^{K} |r_i| = \prod_{i=1}^{K} \sigma_i$, where $K$ is the rank of $\mathbf{H}$, $\sigma_i$ is the $i$-th largest singular value of $\mathbf{H}$, and $r_i$ is the $i$-th largest (in magnitude) diagonal element of $\mathbf{R}$. Given a vector $\mathbf{r}$ which satisfies Weyl's conditions, we call the decomposition $\mathbf{H} = \mathbf{Q}\mathbf{R}\mathbf{P}^*$, where $\mathbf{R}$ is upper triangular with prescribed diagonal $\mathbf{r}$, the generalized triangular decomposition (GTD). A direct (nonrecursive) algorithm is developed for computing the GTD. This algorithm starts with the SVD and applies a series of permutations and Givens rotations to obtain the GTD. The numerical stability of the GTD update step is established. The GTD can be used to optimize the power utilization of a communication channel, while taking into account quality of service requirements for subchannels. Another application of the GTD is to inverse eigenvalue problems where the goal is to construct matrices with prescribed eigenvalues and singular values.

  12. A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs

  13. An Automatic Algorithm for Minimizing Anomalies and Discrepancies in Point Clouds Acquired by Laser Scanning Technique

    NASA Astrophysics Data System (ADS)

    Bordin, Fabiane; Gonzaga, Luiz, Jr.; Galhardo Muller, Fabricio; Veronez, Mauricio Roberto; Scaioni, Marco

    2016-06-01

    The laser scanning technique, from airborne and land platforms, has been widely used for collecting 3D data in large volumes in the field of geosciences. Furthermore, the laser pulse intensity has been widely exploited to analyze and classify rocks and biomass, and for carbon storage estimation. In general, a laser beam is emitted, collides with targets, and only a percentage of the emitted beam returns, according to the intrinsic properties of each target. Also, due to interference and partial collisions, the laser return intensity can be incorrect, introducing serious errors in classification and/or estimation processes. To address this problem and avoid misclassification and estimation errors, we have proposed a new algorithm to correct the return intensity for laser scanning sensors. Different case studies have been used to evaluate and validate the proposed approach.

  14. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens under launch conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  15. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; ...

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. As a result, this is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
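
    To illustrate the basic idea only (this is not the ARM implementation), a Gaussian naive Bayes classifier already returns per-class posterior probabilities from a handful of features; the feature columns below are hypothetical stand-ins for the radar Doppler-spectrum moments and lidar observations discussed above.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training table: rows are radar/lidar range gates with known phase
# (e.g. from a field campaign); columns are features such as reflectivity,
# Doppler spectrum width, spectral skewness, and lidar backscatter.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))
y_train = rng.integers(0, 4, size=500)    # 0=ice, 1=liquid, 2=mixed, 3=snow

clf = GaussianNB().fit(X_train, y_train)

X_new = rng.normal(size=(5, 4))
probs = clf.predict_proba(X_new)          # per-gate probability of each phase class
print(probs.round(2))                     # uncertainty information, not just a hard label
```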

  16. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  17. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  18. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Experimental Validation.

    PubMed

    Hyodo, Tomoko; Hori, Masatoshi; Lamb, Peter; Sasaki, Kosuke; Wakayama, Tetsuya; Chiba, Yasutaka; Mochizuki, Teruhito; Murakami, Takamichi

    2017-02-01

    Purpose To assess the ability of fast-kilovolt-peak switching dual-energy computed tomography (CT) by using the multimaterial decomposition (MMD) algorithm to quantify liver fat. Materials and Methods Fifteen syringes that contained various proportions of swine liver obtained from an abattoir, lard in food products, and iron (saccharated ferric oxide) were prepared. Approval of this study by the animal care and use committee was not required. Solid cylindrical phantoms that consisted of a polyurethane epoxy resin 20 and 30 cm in diameter that held the syringes were scanned with dual- and single-energy 64-section multidetector CT. CT attenuation on single-energy CT images (in Hounsfield units) and MMD-derived fat volume fraction (FVF; dual-energy CT FVF) were obtained for each syringe, as were magnetic resonance (MR) spectroscopy measurements by using a 1.5-T imager (fat fraction [FF] of MR spectroscopy). Reference values of FVF (FVFref) were determined by using the Soxhlet method. Iron concentrations were determined by inductively coupled plasma optical emission spectroscopy and divided into three ranges (0 mg per 100 g, 48.1-55.9 mg per 100 g, and 92.6-103.0 mg per 100 g). Statistical analysis included Spearman rank correlation and analysis of covariance. Results Both dual-energy CT FVF (ρ = 0.97; P < .001) and CT attenuation on single-energy CT images (ρ = -0.97; P < .001) correlated significantly with FVFref for phantoms without iron. Phantom size had a significant effect on dual-energy CT FVF after controlling for FVFref (P < .001). The regression slopes for CT attenuation on single-energy CT images in 20- and 30-cm-diameter phantoms differed significantly (P = .015). In sections with higher iron concentrations, the linear coefficients of dual-energy CT FVF decreased and those of MR spectroscopy FF increased (P < .001). Conclusion Dual-energy CT FVF allows for direct quantification of fat content in units of volume percent. Dual-energy CT FVF was larger in 30

  19. ParaStream: A parallel streaming Delaunay triangulation algorithm for LiDAR points on multicore architectures

    NASA Astrophysics Data System (ADS)

    Wu, Huayi; Guan, Xuefeng; Gong, Jianya

    2011-09-01

    This paper presents a robust parallel Delaunay triangulation algorithm called ParaStream for processing billions of points from nonoverlapped block LiDAR files. The algorithm targets ubiquitous multicore architectures. ParaStream integrates streaming computation with a traditional divide-and-conquer scheme, in which additional erase steps are implemented to reduce the runtime memory footprint. Furthermore, a kd-tree-based dynamic schedule strategy is also proposed to distribute triangulation and merging work onto the processor cores for improved load balance. ParaStream exploits most of the computing power of multicore platforms through parallel computing, demonstrating qualities of high data throughput as well as a low memory footprint. Experiments on a 2-Way-Quad-Core Intel Xeon platform show that ParaStream can triangulate approximately one billion LiDAR points (16.4 GB) in about 16 min with only 600 MB physical memory. The total speedup (including I/O time) is about 6.62 with 8 concurrent threads.

  20. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquake are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.

  1. Hypereosinophilic Syndrome and Clonal Eosinophilia: Point-of-Care Diagnostic Algorithm and Treatment Update

    PubMed Central

    Tefferi, Ayalew; Gotlib, Jason; Pardanani, Animesh

    2010-01-01

    Acquired eosinophilia is operationally categorized into secondary, clonal, and idiopathic types. Causes of secondary eosinophilia include parasite infections, allergic or vasculitis conditions, drugs, and lymphoma. Clonal eosinophilia is distinguished from idiopathic eosinophilia by the presence of histologic, cytogenetic, or molecular evidence of an underlying myeloid malignancy. The World Health Organization classification system for hematologic malignancies recognizes 2 distinct subcategories of clonal eosinophilia: chronic eosinophilic leukemia, not otherwise specified and myeloid/lymphoid neoplasms with eosinophilia and mutations involving platelet-derived growth factor receptor α/β or fibroblast growth factor receptor 1. Clonal eosinophilia might also accompany other World Health Organization-defined myeloid malignancies, including chronic myelogenous leukemia, myelodysplastic syndromes, chronic myelomonocytic leukemia, and systemic mastocytosis. Hypereosinophilic syndrome, a subcategory of idiopathic eosinophilia, is defined by the presence of a peripheral blood eosinophil count of 1.5 × 10^9/L or greater for at least 6 months (a shorter duration is acceptable in the presence of symptoms that require eosinophil-lowering therapy), exclusion of both secondary and clonal eosinophilia, evidence of organ involvement, and absence of phenotypically abnormal and/or clonal T lymphocytes. The presence of the latter defines lymphocytic variant hypereosinophilia, which is best classified under secondary eosinophilia. In the current review, we provide a simplified algorithm for distinguishing the various causes of clonal and idiopathic eosinophilia and discuss current therapy, including new drugs (imatinib mesylate, alemtuzumab, and mepolizumab). PMID:20053713

  2. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, there is a need for a small, high-accuracy satellite attitude determination system, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor combined with a magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, high-accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  3. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, there is a need for a small, high-accuracy satellite attitude determination system, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor combined with a magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, high-accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels.
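
    The core of the approach is intensity-weighted centroiding refined by mean-shift iterations. The sketch below shows only a basic single-spot mean-shift centroid in NumPy, not the multi-aperture FMMS algorithm itself; the synthetic image and window size are illustrative assumptions.

```python
import numpy as np

def mean_shift_centroid(img, start, half_win=5, iters=20, tol=1e-4):
    """Refine a single sun-spot centroid by repeatedly moving a small window to
    the intensity-weighted mean of the pixels it covers (a basic mean-shift
    step; the multi-point FMMS algorithm adds handling of the whole spot array
    and further speed-ups)."""
    y, x = map(float, start)
    for _ in range(iters):
        y0 = max(int(round(y)) - half_win, 0)
        y1 = min(int(round(y)) + half_win + 1, img.shape[0])
        x0 = max(int(round(x)) - half_win, 0)
        x1 = min(int(round(x)) + half_win + 1, img.shape[1])
        patch = img[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        w = patch.sum()
        if w == 0:
            break
        new_y, new_x = (ys * patch).sum() / w, (xs * patch).sum() / w
        done = np.hypot(new_y - y, new_x - x) < tol
        y, x = new_y, new_x
        if done:
            break
    return y, x

# Synthetic spot centred at (20.3, 31.7) on a 64x64 detector.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 20.3) ** 2 + (xx - 31.7) ** 2) / 4.0)
print(mean_shift_centroid(img, start=(22, 30)))   # close to (20.3, 31.7)
```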

  4. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    PubMed

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
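
    The reintegration and classification steps can be sketched as follows (a hedged illustration, not the study's processing code; the cut-point values and synthetic counts are illustrative only).

```python
import numpy as np

def reintegrate(counts_1s, epoch_s):
    """Sum 1-second accelerometer counts into epochs of `epoch_s` seconds."""
    n = (len(counts_1s) // epoch_s) * epoch_s
    return counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)

def classify(epoch_counts, epoch_s, cuts_cpm):
    """Label each epoch (0=SB, 1=LPA, 2=MPA, 3=VPA) by scaling the counts to
    counts-per-minute and comparing against cut-points given in counts/min."""
    return np.digitize(epoch_counts * (60 / epoch_s), cuts_cpm)

# Hypothetical 1-second counts: intermittent bursts of activity over 10 minutes.
rng = np.random.default_rng(0)
counts_1s = np.where(rng.random(600) < 0.3, 0, rng.poisson(40, 600))
cuts = [100, 2296, 4012]            # illustrative cut-points in counts/min
for epoch_s in (1, 15, 60):
    labels = classify(reintegrate(counts_1s, epoch_s), epoch_s, cuts)
    print(epoch_s, np.round(np.bincount(labels, minlength=4) / len(labels), 2))
# The SB/LPA/MPA shares shift markedly with epoch length, as the study reports.
```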

  5. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents with respect to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  6. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  7. DHARMA - Discriminant hyperplane abstracting residuals minimization algorithm for separating clusters with fuzzy boundaries. [data points pattern recognition technique

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    Learning of discriminant hyperplanes in imperfectly supervised or unsupervised training sample sets with unreliably labeled samples along the fuzzy joint boundaries between sample clusters is discussed, with the discriminant hyperplane designed to be a least-squares fit to the unreliably labeled data points. (Samples along the fuzzy boundary jump back and forth from one cluster to the other in recursive cluster stabilization and are considered unreliably labeled.) Minimization of the distances of these unreliably labeled samples from the hyperplanes does not sacrifice the ability to discriminate between classes represented by reliably labeled subsets of samples. An equivalent unconstrained linear inequality problem is formulated and algorithms for its solution are indicated. Landsat earth sensing data were used in confirming the validity and computational feasibility of the approach, which should be useful in deriving discriminant hyperplanes separating clusters with fuzzy boundaries, given supervised training sample sets with unreliably labeled boundary samples.
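
    The basic building block, fitting a hyperplane to labeled samples by least squares, can be sketched as below; this is only the least-squares step, not the full DHARMA residual-minimization scheme for unreliably labeled boundary samples.

```python
import numpy as np

def ls_hyperplane(X, labels):
    """Fit a discriminant hyperplane w.x + b = 0 by least squares, regressing
    the class labels (+1/-1) on the augmented sample vectors."""
    A = np.hstack([X, np.ones((len(X), 1))])          # append a bias column
    coef, *_ = np.linalg.lstsq(A, labels, rcond=None)
    return coef[:-1], coef[-1]                        # (w, b)

# Two synthetic clusters with a fuzzy overlap region.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, (100, 2)),
               rng.normal([3, 3], 1.0, (100, 2))])
y = np.r_[-np.ones(100), np.ones(100)]
w, b = ls_hyperplane(X, y)
pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```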

  8. Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    NASA Technical Reports Server (NTRS)

    Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.

    2017-01-01

    Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.

  9. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or different morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. In view of these practical problems, the objective is to extract tree crown projection area and crown volume automatically by computer program. This paper proposes an automatic non-contact measurement method based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a plane-scattered-data-point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments were carried out on 22 common tree species of Beijing, China. The results show that the correlation coefficient of the crown projection area between A(V) calculated by the new method and the conventional method A4 reaches 0.964 (p<0.01); and the correlation coefficient of tree crown volume between V(VC) derived from the new method and V(C) obtained from the formula of a regular body is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) at the rate of 8.03%, and the average of A4 is larger than that of A(V) at the rate of 25.5%. Assuming A(V) and V(VC) as true values, the deviations of the new method could be attributed to irregularity of the crowns' silhouettes. Different morphological characteristics of tree crowns led to measurement error in forest sample plot survey. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
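
    The projection-area part of the idea is easy to sketch with SciPy's convex hull (an illustration under assumed inputs, not the paper's VC++/Matlab implementation); note that for 2D input the hull's `volume` attribute is the enclosed area.

```python
import numpy as np
from scipy.spatial import ConvexHull

def crown_projection_area(points_xyz):
    """Project crown points onto the horizontal plane and return the area of
    their 2D convex hull (only the projection-area idea; the paper additionally
    slices the crown vertically to accumulate crown volume)."""
    xy = np.asarray(points_xyz)[:, :2]
    hull = ConvexHull(xy)
    return hull.volume        # for 2D input, .volume is the enclosed area

# Hypothetical crown: points scattered uniformly inside a sphere of radius 2 m.
rng = np.random.default_rng(42)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = 2.0 * dirs * rng.uniform(0, 1, (2000, 1)) ** (1 / 3)
print(round(crown_projection_area(pts), 2))   # approaches pi * 2**2 ≈ 12.57 m²
```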

  10. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  12. A novel Bayesian change-point algorithm for genome-wide analysis of diverse ChIPseq data types.

    PubMed

    Xing, Haipeng; Liao, Willey; Mo, Yifan; Zhang, Michael Q

    2012-12-10

    ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by using next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein(1). For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment(2). Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics(3-5) to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)(6-8). We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail parameter estimation procedures and the simple, finite state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs(9), which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed how

  13. Design and FPGA Implementation of a Universal Chaotic Signal Generator Based on the Verilog HDL Fixed-Point Algorithm and State Machine Control

    NASA Astrophysics Data System (ADS)

    Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng

    In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. On the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method lies in the fact that any given chaotic equation can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right shifting and ceiling of the iterative values, and output of the chaotic iterative sequences, each of which corresponds to a single state under state machine control. Compared with a Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm saves FPGA hardware resources and improves the operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
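
    A rough illustration of the fixed-point idea, written in Python rather than Verilog HDL: the state is a scaled integer, the nonlinear update uses integer multiplies, and a right shift drops the extra fractional bits, mirroring the right-shifting step described above. The chosen map (logistic) and word length are illustrative assumptions.

```python
# Minimal fixed-point iteration sketch: Qx.16 format (16 fractional bits).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def logistic_step_fixed(x_fix, r_fix):
    """One iteration of x <- r * x * (1 - x) using only integer arithmetic:
    each multiply doubles the fractional bits, so the product is shifted right
    by FRAC_BITS to return to the working format."""
    one_minus_x = ONE - x_fix
    prod = (x_fix * one_minus_x) >> FRAC_BITS     # x * (1 - x)
    return (r_fix * prod) >> FRAC_BITS            # r * x * (1 - x)

x = int(0.3 * ONE)        # initial condition, scaled to fixed point
r = int(3.99 * ONE)       # chaotic parameter, scaled to fixed point
seq = []
for _ in range(10):
    x = logistic_step_fixed(x, r)
    seq.append(x / ONE)   # convert back to floating point for display only
print([round(v, 4) for v in seq])
```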

  14. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    SciTech Connect

    2012-05-31

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
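
    As a flavor of the dynamic-programming side (not the INDDGO code, which works on general graphs via tree decompositions), the sketch below solves the maximum weighted independent set problem in the simpler special case where the graph is itself a tree.

```python
def max_weight_independent_set(adj, weight, root=0):
    """Maximum weighted independent set on a tree by dynamic programming: for
    every vertex keep the best value with the vertex included or excluded.
    Tree decompositions generalize this kind of recursion to graphs of bounded
    treewidth."""
    def dfs(v, parent):
        incl, excl = weight[v], 0
        for u in adj[v]:
            if u == parent:
                continue
            i_u, e_u = dfs(u, v)
            incl += e_u                  # v taken: children must be excluded
            excl += max(i_u, e_u)        # v not taken: take the better option
        return incl, excl

    return max(dfs(root, None))

# Small example: a path 0-1-2-3-4 with weights favoring the inner vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
weight = {0: 1, 1: 10, 2: 1, 3: 10, 4: 1}
print(max_weight_independent_set(adj, weight))   # 20 (vertices 1 and 3)
```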

  15. Hierarchy of stable Morse decompositions.

    PubMed

    Szymczak, Andrzej

    2013-05-01

    We introduce an algorithm for construction of the Morse hierarchy, i.e., a hierarchy of Morse decompositions of a piecewise constant vector field on a surface driven by stability of the Morse sets with respect to perturbation of the vector field. Our approach builds upon earlier work on stable Morse decompositions, which can be used to obtain Morse sets of user-prescribed stability. More stable Morse decompositions are coarser, i.e., they consist of larger Morse sets. In this work, we develop an algorithm for tracking the growth of Morse sets and topological events (mergers) that they undergo as their stability is gradually increased. The resulting Morse hierarchy can be explored interactively. We provide examples demonstrating that it can provide a useful coarse overview of the vector field topology.

  16. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-05

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference in the elapsed time between the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than on 16 CPUs, with any of the tested GPUs and CPUs. We found that a GPU intended for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used.

  17. Bi2(C2O4)3·7H2O and Bi(C2O4)OH Oxalates Thermal Decomposition Revisited. Formation of Nanoparticles with a Lower Melting Point than Bulk Bismuth.

    PubMed

    Roumanille, Pierre; Baco-Carles, Valérie; Bonningue, Corine; Gougeon, Michel; Duployer, Benjamin; Monfraix, Philippe; Le Trong, Hoa; Tailhades, Philippe

    2017-08-21

    Two bismuth oxalates, namely, Bi2(C2O4)3·7H2O and Bi(C2O4)OH, were studied in terms of synthesis, structural characterization, particle morphology, and thermal behavior under several atmospheres. The oxalate powders were produced by chemical precipitation from bismuth nitrate and oxalic acid solutions under controlled pH, then characterized by X-ray diffraction (XRD), temperature-dependent XRD, IR spectroscopy, scanning electron microscopy, and thermogravimetric differential thermal analyses. New results on the thermal decomposition of bismuth oxalates under inert or reducing atmospheres are provided. On heating in nitrogen, both studied compounds decompose into small bismuth particles. Thermal properties of the metallic products were investigated. The Bi(C2O4)OH decomposition leads to a Bi-Bi2O3 metal-oxide composite product in which bismuth is confined to a nanometric size, due to surface oxidation. The melting point of such bismuth particles is strongly related to their crystallite size. Melting of the nanometric bismuth was thus observed ∼40 °C lower than for bulk bismuth. These results should contribute to the development of the oxalate precursor route for low-temperature soldering applications.

  18. Bridging Proper Orthogonal Decomposition methods and augmented Newton-Krylov algorithms: an adaptive model order reduction for highly nonlinear mechanical problems

    PubMed Central

    Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.

    2013-01-01

    This article describes a bridge between POD-based model order reduction techniques and the classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
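
    For readers unfamiliar with POD: the reduced basis is typically obtained from an SVD of a snapshot matrix, as in the hedged sketch below (illustrative snapshot data, not the article's solver).

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Compute a POD (reduced-order) basis from a snapshot matrix whose columns
    are full-order solution vectors, keeping enough left singular vectors to
    capture the requested fraction of the snapshot 'energy'."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :k], s

# Hypothetical snapshots: 1000 DOF, 50 load steps of a smooth nonlinear response.
x = np.linspace(0, 1, 1000)[:, None]
t = np.linspace(0, 1, 50)[None, :]
snaps = np.sin(np.pi * x) * t + 0.1 * np.sin(3 * np.pi * x) * t**2
basis, _ = pod_basis(snaps)
print(basis.shape)        # (1000, k) with k << 50: a drastic reduction
```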

  19. Simultaneous determination of free amino acid content in tea infusions by using high-performance liquid chromatography with fluorescence detection coupled with alternating penalty trilinear decomposition algorithm.

    PubMed

    Tan, Fuyuan; Tan, Chao; Zhao, Aiping; Li, Menglong

    2011-10-26

    In this paper, a novel application of alternating penalty trilinear decomposition (APTLD) for high-performance liquid chromatography with fluorescence detection (HPLC-FLD) has been developed to simultaneously determine the contents of free amino acids in tea. Although the spectra of amino acid derivatives were similar and a large number of water-soluble compounds are coextracted, APTLD could predict the accurate concentrations together with reasonable resolution of chromatographic and spectral profiles for the amino acids of interest owing to its "second-order advantage". An additional advantage of the proposed method is lower cost than traditional methods. The results indicate that it is an attractive alternative strategy for the routine resolution and quantification of amino acids in the presence of unknown interferences or when complete separation is not easily achieved.

  20. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P.; Tits, Andre

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  1. New detection algorithm for dim point moving target in IR-image sequence based on an image frames transformation

    NASA Astrophysics Data System (ADS)

    Mohamed, M. A.; Li, Hongzuo

    2013-09-01

    In this paper we follow the track-before-detect (TBD) concept in order to perform simple, fast and adaptive detection and tracking of dim pixel-size moving targets in IR image sequences. We present two new algorithms based on an image frame transformation: the first is a recursive algorithm to measure the image background baseline, which helps in assigning an adaptive threshold, while the second is an adaptive recursive statistical spatio-temporal algorithm for detecting and tracking the target. The results of applying the proposed algorithms to a set of frames containing a simple single-pixel target performing a linear motion show high efficiency and validity in detecting the motion and in measuring the background baseline.
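
    A hedged sketch of the general idea, not the paper's exact recursion: estimate a per-pixel background baseline with exponential forgetting and declare detections where the residual exceeds an adaptive threshold.

```python
import numpy as np

def detect_dim_target(frames, alpha=0.95, k_sigma=5.0):
    """Recursive baseline + adaptive threshold detector (an illustrative sketch):
    each pixel's background is an exponentially forgotten running mean, and a
    detection is declared when the residual exceeds k_sigma running standard
    deviations."""
    baseline = frames[0].astype(float)
    var = np.ones_like(baseline)
    detections = []
    for frame in frames[1:]:
        resid = frame - baseline
        hits = resid > k_sigma * np.sqrt(var)
        detections.append(np.argwhere(hits))
        baseline = alpha * baseline + (1 - alpha) * frame      # update the baseline
        var = alpha * var + (1 - alpha) * resid**2             # track residual power
    return detections

# Synthetic sequence: noisy background with a single-pixel target moving linearly.
rng = np.random.default_rng(0)
frames = rng.normal(100, 1, size=(20, 64, 64))
for t in range(20):
    frames[t, 10 + t, 5 + 2 * t] += 15          # dim moving point target
hits = detect_dim_target(frames)
print([h.tolist() for h in hits[:3]])            # target positions in early frames
```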

  2. A decomposition approach to CPM

    NASA Astrophysics Data System (ADS)

    Rimoldi, Bixio E.

    1988-03-01

    It is shown that any continuous-phase-modulation (CPM) system can be decomposed into a continuous-phase encoder and a memoryless modulator in such a way that the former is a linear (modulo some integer P) time-invariant sequential circuit and the latter is also time invariant. This decomposition is exploited to obtain alternative realizations of the continuous-phase encoder (and hence of CPM) and also to obtain alternative forms of the optimum decoding algorithm. When P is a prime p so that the encoder is linear over the finite field GF(p), it is shown that cascading it with an outside convolutional encoder is equivalent to a single convolutional encoder. It is pointed out that the cascade of the modulator, the waveform channel (which it is assumed is characterized by additive white Gaussian noise), and the demodulator that operates over one symbol interval yield a discrete memoryless channel that can be studied without the distractions introduced by continuous-phase encoding.

  3. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods. PMID:26881433

  4. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    PubMed

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.
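
    For reference, the classical point-to-point rigid ICP building block (nearest neighbours plus an SVD-based alignment step) can be sketched as follows; TACICP adds the conditional correspondence labeling and the non-rigid TPS refinement on top of this. The synthetic data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(src, dst, iters=50, tol=1e-9):
    """Classical point-to-point rigid ICP: match each source point to its
    nearest destination point, estimate the best rigid transform by SVD
    (Kabsch), apply it, and repeat until the mean distance stops improving."""
    src = np.asarray(src, float).copy()
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)
        matched = dst[idx]
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # rotation (reflection-corrected)
        t = mu_d - R @ mu_s
        src = src @ R.T + t                     # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

# Synthetic check: recover a known 5-degree rotation plus a small translation.
rng = np.random.default_rng(1)
dst = rng.uniform(-1, 1, (1000, 3))
a = np.deg2rad(5)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
src = (dst - np.array([0.05, -0.02, 0.03])) @ R_true   # dst moved into the source frame
R_est, t_est = rigid_icp(src, dst)
print(np.abs(src @ R_est.T + t_est - dst).max())        # small residual: motion recovered
```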

  5. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION

    EPA Science Inventory

    The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...

  7. An algorithm that administers adaptive speech-in-noise testing to a specified reliability at selectable points on the psychometric function.

    PubMed

    Keidser, Gitte; Dillon, Harvey; Mejia, Jorge; Nguyen, Cong-Van

    2013-11-01

    To introduce and verify an algorithm designed to administer adaptive speech-in-noise testing to a specified reliability at selectable points on the psychometric function. Speech-in-noise performance was measured using BKB sentences presented in diffuse babble noise, using morphemic scoring. The target of the algorithm was a test-retest standard deviation of 1.13 dB within the presentation of 32 sentences. Normal-hearing participants completed repeated measures using manual administration targeting 50% correct, and the automated procedure targeting 25%, 50%, and 75% correct. Aided hearing-impaired participants completed testing with the automated procedure targeting 25%, 50%, and 75% correct, repeating measurements at the 50% point three times. Twelve normal-hearing and 63 hearing-impaired people who had English as their first language participated. Relative to the manual procedure, the algorithm produced the same speech reception threshold in noise (p = 0.96) and lower test-retest reliability on normal-hearing listeners. Both groups obtained significantly different results at the three target points (p < 0.04), with observed reliability close to expected. Target accuracy was not reached within 32 sentences for 18% of measurements on hearing-impaired participants. The reliability of the algorithm was verified. A second test is recommended if the target variability is not reached during the first measurement.

  8. Evaluation of a novel transfusion algorithm employing point-of-care coagulation assays in cardiac surgery: a retrospective cohort study with interrupted time-series analysis.

    PubMed

    Karkouti, Keyvan; McCluskey, Stuart A; Callum, Jeannie; Freedman, John; Selby, Rita; Timoumi, Tarik; Roy, Debashis; Rao, Vivek

    2015-03-01

    Cardiac surgery requiring the use of cardiopulmonary bypass is frequently complicated by coagulopathic bleeding that, largely due to the shortcomings of conventional coagulation tests, is difficult to manage. This study evaluated a novel transfusion algorithm that uses point-of-care coagulation testing. Consecutive patients who underwent cardiac surgery with bypass at one hospital before (January 1, 2012 to January 6, 2013) and after (January 7, 2013 to December 13, 2013) institution of an algorithm that used the results of point-of-care testing (ROTEM; Tem International GmBH, Munich, Germany; Plateletworks; Helena Laboratories, Beaumont, TX) during bypass to guide management of coagulopathy were included. Pre- and postalgorithm outcomes were compared using interrupted time-series analysis to control for secular time trends and other confounders. Pre- and postalgorithm groups included 1,311 and 1,170 patients, respectively. Transfusion rates for all blood products (except for cryoprecipitate, which did not change) were decreased after algorithm institution. After controlling for secular pre- and postalgorithm time trends and potential confounders, the posttransfusion odds ratios (95% CIs) for erythrocytes, platelets, and plasma were 0.50 (0.32 to 0.77), 0.22 (0.13 to 0.37), and 0.20 (0.12 to 0.34), respectively. There were no indications that the algorithm worsened any of the measured processes of care or outcomes. Institution of a transfusion algorithm based on point-of-care testing was associated with reduced transfusions. This suggests that the algorithm could improve the management of the many patients who develop coagulopathic bleeding after cardiac surgery. The generalizability of the findings needs to be confirmed.

  9. Revisiting the layout decomposition problem for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu; Yao, Hailong

    2008-10-01

    In double patterning lithography (DPL) layout decomposition for 45nm and below process nodes, two features must be assigned opposite colors (corresponding to different exposures) if their spacing is less than the minimum coloring spacing.5, 11, 14 However, there exist pattern configurations for which pattern features separated by less than the minimum coloring spacing cannot be assigned different colors. In such cases, DPL requires that a layout feature be split into two parts. We address this problem using a layout decomposition algorithm that incorporates integer linear programming (ILP), phase conflict detection (PCD), and node-deletion bipartization (NDB) methods. We evaluate our approach on both real-world and artificially generated testcases in 45nm technology. Experimental results show that our proposed layout decomposition method effectively decomposes given layouts to satisfy the key goals of minimized line-ends and maximized overlap margin. There are no design rule violations in the final decomposed layout. While we have previously reported other facets of our research on DPL pattern decomposition,6 the present paper differs from that work in the following key respects: (1) instead of detecting conflict cycles and splitting nodes in conflict cycles to achieve graph bipartization,6 we split all nodes of the conflict graph at all feasible dividing points and then formulate a problem of bipartization by ILP, PCD8 and NDB9 methods; and (2) instead of reporting unresolvable conflict cycles, we report the number of deleted conflict edges to more accurately capture the needed design changes in the experimental results.
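
    Color assignment in DPL is essentially two-coloring of the conflict graph, and odd cycles are what force feature splitting or stitching. The sketch below is a plain BFS bipartiteness check, a simplified stand-in for the ILP/PCD/NDB machinery of the paper.

```python
from collections import deque

def two_color(conflict_graph):
    """Try to 2-color a layout conflict graph (features = nodes, edges between
    features closer than the coloring spacing). Returns (colors, conflicts):
    any returned edge joins two same-colored nodes and marks an odd cycle that
    cannot be resolved by coloring alone."""
    colors, conflicts = {}, []
    for start in conflict_graph:
        if start in colors:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for u in conflict_graph[v]:
                if u not in colors:
                    colors[u] = 1 - colors[v]
                    queue.append(u)
                elif colors[u] == colors[v]:
                    conflicts.append((v, u))
    return colors, conflicts

# Three features in a triangle (odd cycle) plus one isolated pair.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
colors, conflicts = two_color(g)
print(colors)      # e.g. {0: 0, 1: 1, 2: 1, 3: 0, 4: 1}
print(conflicts)   # the triangle produces at least one same-color edge
```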

  10. Fixed-point single-precision estimation. [Kalman filtering for NASA Standard Spacecraft Computer orbit determination algorithm]

    NASA Technical Reports Server (NTRS)

    Thompson, E. H.; Farrell, J. L.

    1976-01-01

    Monte Carlo simulation of autonomous orbit determination has validated the use of an 18-bit NASA Standard Spacecraft Computer (NSSC) for the extended Kalman filter. Dimensionally consistent scales are chosen for all variables in the algorithm, such that nearly all of the onboard computation can be performed in single precision without matrix square root formulations. Allowable simplifications in algorithm implementation and practical means of ensuring convergence are verified for accuracies of a few km provided by star/vertical observations.

  11. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources to capture actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can heavily enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances of corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. Therefore, an agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors two times (once covered with sparse vegetation and once bare soil). Due to the different perspectives, both data sets show differing completeness in terms of shadowed areas and thus gaps, so that data merging would provide a more consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset towards the TLS point cloud. In order to achieve the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration and therefore

  12. Convergence Analysis of a Domain Decomposition Paradigm

    SciTech Connect

    Bank, R E; Vassilevski, P S

    2006-06-12

    We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.

  13. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because of the effect of environment stiffness on the system formed by the robot end-effector in contact with the environment, an adaptive generalized predictive control (GPC) algorithm based on quantitative feedback theory (QFT) is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to control the uncertainty of the system. An adaptive GPC algorithm is used to design the external loop controller to improve the performance and the robustness of the system. The two closed loops used in the design approach realize the system's performance and improve its robustness. The simulation results show that the algorithm for the robot end-effector contact force control system is effective.

  14. The Complexity of Standing Postural Control in Older Adults: A Modified Detrended Fluctuation Analysis Based upon the Empirical Mode Decomposition Algorithm

    PubMed Central

    Liu, Dongdong; Hu, Kun; Zhang, Jue; Fang, Jing

    2013-01-01

    Human aging into senescence diminishes the capacity of the postural control system to adapt to the stressors of everyday life. Diminished adaptive capacity may be reflected by a loss of the fractal-like, multiscale complexity within the dynamics of standing postural sway (i.e., center-of-pressure, COP). We therefore studied the relationship between COP complexity and adaptive capacity in 22 older and 22 younger healthy adults. COP magnitude dynamics were assessed from raw data during quiet standing with eyes open and closed, and complexity was quantified with a new technique termed empirical mode decomposition embedded detrended fluctuation analysis (EMD-DFA). Adaptive capacity of the postural control system was assessed with the sharpened Romberg test. As compared to traditional DFA, EMD-DFA more accurately identified trends in COP data with intrinsic scales and produced short and long-term scaling exponents (i.e., αShort, αLong) with greater reliability. The fractal-like properties of COP fluctuations were time-scale dependent and highly complex (i.e., αShort values were close to one) over relatively short time scales. As compared to younger adults, older adults demonstrated lower short-term COP complexity (i.e., greater αShort values) in both visual conditions (p>0.001). Closing the eyes decreased short-term COP complexity, yet this decrease was greater in older compared to younger adults (p<0.001). In older adults, those with higher short-term COP complexity exhibited better adaptive capacity as quantified by Romberg test performance (r2 = 0.38, p<0.001). These results indicate that an age-related loss of COP complexity of magnitude series may reflect a clinically important reduction in postural control system functionality as a new biomarker. PMID:23650518
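
    A plain DFA sketch (without the EMD preprocessing that distinguishes EMD-DFA) illustrates how the scaling exponent is obtained; the test signal is illustrative.

```python
import numpy as np

def dfa(x, scales):
    """Plain detrended fluctuation analysis: integrate the series, split it into
    windows of each scale, remove a linear trend per window, and fit log F(n)
    versus log n to obtain the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))
    F = []
    for n in scales:
        n_win = len(y) // n
        segs = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)               # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return alpha

# White noise should give alpha close to 0.5; long-range correlated series give more.
x = np.random.default_rng(0).normal(size=20000)
print(round(dfa(x, scales=[16, 32, 64, 128, 256]), 2))   # ~0.5
```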

  15. A new damping factor algorithm based on line search of the local minimum point for inverse approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping

    2013-05-01

    The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and computational efficiency, is proposed; the corresponding computer program is then implemented and tested on Siemens PLM NX | One-Step. The results are compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.

  16. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

    A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first-order kinetics. Recommended values for the Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases and for decomposition of RDX in solution in TNT are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.

  17. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
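    As an illustration of the matrix-inversion-based Newton iteration described in this record, the following is a minimal NumPy sketch for a square nonsingular matrix; the acceleration scaling, the complete orthogonal decomposition preprocessing and the adaptive switch to the matrix-multiplication-rich iteration are omitted, and all names are illustrative.

        import numpy as np

        def polar_newton(A, tol=1e-12, max_iter=100):
            """Polar decomposition A = U H of a square nonsingular matrix via the
            (unscaled) Newton iteration X_{k+1} = (X_k + X_k^{-H}) / 2."""
            X = A.astype(complex)
            for _ in range(max_iter):
                X_next = 0.5 * (X + np.linalg.inv(X).conj().T)
                if np.linalg.norm(X_next - X, 'fro') <= tol * np.linalg.norm(X_next, 'fro'):
                    X = X_next
                    break
                X = X_next
            U = X                          # unitary (orthogonal) polar factor
            H = U.conj().T @ A             # Hermitian positive semi-definite factor
            H = 0.5 * (H + H.conj().T)     # symmetrize to remove rounding error
            return U, H

        A = np.random.rand(4, 4)
        U, H = polar_newton(A)
        print(np.allclose(U @ H, A), np.allclose(U.conj().T @ U, np.eye(4)))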

  18. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  19. Tiling Models for Spatial Decomposition in AMTRAN

    SciTech Connect

    Compton, J C; Clouse, C J

    2005-05-27

    Effective spatial domain decomposition for discrete ordinate (S{sub n}) neutron transport calculations has been critical for exploiting massively parallel architectures typified by the ASCI White computer at Lawrence Livermore National Laboratory. A combination of geometrical and computational constraints has posed a unique challenge as problems have been scaled up to several thousand processors. Carefully scripted decomposition and corresponding execution algorithms have been developed to handle a range of geometrical and hardware configurations.

  20. Domain decomposition for the SPN solver MINOS

    SciTech Connect

    Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques

    2012-07-01

    In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nedelec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3 (R) code. (authors)

  1. Domain Decomposition for the SPN Solver MINOS

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques

    2012-12-01

    In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nédélec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3® code.

  2. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, namely the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
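    To illustrate only the truncated-SVD compression idea mentioned above, the sketch below builds a hypothetical 1D exponential (Laplace-type) kernel and truncates it at an assumed relative threshold; the RRQR and LDL alternatives, the adaptive truncation and the real 2D NMR setup are not shown.

        import numpy as np

        # Hypothetical 1D exponential kernel K[i, j] = exp(-t_i / T_j) relating a
        # relaxation-time distribution (over T) to measured decay data (over t).
        t = np.linspace(1e-3, 1.0, 2000)           # acquisition times
        T = np.logspace(-3, 0, 200)                # candidate relaxation times
        K = np.exp(-t[:, None] / T[None, :])

        # Truncated SVD: keep only singular values above an assumed relative threshold.
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        r = int(np.sum(s > 1e-6 * s[0]))           # effective rank (truncation level)
        U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]

        # Measured data m would be compressed as U_r.T @ m; the fit then uses K_r.
        K_r = np.diag(s_r) @ Vt_r                  # r x len(T) compressed kernel
        print(K.shape, "->", K_r.shape, "rank", r)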

  3. FAST TRACK PAPER: Receiver function decomposition of OBC data: theory

    NASA Astrophysics Data System (ADS)

    Edme, Pascal; Singh, Satish C.

    2009-06-01

    This paper deals with theoretical aspects of wavefield decomposition of Ocean Bottom Cable (OBC) data in the τ-p domain, considering a horizontally layered medium. We present both the acoustic decomposition and elastic decomposition procedures in a simple and compatible way. Acoustic decomposition aims at estimating the primary upgoing P wavefield just above the ocean-bottom, whereas elastic decomposition aims at estimating the primary upgoing P and S wavefields just below the ocean-bottom. Specific issues due to the interference phenomena at the receiver level are considered. Our motivation is to introduce the two-step decomposition scheme called `receiver function' (RF) decomposition that aims at determining the primary upgoing P and S wavefields (RFP and RFS, free of any water layer multiples). We show that elastic decomposition is a necessary step (acting as pre-conditioning) before applying the multiple removal step by predictive deconvolution. We show the applicability of our algorithm on a synthetic data example.

  4. An algorithm for approximating the L * invariant coordinate from the real-time tracing of one magnetic field line between mirror points

    NASA Astrophysics Data System (ADS)

    Lejosne, Solène

    2014-08-01

    The L * invariant coordinate depends on the global electromagnetic field topology at a given instance, and the standard method for its determination requires a computationally expensive drift contour tracing. This fact makes L * a cumbersome parameter to handle. In this paper, we provide new insights on the L * parameter, and we introduce an algorithm for an L * approximation that only requires the real-time tracing of one magnetic field line between mirror points. This approximation is based on the description of the variation of the magnetic field mirror intensity after an adiabatic dipolarization, i.e., after the nondipolar components of a magnetic field have been turned off with a characteristic time very long in comparison with the particles' drift periods. The corresponding magnetic field topological variations are deduced, assuming that the field line foot points remain rooted in the Earth's surface, and the drift average operator is replaced with a computationally cheaper circular average operator. The algorithm results in a relative difference of a maximum of 12% between the approximate L * and the output obtained using the International Radiation Belt Environment Modeling library, in the case of the Tsyganenko 89 model for the external magnetic field (T89). This margin of error is similar to the margin of error due to small deviations between different magnetic field models at geostationary orbit. This approximate L * algorithm represents therefore a reasonable compromise between computational speed and accuracy of particular interest for real-time space weather forecast purposes.

  5. [Prenatal risk calculation: comparison between Fast Screen pre I plus software and ViewPoint software. Evaluation of the risk calculation algorithms].

    PubMed

    Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk

    2013-01-01

    The Fetal Medicine Foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) to evaluate Down syndrome screening based on free hCGβ, PAPP-A and nuchal translucency. The peculiarity of this algorithm is to use the degree of extremeness (DoE) instead of the multiple of the median (MoM). Biologists measuring maternal serum markers on Kryptor™ machines (Thermo Fisher Scientific) use the Fast Screen pre I plus software for the prenatal risk calculation. This software integrates the PRC algorithm. Our study evaluates the data of 2,092 patient files, of which 19 show a fœtal abnormality. These files were first evaluated with the ViewPoint software based on MoM. The link between DoE and MoM has been analyzed and the different calculated risks compared. The study shows that the Fast Screen pre I plus software gives the same risk results as the ViewPoint software, but yields significantly fewer false positive results.

  6. Adaptive neuro-fuzzy inference system multi-objective optimization using the genetic algorithm/singular value decomposition method for modelling the discharge coefficient in rectangular sharp-crested side weirs

    NASA Astrophysics Data System (ADS)

    Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed

    2016-06-01

    In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS results section (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined in five different models to conduct sensitivity analysis by applying the above-mentioned dimensionless parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs.

  7. Quantitative analysis of triazine herbicides in environmental samples by using high performance liquid chromatography and diode array detection combined with second-order calibration based on an alternating penalty trilinear decomposition algorithm.

    PubMed

    Li, Yuan-Na; Wu, Hai-Long; Qing, Xiang-Dong; Li, Quan; Li, Shu-Fang; Fu, Hai-Yan; Yu, Yong-Jie; Yu, Ru-Qin

    2010-09-23

    A novel application of a second-order calibration method based on an alternating penalty trilinear decomposition (APTLD) algorithm is presented to treat data from high performance liquid chromatography-diode array detection (HPLC-DAD). The method makes it possible to accurately and reliably analyze atrazine (ATR), ametryn (AME) and prometryne (PRO) contents in soil, river sediment and wastewater samples. Satisfactory results are obtained although the elution and spectral profiles of the analytes are heavily overlapped with the background in environmental samples. The obtained average recoveries for ATR, AME and PRO are 99.7±1.5, 98.4±4.7 and 97.0±4.4% in soil samples, 100.1±3.2, 100.7±3.4 and 96.4±3.8% in river sediment samples, and 100.1±3.5, 101.8±4.2 and 101.4±3.6% in wastewater samples, respectively. Furthermore, the accuracy and precision of the proposed method are evaluated with the elliptical joint confidence region (EJCR) test. It opens a new avenue for the quantitative determination of herbicides in environmental samples with a simple pretreatment procedure and provides a scientific basis for improved environmental management through a better understanding of the wastewater-soil-river sediment system as a whole.

  8. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  9. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring of these objects is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of an icosahedron approximating a sphere. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a stretch of road of 4 km. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ~170 automatically recognized objects is approximately 95%. The results demonstrate
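    One ingredient of the descriptor, the PCA step that yields significant eigenvectors for the points in a voxel, can be sketched as follows; this is only an illustration under assumptions (the octree subdivision, icosahedron mapping and descriptor matching are not shown, and the significance ratio is an assumed placeholder).

        import numpy as np

        def significant_eigenvectors(points, ratio=0.1):
            """Eigenvectors of the covariance of an (N, 3) point set whose eigenvalues
            exceed `ratio` times the largest eigenvalue (assumed threshold)."""
            centered = points - points.mean(axis=0)
            cov = centered.T @ centered / max(len(points) - 1, 1)
            eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
            order = np.argsort(eigval)[::-1]
            eigval, eigvec = eigval[order], eigvec[:, order]
            keep = eigval > ratio * eigval[0]
            return eigvec[:, keep], eigval[keep]

        pts = np.random.randn(500, 3) * np.array([5.0, 1.0, 0.1])   # elongated cluster
        vecs, vals = significant_eigenvectors(pts)
        print(vecs.shape, vals)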

  10. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    PubMed

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with less prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this is invalid when the information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters including source strength and location were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimation results for the different source parameters are close to each other for different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulation and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method. The confidence intervals from linear Tikhonov-PSO are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval with some probability levels can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
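    For reference, the linear Tikhonov step alone (zero-order regularization matrix) can be sketched as below on a synthetic ill-conditioned system; the dispersion model, the L-curve selection of the regularization parameter and the PSO search over source location are not reproduced, and all names and numbers are illustrative.

        import numpy as np

        def tikhonov_solve(A, b, lam, L=None):
            """Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||L x||^2."""
            n = A.shape[1]
            if L is None:
                L = np.eye(n)                      # zero-order regularization matrix
            lhs = A.T @ A + lam**2 * (L.T @ L)
            rhs = A.T @ b
            return np.linalg.solve(lhs, rhs)

        # Small synthetic example: ill-conditioned forward model with noisy data.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(50, 20)) @ np.diag(np.logspace(0, -6, 20))
        x_true = rng.normal(size=20)
        b = A @ x_true + 1e-3 * rng.normal(size=50)
        x_reg = tikhonov_solve(A, b, lam=1e-2)
        print(np.linalg.norm(x_reg - x_true))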

  11. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov-Galerkin method.

    PubMed

    Doha, E H; Abd-Elhameed, W M; Youssri, Y H

    2015-09-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov-Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and the test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplify the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient.

  12. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    NASA Astrophysics Data System (ADS)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP) that depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) have been developed and implemented in solar power electronic controllers to increase the efficiency of electricity production from renewables. In this paper we compare, using the Matlab/Simulink tools, two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.
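    The FL and SMC controllers compared in this record are not reproduced here; purely as a point of reference for what an MPPT rule does, the sketch below implements the classical perturb-and-observe method on a toy P-V curve with assumed panel parameters.

        import numpy as np

        def panel_power(v, v_oc=40.0, i_sc=8.0):
            """Toy P-V curve of a PV panel (assumed parameters)."""
            i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
            return max(v * i, 0.0)

        def perturb_and_observe(steps=200, v0=20.0, dv=0.2):
            """Classical P&O: keep perturbing in the direction that increased power."""
            v, p_prev, direction = v0, panel_power(v0), +1
            for _ in range(steps):
                v += direction * dv
                p = panel_power(v)
                if p < p_prev:            # power dropped, so reverse the perturbation
                    direction = -direction
                p_prev = p
            return v, p_prev

        v_mpp, p_mpp = perturb_and_observe()
        print(f"operating point ~{v_mpp:.1f} V, {p_mpp:.1f} W")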

  13. Algorithms for Collision Detection Between a Point and a Moving Polygon, with Applications to Aircraft Weather Avoidance

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Hagen, George

    2016-01-01

    This paper proposes mathematical definitions of functions that can be used to detect future collisions between a point and a moving polygon. The intended application is weather avoidance, where the given point represents an aircraft and bounding polygons are chosen to model regions with bad weather. Other applications could possibly include avoiding other moving obstacles. The motivation for the functions presented here is safety, and therefore they have been proved to be mathematically correct. The functions are being developed for inclusion in NASA's Stratway software tool, which allows low-fidelity air traffic management concepts to be easily prototyped and quickly tested.
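    The formally verified detection functions of this paper are not reproduced here; the sketch below is only a simple sampled check under assumed constant velocities: the motion is expressed relative to the polygon and a standard ray-casting point-in-polygon test is applied over the look-ahead horizon. All names and numbers are illustrative.

        from typing import List, Tuple

        Point = Tuple[float, float]

        def point_in_polygon(p: Point, poly: List[Point]) -> bool:
            """Standard even-odd (ray casting) point-in-polygon test."""
            x, y = p
            inside = False
            n = len(poly)
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        def future_conflict(p, vp, poly, v_poly, horizon=300.0, dt=1.0) -> bool:
            """Does the point enter the moving polygon within `horizon` seconds?
            Constant velocities assumed; motion expressed relative to the polygon."""
            rvx, rvy = vp[0] - v_poly[0], vp[1] - v_poly[1]
            t = 0.0
            while t <= horizon:
                q = (p[0] + rvx * t, p[1] + rvy * t)
                if point_in_polygon(q, poly):
                    return True
                t += dt
            return False

        cell = [(10.0, 0.0), (20.0, 0.0), (20.0, 10.0), (10.0, 10.0)]   # weather cell
        print(future_conflict((0.0, 5.0), (0.1, 0.0), cell, (0.0, 0.0)))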

  14. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    SciTech Connect

    Inoue, Minoru; Yoshimura, Michio; Sato, Sayaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Hirata, Kimiko; Ogura, Masakazu; Hiraoka, Masahiro; Sasaki, Makoto; Fujimoto, Takahiro

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, the manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.
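    A point-based rigid-body registration between corresponding fiducial positions can be computed with the standard SVD-based (Kabsch/Procrustes) least-squares fit, sketched below; whether this matches the clinical PRBR implementation used in the study is not claimed, and the marker coordinates are illustrative.

        import numpy as np

        def rigid_registration(P, Q):
            """Least-squares rigid transform (R, t) mapping points P onto corresponding
            points Q (both (N, 3)), via the SVD-based Kabsch/Procrustes solution."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        # Three fiducial markers: planned positions vs positions seen in a daily image.
        planned = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
        theta = 0.05
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
        daily = planned @ Rz.T + np.array([1.0, -2.0, 0.5])
        R, t = rigid_registration(planned, daily)
        print(np.allclose(planned @ R.T + t, daily))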

  15. Frequency-domain endoscopic diffuse optical tomography reconstruction algorithm based on dual-modulation-frequency and dual-points source diffuse equation

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Hou, Qiang; Zhao, Huijuan; Yang, Yanshuang; Zhou, Xiaoqing; Gao, Feng

    2013-03-01

    In this paper, a frequency-domain endoscopic diffuse optical tomography image reconstruction algorithm based on the dual-modulation-frequency and dual-point source diffusion equation is investigated for the reconstruction of the optical parameters, including the absorption and reduced scattering coefficients. The forward problem is solved by the finite element method based on the frequency-domain diffusion equation (FD-DE) with a dual-point source approximation and multiple modulation frequencies. In the image reconstruction, a multi-modulation-frequency Newton-Raphson algorithm is applied to obtain the solution. To further improve the image accuracy and quality, a method based on the region of interest (ROI) is applied to the above procedures. Simulations are performed on a tubular model to verify the validity of the algorithm. Results show that the FD-DE with the dual-point source approximation is more accurate at shorter source-detector separations. Reconstruction with dual modulation frequencies improves the image accuracy and quality compared to the results of the single-modulation-frequency and triple-modulation-frequency methods. The peak optical coefficients in the ROI (ROI_max) are almost equivalent to the true optical coefficients, with a relative error of less than 6.67%. The full width at half maximum (FWHM) achieves 82% of the true radius. The contrast-to-noise ratio (CNR) and image coefficient (IC) are 5.678 and 26.962, respectively. Additionally, the results with the ROI-based method show that the ROI_max is equivalent to the true value, the FWHM improves to 88% of the true radius, and the CNR and IC improve to over 7.782 and 45.335, respectively.

  16. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytical Cracking Process and Large Scale Geophysical Fluids.

    NASA Astrophysics Data System (ADS)

    Martin, R.; Gonzalez Ortiz, A.

    momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. In order to reduce the computational cost still further, a decomposition of the global domain into N subdomains is introduced, and all the previous algorithms applied to one block are performed in each block. At the interface between subdomains, an overlapping procedure is used. Another advantage is that different sets of equations can be solved in each block, such as fluid/structure interactions for instance. We show here the hydrodynamics of a two-phase flow in a vertical conduit as in industrial plants of fluid catalytical cracking processes with a complex geometry. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls and gravity effects begin to dominate, inducing an increase of particulate volume fractions near the walls. We show here the hydrodynamics for 13 s.

  17. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  18. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
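    The Poisson-process reformulation used in this work is not reproduced here; as a generic illustration of variance-based decomposition, the sketch below estimates first-order Sobol indices with a standard pick-and-freeze Monte Carlo estimator on a toy model with a known analytic answer.

        import numpy as np

        def first_order_sobol(f, d, n=100_000, rng=None):
            """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices S_i
            for f acting on d independent U(0,1) inputs."""
            rng = rng or np.random.default_rng(0)
            A, B = rng.random((n, d)), rng.random((n, d))
            fA, fB = f(A), f(B)
            var = np.var(np.concatenate([fA, fB]))
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                 # freeze input i from the second sample
                S[i] = np.mean(fB * (f(ABi) - fA)) / var
            return S

        # Toy model depending only on X1 and X2; expected S is approximately [0.2, 0.8, 0.0].
        f = lambda X: X[:, 0] + 2.0 * X[:, 1]
        print(first_order_sobol(f, d=3))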

  19. Chemometrics-enhanced high performance liquid chromatography-diode array detection strategy for simultaneous determination of eight co-eluted compounds in ten kinds of Chinese teas using second-order calibration method based on alternating trilinear decomposition algorithm.

    PubMed

    Yin, Xiao-Li; Wu, Hai-Long; Gu, Hui-Wen; Zhang, Xiao-Hua; Sun, Yan-Mei; Hu, Yong; Liu, Lu; Rong, Qi-Ming; Yu, Ru-Qin

    2014-10-17

    In this work, an attractive chemometrics-enhanced high performance liquid chromatography-diode array detection (HPLC-DAD) strategy was proposed for simultaneous and fast determination of eight co-eluted compounds including gallic acid, caffeine and six catechins in ten kinds of Chinese teas by using second-order calibration method based on alternating trilinear decomposition (ATLD) algorithm. This new strategy proved to be a useful tool for handling the co-eluted peaks, uncalibrated interferences and baseline drifts existing in the process of chromatographic separation, which benefited from the "second-order advantages", making the determination of gallic acid, caffeine and six catechins in tea infusions within 8 min under a simple mobile phase condition. The average recoveries of the analytes on two selected tea samples ranged from 91.7 to 103.1% with standard deviations (SD) ranged from 1.9 to 11.9%. Figures of merit including sensitivity (SEN), selectivity (SEL), root-mean-square error of prediction (RMSEP) and limit of detection (LOD) have been calculated to validate the accuracy of the proposed method. To further confirm the reliability of the method, a multiple reaction monitoring (MRM) method based on LC-MS/MS was employed for comparison and the obtained results of both methods were consistent with each other. Furthermore, as a universal strategy, this new proposed analytical method was applied for the determination of gallic acid, caffeine and catechins in several other kinds of Chinese teas, including different levels and varieties. Finally, based on the quantitative results, principal component analysis (PCA) was used to conduct a cluster analysis for these Chinese teas. The green tea, Oolong tea and Pu-erh raw tea samples were classified successfully. All results demonstrated that the proposed method is accurate, sensitive, fast, universal and ideal for the rapid, routine analysis and discrimination of gallic acid, caffeine and catechins in Chinese tea

  20. Multi-targeted interference-free determination of ten β-blockers in human urine and plasma samples by alternating trilinear decomposition algorithm-assisted liquid chromatography-mass spectrometry in full scan mode: comparison with multiple reaction monitoring.

    PubMed

    Gu, Hui-Wen; Wu, Hai-Long; Yin, Xiao-Li; Li, Yong; Liu, Ya-Juan; Xia, Hui; Zhang, Shu-Rong; Jin, Yi-Feng; Sun, Xiao-Dong; Yu, Ru-Qin; Yang, Peng-Yuan; Lu, Hao-Jie

    2014-10-27

    β-blockers are the first-line therapeutic agents for treating cardiovascular diseases and also a class of prohibited substances in athletic competitions. In this work, a smart strategy that combines three-way liquid chromatography-mass spectrometry (LC-MS) data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm was developed for simultaneous determination of ten β-blockers in human urine and plasma samples. This flexible strategy proved to be a useful tool to solve the problems of overlapped peaks and uncalibrated interferences encountered in quantitative LC-MS, and made multi-targeted, interference-free qualitative and quantitative analysis of β-blockers in complex matrices possible. The limits of detection were in the range of 2.0×10^(-5) to 6.2×10^(-3) μg mL^(-1), and the average recoveries were between 90 and 110%, with standard deviations and average relative prediction errors less than 10%, indicating that the strategy could provide satisfactory prediction results for ten β-blockers in human urine and plasma samples using only a liquid chromatograph hyphenated to a single-quadrupole mass spectrometer in full scan mode. To further confirm the feasibility and reliability of the proposed method, the same batch of samples was analyzed by the multiple reaction monitoring (MRM) method. A t-test demonstrated that there are no significant differences between the prediction results of the two methods. Considering the advantages of speed, low cost, high sensitivity, and no need for complicated optimization of chromatographic and tandem mass spectrometric conditions, the proposed strategy is expected to be extended as an attractive alternative method to quantify analytes of interest in complex systems such as cells, biological fluids, food, the environment, pharmaceuticals and other complex samples.

  1. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three dimensional coordinates of three fiducial markers inside the prostate were calculated. The three dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error was improved for real-time calculation of tumor displacement from a mean of 0.97 mm with the stand alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real
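    A generic point-to-point ICP loop (nearest neighbours plus an SVD rigid fit per iteration) is sketched below for illustration; with only three fiducial markers and known correspondences, as in this study, a single rigid fit already suffices, so this is not the clinical implementation and all data are synthetic.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid(P, Q):
            """SVD-based least-squares rigid transform mapping P onto Q (paired points)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def icp(source, target, iters=30):
            """Point-to-point ICP: match each source point to its nearest target point,
            then update the cumulative rigid transform."""
            tree = cKDTree(target)
            R_tot, t_tot = np.eye(3), np.zeros(3)
            moved = source.copy()
            for _ in range(iters):
                _, idx = tree.query(moved)
                R, t = best_rigid(moved, target[idx])
                moved = moved @ R.T + t
                R_tot, t_tot = R @ R_tot, R @ t_tot + t
            return R_tot, t_tot

        rng = np.random.default_rng(1)
        target = rng.normal(size=(200, 3))
        source = target - np.array([0.15, -0.1, 0.05])   # offset copy of the target cloud
        R_est, t_est = icp(source, target)
        print(np.round(t_est, 2))                        # typically close to [0.15, -0.1, 0.05]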

  2. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating a pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated from an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better on denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  3. A new eddy-covariance method using empirical mode decomposition

    USDA-ARS?s Scientific Manuscript database

    We introduce a new eddy-covariance method that uses a spectral decomposition algorithm called empirical mode decomposition. The technique is able to calculate contributions to near-surface fluxes from different periodic components. Unlike traditional Fourier methods, this method allows for non-ortho...

  4. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  5. Retrieval of Knowledge through Algorithmic Decomposition

    DTIC Science & Technology

    1990-06-01

    Effective querying of the system in the latter case requires a careful structuring of the user's information requirements, the absence of which can lead... intuitively divine an estimate that seems reasonable in light of whatever knowledge comes to mind. This wholistic approach to estimation relies

  6. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.

  7. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159

  8. Autonomous Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-04-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
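    AGD itself (derivative spectroscopy plus machine learning for the initial guesses) is not reproduced here; the sketch below shows only the underlying step of fitting a sum of Gaussians to a spectrum once initial guesses are available, with simple peak finding standing in for the automated guesses. All parameter values are illustrative.

        import numpy as np
        from scipy.signal import find_peaks
        from scipy.optimize import curve_fit

        def multi_gauss(x, *params):
            """Sum of Gaussians; params = (amp1, cen1, wid1, amp2, cen2, wid2, ...)."""
            y = np.zeros_like(x)
            for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
                y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
            return y

        # Synthetic two-component spectrum with noise.
        x = np.linspace(-50, 50, 1000)
        truth = multi_gauss(x, 1.0, -10.0, 4.0, 0.6, 12.0, 8.0)
        y = truth + 0.02 * np.random.default_rng(0).normal(size=x.size)

        # Initial guesses from peak positions (standing in for AGD's automated guesses).
        peaks, _ = find_peaks(y, prominence=0.2)
        p0 = []
        for idx in peaks:
            p0 += [y[idx], x[idx], 3.0]          # assumed initial width of 3 channels

        popt, _ = curve_fit(multi_gauss, x, y, p0=p0)
        print(np.round(popt, 2))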

  9. AUTONOMOUS GAUSSIAN DECOMPOSITION

    SciTech Connect

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Dickey, John

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  10. Interpolatory fixed-point algorithm for an efficient computation of TE and TM modes in arbitrary 1D structures at oblique incidence

    NASA Astrophysics Data System (ADS)

    Pérez Molina, Manuel; Francés Monllor, Jorge; Álvarez López, Mariela; Neipp López, Cristian; Carretero López, Luis

    2010-05-01

    We develop the Interpolatory Fixed-Point Algorithm (IFPA) to compute efficiently the TE and TM reflectance and transmittance coefficients for arbitrary 1D structures at oblique incidence. For this purpose, we demonstrate that the semi-analytical solutions of the Helmholtz equation provided by the fixed-point method have a polynomial dependence on variables that are related to the essential electromagnetic parameters (incidence angle and wavelength), which allows a drastic simplification of the required calculations by taking advantage of interpolation for a few parameter values. The first step in developing the IFPA consists of stating the Helmholtz equation and boundary conditions for TE and TM plane waves incident on a 1D finite slab with an arbitrary permittivity profile surrounded by two homogeneous media. The Helmholtz equation and boundary conditions are then transformed into a second-order initial value problem which is written in terms of transfer matrices. By applying the fixed-point method, the coefficients of such transfer matrices are obtained as polynomials in several variables that can be characterized by a reduced set of interpolating parameters. We apply the IFPA to specific examples of 1D diffraction gratings, optical rugate filters and quasi-periodic structures, for which precise solutions for the TE and TM modes are efficiently obtained by computing fewer than 20 interpolating parameters.
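    The IFPA itself is not reproduced here; for reference, the sketch below computes TE/TM reflectance of a 1D multilayer at oblique incidence with the standard characteristic (transfer) matrix method of thin-film optics, which yields the kind of quantities the interpolatory algorithm approximates. The layer indices and thicknesses are assumed example values.

        import numpy as np

        def reflectance(n_layers, d_layers, n_in, n_out, wavelength, theta0, pol="TE"):
            """Reflectance of a stack of homogeneous layers (indices n_layers, thicknesses
            d_layers) between semi-infinite media n_in and n_out, at incidence angle theta0."""
            kx = n_in * np.sin(theta0)                    # conserved tangential component
            def cos_t(n):
                return np.sqrt(1.0 - (kx / n) ** 2 + 0j)
            def p(n):
                return n * cos_t(n) if pol == "TE" else cos_t(n) / n
            M = np.eye(2, dtype=complex)
            for n, d in zip(n_layers, d_layers):
                beta = 2.0 * np.pi / wavelength * n * d * cos_t(n)
                pj = p(n)
                Mj = np.array([[np.cos(beta), -1j * np.sin(beta) / pj],
                               [-1j * pj * np.sin(beta), np.cos(beta)]])
                M = M @ Mj
            pa, ps = p(n_in), p(n_out)
            num = (M[0, 0] + M[0, 1] * ps) * pa - (M[1, 0] + M[1, 1] * ps)
            den = (M[0, 0] + M[0, 1] * ps) * pa + (M[1, 0] + M[1, 1] * ps)
            return abs(num / den) ** 2

        # Quarter-wave high/low stack (assumed indices) evaluated at 45 degrees, 550 nm.
        n_stack = [2.3, 1.45] * 5
        d_stack = [550e-9 / (4 * n) for n in n_stack]
        print(reflectance(n_stack, d_stack, 1.0, 1.52, 550e-9, np.pi / 4, "TE"))
        print(reflectance(n_stack, d_stack, 1.0, 1.52, 550e-9, np.pi / 4, "TM"))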

  11. Application of the nonlinear time series prediction method of genetic algorithm for forecasting surface wind of point station in the South China Sea with scatterometer observations

    NASA Astrophysics Data System (ADS)

    Zhong, Jian; Dong, Gang; Sun, Yimei; Zhang, Zhaoyang; Wu, Yuqin

    2016-11-01

    The present work reports the development of a nonlinear time series prediction method based on a genetic algorithm (GA) with singular spectrum analysis (SSA) for forecasting the surface wind at a point station in the South China Sea (SCS) with scatterometer observations. Before the nonlinear GA technique is used for forecasting the time series of surface wind, SSA is applied to reduce the noise. The surface wind speed and surface wind components from scatterometer observations at three locations in the SCS have been used to develop and test the technique. The predictions have been compared with persistence forecasts in terms of root mean square error. The surface wind predicted with GA and SSA up to four days in advance (longer for some point stations) has been found to be significantly superior to the persistence model. This method can serve as a cost-effective alternative prediction technique for forecasting the surface wind at a point station in the SCS basin. Project supported by the National Natural Science Foundation of China (Grant Nos. 41230421 and 41605075) and the National Basic Research Program of China (Grant No. 2013CB430101).

  12. Development and Positioning Accuracy Assessment of Single-Frequency Precise Point Positioning Algorithms by Combining GPS Code-Pseudorange Measurements with Real-Time SSR Corrections

    PubMed Central

    Kim, Miso; Park, Kwan-Dong

    2017-01-01

    We have developed a suite of real-time precise point positioning programs to process GPS pseudorange observables, and validated their performance through static and kinematic positioning tests. To correct inaccurate broadcast orbits and clocks, and account for signal delays occurring from the ionosphere and troposphere, we applied State Space Representation (SSR) error corrections provided by the Seoul Broadcasting System (SBS) in South Korea. Site displacements due to solid earth tide loading are also considered for the purpose of improving the positioning accuracy, particularly in the height direction. When the developed algorithm was tested under static positioning, Kalman-filtered solutions produced a root-mean-square error (RMSE) of 0.32 and 0.40 m in the horizontal and vertical directions, respectively. For the moving platform, the RMSE was found to be 0.53 and 0.69 m in the horizontal and vertical directions. PMID:28598403

  13. Evaluation of image reconstruction algorithms encompassing Time-Of-Flight and Point Spread Function modelling for quantitative cardiac PET: phantom studies.

    PubMed

    Presotto, L; Gianolli, L; Gilardi, M C; Bettinardi, V

    2015-04-01

    To perform kinetic modelling quantification, PET dynamic data must be acquired in short frames, where different critical conditions are met. The accuracy of reconstructed images influences quantification. The added value of Time-Of-Flight (TOF) and Point Spread Function (PSF) in cardiac image reconstruction was assessed. A static phantom was used to simulate two extreme conditions: (i) the bolus passage and (ii) the steady uptake. Various count statistics and independent noise realisations were considered. A moving phantom filled with two different radionuclides was used to simulate: (i) a great range of contrasts and (ii) the cardio/respiratory motion. Analytical and iterative reconstruction (IR) algorithms also encompassing TOF and PSF modelling were evaluated. Both analytic and IR algorithms provided good results in all the evaluated conditions. The amount of bias introduced by IR was found to be limited. TOF allowed faster convergence and lower noise levels. PSF achieved near full myocardial activity recovery in static conditions. Motion degraded performances, but the addition of both TOF and PSF maintained the best overall behaviour. IR accounting for TOF and PSF can be recommended for the quantification of dynamic cardiac PET studies as they improve the results compared to analytic and standard IR.

  14. Automatic Image Decomposition

    DTIC Science & Technology

    2004-02-01

    optimal selection. Keywords: image decomposition, structure, texture, bounded variation, parameter selection, inpainting. Natural images... or DC gray-values, etc. This decomposition has been shown in [6] to be fundamental for image inpainting, the art of modifying an image in a non... technique exploited in [6] for image inpainting (see also [1, 9, 12, 14] for other related decomposition approaches). As we will see below, there

  15. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of an algorithm using decomposition methods for solving the binary classification problem of constructing a linear classifier based on the support vector machine (SVM) method. The application of decomposition reduces the volume of calculations, in particular due to the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a known data set for the binary classification problem.

  16. Image encryption using P-Fibonacci transform and decomposition

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Agaian, Sos; Chen, C. L. Philip

    2012-03-01

    Image encryption is an effective method to protect images or videos by transforming them into unrecognizable formats for different security purposes. To improve the security level of bit-plane decomposition based encryption approaches, this paper introduces a new image encryption algorithm that combines parametric bit-plane decomposition with bit-plane shuffling and resizing, pixel scrambling and data mapping. The algorithm utilizes the Fibonacci P-code for image bit-plane decomposition and the 2D P-Fibonacci transform for image encryption because they are parameter dependent. Any new or existing method can be used for shuffling the order of the bit-planes. Simulation analysis and comparisons are provided to demonstrate the algorithm's performance for image encryption. Security analysis shows the algorithm's resistance to several common attacks. The algorithm can be used to encrypt images, biometrics and videos.
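
    The record above describes the pipeline only at a high level. As a point of reference, the following minimal Python/NumPy sketch shows ordinary binary bit-plane decomposition and its lossless inverse; the paper's Fibonacci P-code planes are a parameter-dependent generalization of these binary planes, and the function names here are illustrative rather than taken from the paper.

      import numpy as np

      def bitplane_decompose(img: np.ndarray, n_planes: int = 8):
          """Split an 8-bit grayscale image into binary bit-planes.

          The paper uses parameter-dependent Fibonacci P-code planes instead of
          plain binary weights; this sketch only illustrates the standard binary
          case that those planes generalize.
          """
          img = img.astype(np.uint8)
          return [((img >> k) & 1).astype(np.uint8) for k in range(n_planes)]

      def bitplane_reconstruct(planes):
          """Recombine bit-planes (inverse of the decomposition above)."""
          img = np.zeros_like(planes[0], dtype=np.uint16)
          for k, plane in enumerate(planes):
              img += plane.astype(np.uint16) << k
          return img.astype(np.uint8)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          demo = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
          planes = bitplane_decompose(demo)
          # An encryption scheme would shuffle/scramble `planes` here before
          # recombining; without shuffling, reconstruction is lossless.
          assert np.array_equal(bitplane_reconstruct(planes), demo)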

  17. Decomposition of Sodium Tetraphenylborate

    SciTech Connect

    Barnes, M.J.

    1998-11-20

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.

  18. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  19. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  20. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
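
    A minimal sketch of the component-matching step described above, assuming the spatial maps of the reference and repeated decompositions are available as NumPy arrays; scipy.optimize.linear_sum_assignment implements the Hungarian method, and the function name and shapes are illustrative assumptions, not taken from the paper.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def match_components(ref_maps: np.ndarray, new_maps: np.ndarray):
          """Pair components of a repeated decomposition with reference components.

          ref_maps, new_maps : (n_components, n_voxels) spatial maps.
          Returns an index array such that new_maps[pairing[i]] is the best match
          for ref_maps[i], plus the matched absolute correlations.
          """
          # Correlation matrix between every reference and every new component.
          ref_z = (ref_maps - ref_maps.mean(1, keepdims=True)) / ref_maps.std(1, keepdims=True)
          new_z = (new_maps - new_maps.mean(1, keepdims=True)) / new_maps.std(1, keepdims=True)
          corr = ref_z @ new_z.T / ref_maps.shape[1]

          # Hungarian assignment maximizing total |correlation| (sign-invariant,
          # since ICA components are defined only up to sign).
          rows, cols = linear_sum_assignment(-np.abs(corr))
          return cols, np.abs(corr[rows, cols])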

  1. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart.

  2. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, offering an interesting approach to solving the problem with a reduced running time.

  3. Automatic classification of visual evoked potentials based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the time-domain characteristic extremes called waves. The decision process is complex, so the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure - based on wavelet decomposition and linear discriminant analysis - that ensures automatic classification of visual evoked potentials. The algorithm enables an individual case to be assigned to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96 which, from the medical point of view, is a very good result.
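
    A minimal sketch of the general "wavelet features plus linear discriminant analysis" pattern the abstract describes, assuming PyWavelets and scikit-learn; the wavelet family, decomposition level and feature set used by the authors are not given in the record, so the choices below (db4, level 5, energy and standard deviation per sub-band) are illustrative assumptions.

      import numpy as np
      import pywt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def wavelet_features(signal, wavelet="db4", level=5):
          """Summarize each sub-band of a VEP trace by energy and standard deviation."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          feats = []
          for c in coeffs:
              feats.extend([np.sum(c ** 2), np.std(c)])
          return np.array(feats)

      def classify(signals, labels):
          """signals: (n_trials, n_samples); labels: 0 = normal, 1 = pathological."""
          X = np.vstack([wavelet_features(s) for s in signals])
          clf = LinearDiscriminantAnalysis()
          return cross_val_score(clf, X, labels, cv=5).mean()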

  4. Analysis and Application of LIDAR Waveform Data Using a Progressive Waveform Decomposition Method

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Zhang, Z.; Hu, X.; Li, Z.

    2011-09-01

    Due to the rich information contained in a full waveform of airborne LiDAR (light detection and ranging) data, the analysis of full waveforms has been an active area in LiDAR applications. It is possible to digitally sample and store the entire reflected waveform of small-footprint systems instead of only discrete point clouds. Decomposition of waveform data, a key step in waveform data analysis, can be categorized into two typical methods: 1) Gaussian modelling methods such as the non-linear least-squares (NLS) algorithm and maximum likelihood estimation using the Expectation Maximization (EM) algorithm; 2) the pulse detection method, i.e., the Average Square Difference Function (ASDF). However, the Gaussian modelling methods strongly rely on initial parameters, whereas the ASDF omits the important parameter information of the waveform. In this paper, we propose a fast algorithm, the Progressive Waveform Decomposition (PWD) method, to extract local maxima, fit the echoes with Gaussian functions, and calculate other parameters from the raw waveform data. On the one hand, experiments are implemented to evaluate the PWD method and the results demonstrate its robustness and efficiency. On the other hand, with the PWD parametric analysis of the full waveform instead of a 3D point cloud, some special applications are investigated afterward.
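
    A simplified reading of the "extract local maxima, fit Gaussians" idea is sketched below in Python with SciPy; it is not the authors' PWD implementation. It repeatedly takes the strongest remaining peak, fits one Gaussian in a window around it, subtracts it and stops at the noise floor; all thresholds and window sizes are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(t, a, mu, sigma):
          return a * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))

      def progressive_decompose(t, w, noise_std, max_echoes=6):
          """Progressively extract Gaussian echo components from a 1D waveform."""
          dt = t[1] - t[0]
          residual = np.asarray(w, dtype=float).copy()
          components = []
          for _ in range(max_echoes):
              i = int(np.argmax(residual))
              if residual[i] < 3.0 * noise_std:          # stop at the noise floor
                  break
              p0 = [residual[i], t[i], 5.0 * dt]          # crude initial guesses
              window = np.abs(t - t[i]) < 20.0 * dt       # fit locally around the peak
              try:
                  popt, _ = curve_fit(gaussian, t[window], residual[window],
                                      p0=p0, maxfev=2000)
              except RuntimeError:
                  break
              components.append(tuple(popt))              # (amplitude, center, sigma)
              residual -= gaussian(t, *popt)
          return components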

  5. Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts.

    PubMed

    Pontifex, Matthew B; Gwizdala, Kathryn L; Parks, Andrew C; Billinger, Martin; Brunner, Clemens

    2017-03-01

    Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies.

  6. Generating functions for tensor product decomposition

    NASA Astrophysics Data System (ADS)

    Fuksa, Jan; Pošta, Severin

    2013-11-01

    The paper deals with the tensor product decomposition problem. Tensor product decompositions are of great importance in quantum physics. A short outline of the state of the art for the case of semisimple Lie groups is given. Generating functions are used to solve tensor products; the corresponding generating function is rational. The feature of this technique lies in the fact that the decompositions of all tensor products of all irreducible representations are solved simultaneously. Obtaining the generating function is a difficult task in general. We propose some changes to an algorithm using Patera-Sharp character generators to find this generating function, which simplifies the whole problem to simple operations over rational functions.

  7. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  8. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
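
    A hedged sketch of the two-stage idea: QR-decompose the class-centroid matrix, project onto its orthonormal range, then discriminate in that small subspace. The second stage here substitutes scikit-learn's standard LDA for the paper's own second-stage formulation, and the helper names are illustrative.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def lda_qr_fit(X, y):
          """Two-stage dimension reduction in the spirit of LDA/QR.

          Stage 1: QR-decompose the (features x classes) centroid matrix and
          project the data onto the orthonormal range of the centroids, which
          sidesteps the singular scatter matrices of high-dimensional data.
          Stage 2: run ordinary LDA in that small subspace (the published
          method formulates this stage somewhat differently).
          """
          classes = np.unique(y)
          centroids = np.column_stack([X[y == c].mean(axis=0) for c in classes])
          Q, _ = np.linalg.qr(centroids)          # (n_features, n_classes)
          Z = X @ Q                               # reduced representation
          clf = LinearDiscriminantAnalysis().fit(Z, y)
          return Q, clf

      def lda_qr_predict(Q, clf, X_new):
          return clf.predict(X_new @ Q)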

  9. Robust material decomposition for spectral CT

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Johnson, G. A.; Badea, C. T.

    2014-03-01

    There is ongoing interest in extending CT from anatomical to functional imaging. Recent successes with dual energy CT, the introduction of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents enable functional imaging capabilities via spectral CT. However, many challenges related to radiation dose, photon flux, and sensitivity still must be overcome. Here, we introduce a post-reconstruction algorithm called spectral diffusion that performs a robust material decomposition of spectral CT data in the presence of photon noise to address these challenges. Specifically, we use spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased relative to the source data. Spectral diffusion integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms. Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations.

  10. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRF's) of mechanical structures. The decomposition of FRF's is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and the sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRF's. An analytical example is presented along with experimental case studies taken from the machine tool industry.

  11. Comments on the "Meshless Helmholtz-Hodge decomposition".

    PubMed

    Bhatia, Harsh; Norgard, Gregory; Pascucci, Valerio; Bremer, Peer-Timo

    2013-03-01

    The Helmholtz-Hodge decomposition (HHD) is one of the fundamental theorems of fluids describing the decomposition of a flow field into its divergence-free, curl-free, and harmonic components. Solving for the HHD is intimately connected to the choice of boundary conditions which determine the uniqueness and orthogonality of the decomposition. This article points out that one of the boundary conditions used in a recent paper "Meshless Helmholtz-Hodge Decomposition" is, in general, invalid and provides an analytical example demonstrating the problem. We hope that this clarification on the theory will foster further research in this area and prevent undue problems in applying and extending the original approach.

  12. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

    In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. In every iteration we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and assign them evenly to the compute nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm exploits the advantages of parallel computing to significantly improve the computation speed of the matching pursuit decomposition, and it also scales well. Moreover, having each compute node search for only one optimal Morlet wavelet in every iteration is the most efficient implementation.
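
    A minimal sketch of one parallel iteration of this scheme, assuming Python's multiprocessing in place of the MPI setup described above; the Morlet-like atom, parameter grids and function names are illustrative assumptions, not the paper's implementation.

      import numpy as np
      from multiprocessing import Pool

      def _morlet_atom(t, t0, f, s):
          """Real-valued Morlet-like atom centered at t0 (illustrative form)."""
          return np.exp(-((t - t0) ** 2) / (2.0 * s ** 2)) * np.cos(2.0 * np.pi * f * (t - t0))

      def _best_atom(args):
          """Grid-search the (frequency, scale) that best matches one envelope peak."""
          t, x, t0 = args
          best = (-np.inf, None)
          for f in np.linspace(10.0, 60.0, 26):        # Hz, illustrative range
              for s in np.linspace(0.005, 0.05, 10):   # seconds, illustrative range
                  g = _morlet_atom(t, t0, f, s)
                  g /= np.linalg.norm(g) + 1e-12
                  score = abs(np.dot(x, g))
                  if score > best[0]:
                      best = (score, (t0, f, s))
          return best

      def parallel_iteration(t, x, peak_times, n_workers=4):
          """One matching-pursuit iteration: peaks are searched in parallel,
          mirroring the paper's distribution of envelope peaks across compute
          nodes (MPI there, multiprocessing here)."""
          with Pool(n_workers) as pool:
              return pool.map(_best_atom, [(t, x, t0) for t0 in peak_times])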

  13. Domain decomposition for implicit solvation models.

    PubMed

    Cancès, Eric; Maday, Yvon; Stamm, Benjamin

    2013-08-07

    This article is the first of a series of papers dealing with domain decomposition algorithms for implicit solvent models. We show that, in the framework of the COSMO model, with van der Waals molecular cavities and classical charge distributions, the electrostatic energy contribution to the solvation energy, usually computed by solving an integral equation on the whole surface of the molecular cavity, can be computed more efficiently by using an integral equation formulation of Schwarz's domain decomposition method for boundary value problems. In addition, the so-obtained potential energy surface is smooth, which is a critical property to perform geometry optimization and molecular dynamics simulations. The purpose of this first article is to detail the methodology, set up the theoretical foundations of the approach, and study the accuracies and convergence rates of the resulting algorithms. The full efficiency of the method and its applicability to large molecular systems of biological interest is demonstrated elsewhere.

  14. Clustering of Multispectral Airborne Laser Scanning Data Using Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Morsy, S.; Shaker, A.; El-Rabbany, A.

    2017-09-01

    With the evolution of LiDAR technology, multispectral airborne laser scanning systems are now available. The first operational multispectral airborne LiDAR sensor, the Optech Titan, acquires LiDAR point clouds at three different wavelengths (1.550, 1.064, 0.532 μm), allowing the acquisition of different spectral information of the land surface. Consequently, recent studies have been devoted to using the radiometric information (i.e., intensity) of the LiDAR data along with the geometric information (e.g., height) for classification purposes. In this study, a data clustering method based on Gaussian decomposition is presented. First, a ground filtering mechanism is applied to separate non-ground from ground points. Then, three normalized difference vegetation indices (NDVIs) are computed for both non-ground and ground points, followed by the construction of a histogram from each NDVI. A Gaussian function model is used to decompose the histograms into a number of Gaussian components. The maximum likelihood estimate of the Gaussian components is then optimized using the Expectation-Maximization algorithm. The intersection points of adjacent Gaussian components are subsequently used as threshold values by which the different classes can be clustered. This method is used to classify the terrain of an urban area in Oshawa, Ontario, Canada, into four main classes, namely roofs, trees, asphalt and grass. It is shown that the proposed method achieves an overall accuracy of up to 95.1% using different NDVIs.
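
    A minimal sketch of the core step, assuming scikit-learn's GaussianMixture as the EM fitter: decompose the NDVI distribution into Gaussian components and take the crossing points of adjacent weighted components as class thresholds. The number of components, grid resolution and function name are illustrative assumptions.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def ndvi_thresholds(ndvi_values, n_components=2):
          """Fit a Gaussian mixture to NDVI values and return the crossing points
          of adjacent components as clustering thresholds."""
          gmm = GaussianMixture(n_components=n_components, random_state=0)
          gmm.fit(ndvi_values.reshape(-1, 1))

          order = np.argsort(gmm.means_.ravel())
          means = gmm.means_.ravel()[order]
          stds = np.sqrt(gmm.covariances_.ravel()[order])
          weights = gmm.weights_[order]

          thresholds = []
          grid = np.linspace(ndvi_values.min(), ndvi_values.max(), 2000)
          for i in range(n_components - 1):
              # Weighted normal densities of two adjacent components.
              lo = weights[i] * np.exp(-0.5 * ((grid - means[i]) / stds[i]) ** 2) / stds[i]
              hi = weights[i + 1] * np.exp(-0.5 * ((grid - means[i + 1]) / stds[i + 1]) ** 2) / stds[i + 1]
              between = (grid > means[i]) & (grid < means[i + 1])
              if not np.any(between):
                  continue
              # Intersection = point between the two means where the curves cross.
              cross = grid[between][np.argmin(np.abs(lo[between] - hi[between]))]
              thresholds.append(float(cross))
          return thresholds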

  15. Application of modified Martinez-Silva algorithm in determination of net cover

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Łukasz; Grobelna, Iwona

    2016-12-01

    In this article we present modifications of the Martinez-Silva algorithm, which allows for the determination of place invariants (p-invariants) of a Petri net. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for the separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and are focused on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.

  16. Splitting algorithms for the wavelet transform of first-degree splines on nonuniform grids

    NASA Astrophysics Data System (ADS)

    Shumilov, B. M.

    2016-07-01

    For first-degree splines with nonuniform knots, a new type of wavelet with a biased support is proposed. Using splitting with respect to the even and odd knots, a new wavelet decomposition algorithm is proposed in the form of the solution of a tridiagonal system of linear algebraic equations for the wavelet coefficients. The application of the proposed implicit scheme to the point prediction of time series is investigated for the first time. Results of numerical experiments on the prediction accuracy and the compression of spline wavelet decompositions are presented.

  17. The comparison of algorithms for key points extraction in simplification of hybrid digital terrain models. (Polish Title: Porównanie algorytmów ekstrakcji punktów istotnych w upraszczaniu numerycznych modeli terenu o strukturze hybrydowej)

    NASA Astrophysics Data System (ADS)

    Bakuła, K.

    2014-12-01

    The presented research concerns methods for the reduction of elevation data contained in a digital terrain model (DTM) from airborne laser scanning (ALS) for hydraulic modelling. The reduction is necessary in the preparation of large geospatial datasets describing terrain relief. It should not be performed as regular data filtering, which often occurs in practice, because such a method leads to a number of forms important for hydraulic modelling being missed. One of the proposed solutions for the reduction of elevation data contained in a DTM is to change the regular grid into a hybrid structure with regularly distributed points and irregularly located critical points. The purpose of this paper is to compare algorithms for extracting these key points from the DTM. They are used in hybrid model generation as a part of an elevation data reduction process that retains DTM accuracy and reduces the size of the output files. In the experiments, the following algorithms were tested: Topographic Position Index (TPI), Very Important Points (VIP) and Z-tolerance. Their effectiveness in reduction (maintaining accuracy and reducing datasets) was evaluated with respect to the input DTM from ALS. The best results were obtained for the Z-tolerance algorithm, but they do not diminish the capabilities of the other two algorithms, VIP and TPI, which can generalize the DTM quite well. The results confirm the possibility of obtaining a high degree of reduction, reaching only a few percent of the input data, with a relatively small decrease of vertical DTM accuracy to a few centimetres.
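
    A minimal sketch of one of the compared extractors (a TPI-style criterion), assuming the DTM is a regular grid stored as a 2D NumPy array; the window size, threshold and function name are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def tpi_key_points(dtm, window=9, tpi_threshold=0.5):
          """Extract key points from a gridded DTM with a TPI-like criterion.

          TPI (Topographic Position Index) is the difference between a cell's
          elevation and the mean elevation of its neighbourhood; cells with a
          large |TPI| lie on ridges, channels or breaklines and are kept as the
          irregularly located critical points of a hybrid model.
          """
          mean_z = uniform_filter(dtm.astype(float), size=window, mode="nearest")
          tpi = dtm - mean_z
          rows, cols = np.nonzero(np.abs(tpi) > tpi_threshold)
          return np.column_stack([rows, cols, dtm[rows, cols]])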

  18. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  19. Analyzing algorithms for nonlinear and spatially nonuniform phase shifts in the liquid crystal point diffraction interferometer. 1998 summer research program for high school juniors at the University of Rochester`s Laboratory for Laser Energetics: Student research reports

    SciTech Connect

    Jain, N.

    1999-03-01

    Phase-shifting interferometry has many advantages, and the phase-shifting nature of the Liquid Crystal Point Diffraction Interferometer (LCPDI) promises to provide significant improvement over other current OMEGA wavefront sensors. However, while phase-shifting capabilities improve its accuracy as an interferometer, phase shifting itself introduces errors. Phase-shifting algorithms are designed to eliminate certain types of phase-shift errors, and it is important to choose an algorithm that is best suited for use with the LCPDI. Using polarization microscopy, the authors have observed a correlation between LC alignment around the microsphere and fringe behavior. After designing a procedure to compare phase-shifting algorithms, they were able to predict the accuracy of two particular algorithms through computer modeling of device-specific phase-shift errors.

  20. Point set registration: coherent point drift.

    PubMed

    Myronenko, Andriy; Song, Xubo

    2010-12-01

    Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
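
    To make the EM structure described above concrete, the sketch below is a deliberately stripped-down, translation-only toy version of the CPD iteration in Python/NumPy; it omits the rotation/nonrigid coherence constraints and the explicit outlier component of the published algorithm, and the function name is illustrative.

      import numpy as np

      def cpd_translation(X, Y, n_iter=50):
          """Translation-only toy version of the CPD EM iteration.

          X: (N, D) target points; Y: (M, D) source points (GMM centroids).
          Only the GMM/EM core is kept to show the structure of the algorithm.
          """
          N, D = X.shape
          M = Y.shape[0]
          t = np.zeros(D)
          sigma2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2) / (D * N * M)
          for _ in range(n_iter):
              # E-step: posterior probability that centroid m generated point n.
              diff = X[:, None, :] - (Y[None, :, :] + t)            # (N, M, D)
              logp = -np.sum(diff ** 2, axis=2) / (2.0 * sigma2)    # (N, M)
              P = np.exp(logp - logp.max(axis=1, keepdims=True))
              P /= P.sum(axis=1, keepdims=True)
              # M-step: re-estimate the translation and the shared variance.
              Np = P.sum()
              t = np.sum(P[..., None] * (X[:, None, :] - Y[None, :, :]), axis=(0, 1)) / Np
              resid = np.sum((X[:, None, :] - (Y[None, :, :] + t)) ** 2, axis=2)
              sigma2 = max(np.sum(P * resid) / (Np * D), 1e-9)
          return t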

  1. Decomposition of small-footprint full waveform LiDAR data based on generalized Gaussian model and grouping LM optimization

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Zhou, Weiwei; Zhang, Liang; Wang, Suyuan

    2017-04-01

    Full waveform airborne Light Detection And Ranging (LiDAR) data contains abundant information which may overcome some deficiencies of the discrete LiDAR point cloud data provided by conventional LiDAR systems. Processing full waveform data to extract more information than coordinate values alone is of great significance for potential applications. The Levenberg-Marquardt (LM) algorithm is a traditional method used to estimate the parameters of a Gaussian model when Gaussian decomposition of full waveform LiDAR data is performed. This paper employs the generalized Gaussian mixture function to fit a waveform, and proposes using the grouping LM algorithm to optimize the parameters of the function. It is shown that the grouping LM algorithm overcomes the common drawbacks of the conventional LM for parameter optimization, such as the final results being influenced by the initial parameters and possible algorithm interruption caused by non-numerical elements occurring in the Jacobian matrix. The precision of the point cloud generated by the grouping LM is evaluated by comparing it with those provided by the LiDAR system and those generated by the conventional LM. Results from both simulation and real data show that the proposed algorithm can generate a higher-quality point cloud, in terms of point density and precision, and can extract other information, such as echo location and pulse width, more precisely as well.

  2. Reactivity of fluoroalkanes in reactions of coordinated molecular decomposition

    NASA Astrophysics Data System (ADS)

    Pokidova, T. S.; Denisov, E. T.

    2017-08-01

    Experimental results on the coordinated molecular decomposition of RF fluoroalkanes to olefin and HF are analyzed using the model of intersecting parabolas (IPM). The kinetic parameters are calculated to allow estimates of the activation energy (E) and rate constant (k) of these reactions, based on enthalpy and IPM algorithms. Parameters E and k are found for the first time for eight RF decomposition reactions. The factors that affect activation energy E of RF decomposition (the enthalpy of the reaction, the electronegativity of the atoms of reaction centers, and the dipole-dipole interaction of polar groups) are determined. The values of E and k for reverse reactions of addition are estimated.

  3. Feature based volume decomposition for automatic hexahedral mesh generation

    SciTech Connect

    LU,YONG; GADH,RAJIT; TAUTGES,TIMOTHY J.

    2000-02-21

    Much progress has been made in recent years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry are not available yet, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped with appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on several complicated manufactured parts.

  4. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, the algorithm1 is based on simplified assumptions about the optical simulation model and therefore its usage on real layouts is limited. Recently AMSL2 also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. The approach2 also potentially generates too many stitches. In this
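
    The rule-based baseline that the abstract criticizes can be sketched directly as graph coloring. The Python sketch below uses NetworkX and, as a simplification, measures centroid-to-centroid distance rather than true polygon spacing; the function name, the greedy coloring strategy and the distance simplification are illustrative assumptions, not the paper's model-based method.

      import networkx as nx
      from itertools import combinations

      def rule_based_decomposition(centroids, dmin, k=3):
          """Rule-based multiple-patterning decomposition as graph coloring.

          centroids: list of (x, y) feature positions. Features closer than dmin
          conflict and must go on different masks; a greedy k-coloring assigns
          masks. Returns None if more than k colors are needed, i.e. the layout
          is not k-decomposable under this rule (greedy coloring is heuristic).
          """
          G = nx.Graph()
          G.add_nodes_from(range(len(centroids)))
          for i, j in combinations(range(len(centroids)), 2):
              dx = centroids[i][0] - centroids[j][0]
              dy = centroids[i][1] - centroids[j][1]
              if (dx * dx + dy * dy) ** 0.5 < dmin:
                  G.add_edge(i, j)
          coloring = nx.coloring.greedy_color(G, strategy="largest_first")
          if max(coloring.values(), default=0) >= k:
              return None
          return coloring   # node -> mask index in {0, ..., k-1}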

  5. Algorithms used for read-out optical system pointing to multiplexed computer generated 1D-Fourier holograms and decoding the encrypted information

    NASA Astrophysics Data System (ADS)

    Donchenko, Sergey S.; Odinokov, Sergey B.; Betin, Alexandr U.; Hanevich, Pavel; Semishko, Sergey; Zlokazov, Evgenii Y.

    2017-05-01

    A holographic disk reading device for the recovery of computer-generated Fourier holograms (CGFH) is described, and its principle of operation is shown. Approaches to developing the algorithms used in this device, for pointing (guidance) and for decoding, are analyzed, and the results of experimental research are presented.

  6. Boundary control problem of linear Stokes equation with point observations

    SciTech Connect

    Ding, Z.

    1994-12-31

    We discuss the linear quadratic regulator (LQR) problems of the linear Stokes system with point observations on the boundary and box constraints on the boundary control. Using hydropotential theory, we prove that the LQR problems without box constraints on the control do not admit any nontrivial solution, while the LQR problems with box constraints have a unique solution. The optimal control is given explicitly, and its singular behavior is displayed explicitly through a decomposition formula. Based upon the characteristic formula of the optimal control, a generic numerical algorithm is given for solving the box-constrained LQR problems.

  7. Contrast Enhancement Based on Intrinsic Image Decomposition.

    PubMed

    Yue, Huanjing; Yang, Jingyu; Sun, Xiaoyan; Wu, Feng; Hou, Chunping

    2017-05-10

    In this paper, we propose to introduce intrinsic image decomposition priors into decomposition models for contrast enhancement. Since image decomposition is a highly ill-posed problem, we introduce constraints on both the reflectance and illumination layers to yield a highly reliable solution. We regularize the reflectance layer to be piecewise constant by introducing a weighted ℓ1-norm constraint on neighboring pixels according to their color similarity, so that the decomposed reflectance is not affected much by the illumination information. The illumination layer is regularized by a piecewise smoothness constraint. The proposed model is effectively solved by the split Bregman algorithm. Then, by adjusting the illumination layer, we obtain the enhancement result. To avoid potential color artifacts introduced by illumination adjustment and to reduce computational complexity, the proposed decomposition model is performed on the value channel in HSV space. Experimental results demonstrate that the proposed method performs well for a wide variety of images, and achieves better or comparable subjective and objective quality compared with state-of-the-art methods.

  8. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.

  9. Tensor Decomposition for Signal Processing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Nicholas D.; De Lathauwer, Lieven; Fu, Xiao; Huang, Kejun; Papalexakis, Evangelos E.; Faloutsos, Christos

    2017-07-01

    Tensors, or multi-way arrays, are functions of three or more indices (i, j, k, ...), similar to matrices (two-way arrays), which are functions of two indices (r, c) for (row, column). Tensors have a rich history, stretching over almost a century and touching upon numerous disciplines, but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth and depth that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
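
    Since the overview highlights alternating optimization, a bare-bones CP-ALS sketch for a 3-way tensor is given below in Python/NumPy; the unfold/cp_als helpers are illustrative names, not from the article, and no normalization, convergence check or line search is included.

      import numpy as np
      from scipy.linalg import khatri_rao

      def unfold(X, mode):
          """Mode-n unfolding: move the mode to the front and flatten the rest (C order)."""
          return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

      def cp_als(X, rank, n_iter=100, seed=0):
          """Plain alternating least squares for a rank-R CP decomposition of a
          3-way tensor X; returns factor matrices A, B, C with
          X[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r]."""
          rng = np.random.default_rng(seed)
          I, J, K = X.shape
          A = rng.standard_normal((I, rank))
          B = rng.standard_normal((J, rank))
          C = rng.standard_normal((K, rank))
          for _ in range(n_iter):
              # Each factor is the least-squares solution given the other two
              # (Khatri-Rao products ordered to match the C-order unfoldings).
              A = np.linalg.lstsq(khatri_rao(B, C), unfold(X, 0).T, rcond=None)[0].T
              B = np.linalg.lstsq(khatri_rao(A, C), unfold(X, 1).T, rcond=None)[0].T
              C = np.linalg.lstsq(khatri_rao(A, B), unfold(X, 2).T, rcond=None)[0].T
          return A, B, C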

  10. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  11. Hydrazine decomposition and other reactions

    NASA Technical Reports Server (NTRS)

    Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)

    1978-01-01

    This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.

  12. Decomposing Current Mortality Differences Into Initial Differences and Differences in Trends: The Contour Decomposition Method.

    PubMed

    Jdanov, Dmitri A; Shkolnikov, Vladimir M; van Raalte, Alyson A; Andreev, Evgeny M

    2017-08-01

    This study proposes a new decomposition method that permits a difference in an aggregate measure at a final time point to be split into additive components corresponding to the initial differences in the event rates of the measure and differences in trends in these underlying event rates. For instance, when studying divergence in life expectancy, this method allows researchers to more easily contrast age-specific mortality trends between populations by controlling for initial age-specific mortality differences. Two approaches are assessed: (1) an additive change method that uses logic similar to cause-of-death decomposition, and (2) a contour decomposition method that extends the stepwise replacement algorithm along an age-period demographic contour. The two approaches produce similar results, but the contour method is more widely applicable. We provide a full description of the contour replacement method and examples of its application to life expectancy and lifetime disparity differences between the United States and England and Wales in the period 1980-2010.

  13. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    SciTech Connect

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces, each matched with an appropriate meshing algorithm, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is a part of the feature-based meshing methodology, presents work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination, to extract decomposition features; (2) Cutting Surfaces Generation, to form the "tailored" cutting surfaces; (3) Body Decomposition, to get the imprinted volumes; and (4) Meshing Algorithm Assignment, to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended to be more general. Results are demonstrated on several parts with complicated topology and geometry.

  14. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    SciTech Connect

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-15

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart–Thomas–Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code.

  15. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.

  16. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. This is the first time Schur decomposition has been used to embed a color image watermark in a color host image, as distinct from using a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find that there is a strong correlation between the element in the second row, first column and the element in the third row, first column. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step of the SVD, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.

  17. Yield-aware decomposition for LELE double patterning

    NASA Astrophysics Data System (ADS)

    Kohira, Yukihide; Yokoyama, Yoko; Kodama, Chikaaki; Takahashi, Atsushi; Nojima, Shigeki; Tanaka, Satoshi

    2014-03-01

    In this paper, we propose a fast layout decomposition algorithm for litho-etch-litho-etch (LELE) type double patterning that takes the yield into account. Our proposed algorithm extracts stitch candidates properly from complex layouts including various patterns, line widths and pitches. The planarity of the conflict graph and the independence of stitch candidates are utilized to efficiently obtain a minimum-cost layout decomposition for higher yield. The validity of our proposed algorithm is confirmed using benchmark layout patterns from the literature as well as layout patterns generated to fit the target manufacturing technologies as closely as possible. In our experiments, our proposed algorithm is 7.7 times faster than an existing method on average.

  18. Fast algorithm of byte-to-byte wavelet transform for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.

    2002-11-01

    A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the image transformation with the second-order Cohen-Daubechies-Feauveau wavelet. It uses the lifting scheme for the calculations. The proposed algorithm is based on the "checkerboard" computation scheme for a non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, at each level of decomposition only one detail image is produced, which simplifies further analysis for data compression. The calculations are rather simple, without any floating-point operations, allowing the implementation of the designed algorithm in fixed-point DSP processors for fast, near-real-time processing. The proposed algorithm does not achieve perfect restoration of the processed data because of the rounding introduced at each level of decomposition/restoration to perform operations with byte-represented data. The designed algorithm was tested on different images. The criterion used to quantitatively estimate the quality of the restored images was the well-known PSNR. For the visual quality estimation, error maps between the original and restored images were calculated. The obtained simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases but remains sufficiently high even after 6 levels. The introduced distortions are concentrated in the vicinity of high-spatial-activity details and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
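
    The record describes a non-separable 2D checkerboard scheme with Haar handling at the borders; as a simpler point of reference, the Python sketch below shows integer-to-integer lifting for the 1D 5/3 (CDF 2,2) wavelet with plain edge replication. It is an illustration of lifting with integer arithmetic under those simplifying assumptions, not the paper's algorithm; even-length input is assumed.

      import numpy as np

      def lift_53_forward(x):
          """One level of the reversible 5/3 (CDF 2,2) lifting transform on a 1D
          integer signal of even length. Returns approximation s and detail d."""
          x = np.asarray(x, dtype=np.int64)
          even, odd = x[0::2], x[1::2]
          even_next = np.append(even[1:], even[-1])      # replicate last even sample
          d = odd - ((even + even_next) >> 1)             # predict step
          d_prev = np.insert(d[:-1], 0, d[0])             # replicate first detail
          s = even + ((d_prev + d + 2) >> 2)              # update step
          return s, d

      def lift_53_inverse(s, d):
          """Exact inverse of lift_53_forward (integer-to-integer, lossless)."""
          d_prev = np.insert(d[:-1], 0, d[0])
          even = s - ((d_prev + d + 2) >> 2)
          even_next = np.append(even[1:], even[-1])
          odd = d + ((even + even_next) >> 1)
          x = np.empty(2 * len(s), dtype=np.int64)
          x[0::2], x[1::2] = even, odd
          return x

      if __name__ == "__main__":
          sig = np.random.default_rng(1).integers(0, 256, size=32)
          s, d = lift_53_forward(sig)
          assert np.array_equal(lift_53_inverse(s, d), sig)   # perfect reconstruction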

  19. Minimax eigenvector decomposition for data hiding

    NASA Astrophysics Data System (ADS)

    Davidson, Jennifer

    2005-09-01

    Steganography is the study of hiding information within a covert channel in order to transmit a secret message. Any public media, such as image data, audio data, or even file packets, can be used as a covert channel. This paper presents an embedding algorithm that hides a message in an image using a technique based on a nonlinear matrix transform called the minimax eigenvector decomposition (MED). The MED is a minimax-algebra version of the well-known singular value decomposition (SVD). Minimax algebra is a matrix algebra based on the algebraic operations of maximum and addition, developed initially for use in operations research and extended later to represent a class of nonlinear image processing operations. The discrete mathematical morphology operations of dilation and erosion, for example, are contained within minimax algebra. The MED is much quicker to compute than the SVD and avoids the numerical issues of the SVD because the operations involved are only integer addition, subtraction, and comparison. We present the algorithm to embed data using the MED, show examples applied to image data, and discuss limitations and advantages as compared with another similar algorithm.

  20. Problem decomposition and domain-based parallelism via group theoretic principles

    SciTech Connect

    Makai, M.; Orechwa, Y.

    1997-10-01

    A systematic approach based on group theoretic principles is presented for the decomposition of the solution algorithm of boundary value problems specified over symmetric domains, which is amenable to implementation for parallel computation. The principles are applied to the linear transport equation in general, and the decomposition is demonstrated for a square node in particular.

  1. Composite structured mesh generation with automatic domain decomposition in complex geometries

    USDA-ARS?s Scientific Manuscript database

    This paper presents a novel automatic domain decomposition method to generate quality composite structured meshes in complex domains with arbitrary shapes, in which quality structured mesh generation still remains a challenge. The proposed decomposition algorithm is based on the analysis of an initi...

  2. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  3. Detailed Chemical Kinetic Modeling of Hydrazine Decomposition

    NASA Technical Reports Server (NTRS)

    Meagher, Nancy E.; Bates, Kami R.

    2000-01-01

    The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996; however, many of the elementary steps included had outdated rate expressions, and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.

  4. Full-waveform LiDAR echo decomposition based on wavelet decomposition and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Duan; Xu, Lijun; Li, Xiaolu

    2017-04-01

    To measure the distances and properties of the objects within a laser footprint, a decomposition method for full-waveform light detection and ranging (LiDAR) echoes is proposed. In this method, firstly, wavelet decomposition is used to filter the noise and estimate the noise level in a full-waveform echo. Secondly, peak and inflection points of the filtered full-waveform echo are used to detect the echo components in the filtered full-waveform echo. Lastly, particle swarm optimization (PSO) is used to remove the noise-caused echo components and optimize the parameters of the most probable echo components. Simulation results show that the wavelet-decomposition-based filter yields better SNR improvement and decomposition success rates than Wiener and Gaussian smoothing filters. In addition, the noise level estimated using the wavelet-decomposition-based filter is more accurate than that estimated using the two other commonly used methods. Experiments were carried out to evaluate the proposed method, which was compared with our previous method (called GS-LM for short). In the experiments, a lab-built full-waveform LiDAR system was utilized to provide eight types of full-waveform echoes scattered from three objects at different distances. Experimental results show that the proposed method has higher success rates for the decomposition of full-waveform echoes and more accurate parameter estimation for echo components than GS-LM. The proposed method based on wavelet decomposition and PSO can decompose more complicated full-waveform echoes, estimating the multi-level distances of the objects and measuring the properties of the objects within a laser footprint.
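
    The three stages (wavelet denoising, component detection at peaks, parameter refinement) can be sketched as follows; scipy's curve_fit stands in here for the PSO refinement used in the paper, and the wavelet name, threshold rule, and peak criterion are illustrative assumptions only.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks
    from scipy.optimize import curve_fit

    def decompose_waveform(t, w, wavelet="sym8", level=4):
        """Sketch: denoise a waveform, detect candidate echoes, fit Gaussians."""
        # 1) Wavelet denoising with a universal threshold on detail coefficients
        coeffs = pywt.wavedec(w, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise-level estimate
        thr = sigma * np.sqrt(2 * np.log(len(w)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        w_f = pywt.waverec(coeffs, wavelet)[: len(w)]
        # 2) Detect candidate echo components at local peaks above the noise level
        peaks, _ = find_peaks(w_f, height=3 * sigma)
        # 3) Fit a sum of Gaussians initialised at the detected peaks
        def model(t, *p):
            out = np.zeros_like(t, dtype=float)
            for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
                out += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
            return out
        p0 = np.ravel([[w_f[i], t[i], 5 * (t[1] - t[0])] for i in peaks])
        popt, _ = curve_fit(model, t, w_f, p0=p0, maxfev=20000)
        return w_f, popt.reshape(-1, 3)                          # rows: (amplitude, center, width)
    ```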

  5. A high resolution spectrum reconstruction algorithm using compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing

    2015-07-01

    This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing in composite structures. The strain field of a corrugated structure is simulated by finite element analysis. Then the reflected spectrum is calculated using an improved transfer matrix algorithm. A sparse dictionary is trained with the K-means singular value decomposition algorithm. In the test, a spectrum with a limited number of sample points can be obtained, and the high-resolution spectrum is reconstructed by solving the sparse representation equation. Compared with other conventional bases, the effect of this method is better. The match rate between the recovered spectrum and the original spectrum is over 95%.

  6. Iterative image-domain decomposition for dual-energy CT.

    PubMed

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-01

    Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but

  7. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the
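
    The optimization structure described above (a least-squares data term weighted by the inverse variance plus an edge-aware smoothness penalty, solved with conjugate gradient) can be illustrated with a much-reduced 1-D stand-in; the function and parameter names below are hypothetical, and the full variance-covariance handling of the actual method is collapsed to a diagonal weight.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    def regularized_decomposition(x_direct, inv_var, lam, edge_weight):
        """1-D sketch of penalized weighted least squares:
        minimize (x - x_direct)^T W (x - x_direct) + lam * x^T D^T E D x,
        where W holds inverse variances and E down-weights detected edges."""
        n = x_direct.size
        W = diags(inv_var)                                          # data-fidelity weights
        D = diags([np.ones(n - 1), -np.ones(n - 1)], [0, 1], shape=(n - 1, n))
        R = D.T @ diags(edge_weight) @ D                            # smoothness penalty
        x, info = cg(W + lam * R, W @ x_direct, atol=1e-10)
        return x

    # Hypothetical noisy "decomposed" signal with an edge at the midpoint
    x0 = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
    w_edge = np.ones(99)
    w_edge[49] = 1e-3                                               # small weight keeps the edge sharp
    x_smooth = regularized_decomposition(x0, np.ones(100), lam=5.0, edge_weight=w_edge)
    ```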

  8. A fast, space-efficient average-case algorithm for the 'Greedy' Triangulation of a point set, and a proof that the Greedy Triangulation is not approximately optimal

    NASA Technical Reports Server (NTRS)

    Manacher, G. K.; Zobrist, A. L.

    1979-01-01

    The paper addresses the problem of how to find the Greedy Triangulation (GT) efficiently in the average case. It is noted that it remains an open problem whether there exists an efficient approximation algorithm to the Optimum Triangulation. It is first shown how, in the worst case, the GT may be obtained in time O(n^3) and space O(n). Attention is then given to how the algorithm may be slightly modified to produce a time O(n^2), space O(n) solution in the average case. Finally, it is mentioned that Gilbert has found a worst case solution using totally different techniques that require space O(n^2) and time O(n^2 log n).
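
    For concreteness, a plain brute-force version of the greedy triangulation (consider candidate edges by increasing length, keep each edge that does not properly cross an accepted one) is sketched below; it assumes points in general position and makes no attempt at the space- or time-efficient bookkeeping that the paper is about.

    ```python
    import itertools
    import numpy as np

    def _cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def _proper_intersect(p1, p2, p3, p4):
        """True if open segments (p1,p2) and (p3,p4) cross at an interior point."""
        d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
        d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def greedy_triangulation(points):
        """Brute-force greedy triangulation: O(n^2) candidate edges, checked
        against every accepted edge, giving roughly cubic work overall."""
        pts = [tuple(p) for p in points]
        candidates = sorted(itertools.combinations(range(len(pts)), 2),
                            key=lambda e: np.hypot(pts[e[0]][0] - pts[e[1]][0],
                                                   pts[e[0]][1] - pts[e[1]][1]))
        accepted = []
        for i, j in candidates:
            if all(not _proper_intersect(pts[i], pts[j], pts[a], pts[b])
                   for a, b in accepted):
                accepted.append((i, j))
        return accepted
    ```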

  9. Single-channel and multi-channel orthogonal matching pursuit for seismic trace decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Xuan; Zhang, Xuebing; Liu, Cai; Lu, Qi

    2017-02-01

    The conventional matching pursuit (MP) algorithm can decompose a 1D signal into a set of wavelet atoms adaptively. For reflection seismic data, several applicable algorithms based on the MP decomposition have been developed, such as single-channel matching pursuit (SCMP) and multi-channel matching pursuit (MCMP). However, these algorithms cannot always select the optimal atoms, which results in less meaningful decompositions. To overcome this limitation, we introduce the idea of orthogonal matching pursuit into a multi-channel decomposition scheme, which we refer to as multi-channel orthogonal matching pursuit (MCOMP). Each iteration of the proposed MCOMP can extract a more reasonable atom from a redundant Morlet wavelet dictionary, as the MCMP decomposition does, and estimate the corresponding amplitude more accurately by solving a least-squares problem. As a counterpart to SCMP, we also simplified the MCOMP decomposition to single-channel orthogonal matching pursuit (SCOMP) for the decomposition of an individual seismic trace. We tested the proposed SCOMP algorithm on a synthetic signal and a field seismic trace. A field marine dataset example then showed the relatively high resolution of the proposed MCOMP method with applications to the detection of low-frequency anomalies. These application examples all demonstrate more meaningful decomposition results and the relatively high convergence speed of the proposed algorithms.
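
    The key step that distinguishes OMP from plain MP, the joint least-squares re-fit of all selected atoms, can be sketched in a few lines; the dictionary here is a generic matrix of (ideally unit-norm) atoms rather than the redundant Morlet dictionary used in the paper, and the stopping rule is simply a fixed number of atoms.

    ```python
    import numpy as np

    def omp(D, y, n_atoms):
        """Minimal orthogonal matching pursuit sketch: at each iteration pick
        the dictionary atom most correlated with the residual, then re-fit the
        amplitudes of all selected atoms jointly by least squares."""
        residual = y.astype(float).copy()
        support, coeffs = [], None
        for _ in range(n_atoms):
            idx = int(np.argmax(np.abs(D.T @ residual)))
            if idx not in support:
                support.append(idx)
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        return support, coeffs
    ```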

  10. Separable States with Unique Decompositions

    NASA Astrophysics Data System (ADS)

    Ha, Kil-Chan; Kye, Seung-Hyeok

    2014-05-01

    We search for faces of the convex set consisting of all separable states, which are affinely isomorphic to simplices, to get separable states with unique decompositions. In the two-qutrit case, we found that six product vectors spanning a five-dimensional space give rise to a face isomorphic to the 5-dimensional simplex with six vertices, under a suitable linear independence assumption. If the partial conjugates of the six product vectors also span a 5-dimensional space, then this face is inscribed in the face for PPT states whose boundary shares the fifteen 3-simplices on the boundary of the 5-simplex. The remaining boundary points consist of PPT entangled edge states of rank four. We also show that every edge state of rank four arises in this way. If the partial conjugates of the above six product vectors span a 6-dimensional space, then we have a face isomorphic to the 5-simplex, whose interior consists of separable states with unique decompositions, but with non-symmetric ranks. We also construct a face isomorphic to the 9-simplex. As applications, we give answers to questions in the literature (Chen and Djoković, J Math Phys 54:022201, 2013; Chen and Djoković, Commun Math Phys 323:241-284, 2013), and construct 3 ⊗ 3 PPT states of type (9,5). For the qubit-qudit cases with d ≥ 3, we also show that (d + 1)-dimensional subspaces give rise to faces isomorphic to the d-simplices, in most cases.

  11. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.
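
    As a point of reference for the first proposed algorithm, the classical alternating Schwarz iteration it builds on can be written down for a deterministic 1-D Poisson problem with two overlapping subdomains; the stochastic transmission conditions (matching conditional mean and variance) are the paper's contribution and are not reproduced here.

    ```python
    import numpy as np

    def schwarz_poisson_1d(f, n=101, overlap=10, iters=50):
        """Alternating (multiplicative) Schwarz for -u'' = f on [0, 1],
        u(0) = u(1) = 0, with two overlapping subdomains."""
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        u = np.zeros(n)
        mid = n // 2
        left = slice(1, mid + overlap)           # interior nodes of subdomain 1
        right = slice(mid - overlap, n - 1)      # interior nodes of subdomain 2

        def solve_sub(sl):
            m = sl.stop - sl.start
            A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            rhs = f(x[sl]).astype(float)
            rhs[0] += u[sl.start - 1] / h**2     # Dirichlet data taken from the other subdomain
            rhs[-1] += u[sl.stop] / h**2
            u[sl] = np.linalg.solve(A, rhs)

        for _ in range(iters):
            solve_sub(left)
            solve_sub(right)
        return x, u

    x, u = schwarz_poisson_1d(lambda x: np.ones_like(x))   # converges to u(x) = x(1-x)/2
    ```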

  12. MAUD (Multiattribute Utility Decomposition): An Interactive Computer Program for the Structuring, Decomposition, and Recomposition of Preferences between Multiattributed Alternatives

    DTIC Science & Technology

    1981-08-01

    This report describes the Multiattribute Utility Decomposition (MAUD) program within the context of Multiattribute Utility Theory (MAUT), which is introduced in Section 3.2. It presents a decision-theoretic rationale for the MAUD algorithms, with special reference to multiattribute utility theory, as well as the programming logic, an investigation of preference structure, and notes on MAUD operation.

  13. Low complexity interference alignment algorithms for desired signal power maximization problem of MIMO channels

    NASA Astrophysics Data System (ADS)

    Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang

    2012-12-01

    In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. The users' precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power into a new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has a closed-form solution obtained with an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. In the first part, the algorithm iterates with Householder transformations to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one-dimensional optimization subproblems. From any initial point, this part obtains precoders and decoders with low leakage interference in a short time. In the second part, to exploit the advantage of MAMA, the algorithm continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that, in one iteration, both proposed algorithms generally have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve performance similar to the MSP algorithm with less executing time, and show better performance than the existing alternating minimization algorithm in terms of sum rate. Besides, in terms of convergence rate, simulation results show that MAMA reaches a given sum rate value fastest, while the hybrid algorithm converges fastest in eliminating the interference.

  14. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...

    2017-03-07

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
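
    A bare-bones version of the canonical (CP) alternating-least-squares fit for a dense 3-way array is sketched below; the PES fitting in the paper works from a modest number of sampled energies and adds further machinery, so this shows only the core ALS update, with illustrative names.

    ```python
    import numpy as np

    def khatri_rao(A, B):
        """Column-wise Kronecker product of two factor matrices."""
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def cp_als(T, rank, iters=100, seed=0):
        """Canonical (CP) decomposition of a 3-way array by alternating least
        squares: T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))
        T1 = T.reshape(I, J * K)                      # mode-1 unfolding
        T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # mode-2 unfolding
        T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # mode-3 unfolding
        for _ in range(iters):
            A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
            B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
            C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C
    ```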

  15. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  16. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  17. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in the image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are only limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be

  18. Mode decomposition evolution equations.

    PubMed

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-03-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in the image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are only limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be

  19. 3D building reconstruction from ALS data using unambiguous decomposition into elementary structures

    NASA Astrophysics Data System (ADS)

    Jarząbek-Rychard, M.; Borkowski, A.

    2016-08-01

    The objective of the paper is to develop an automated method that enables the recognition and semantic interpretation of topological building structures. The novelty of the proposed modeling approach is an unambiguous decomposition of complex objects into predefined simple parametric structures, resulting in the reconstruction of one topological unit without independent overlapping elements. The aim of the data processing chain is to generate complete polyhedral models at LOD2 with an explicit topological structure and semantic information. The algorithms are performed on 3D point clouds acquired by airborne laser scanning. The presented methodology combines data-based information reflected in an attributed roof topology graph with common knowledge about buildings stored in a library of elementary structures. In order to achieve an appropriate balance between reconstruction precision and visualization aspects, the implemented library contains a set of structure-dependent soft modeling rules instead of strictly defined geometric primitives. The proposed modeling algorithm starts with roof plane extraction performed by the segmentation of building point clouds, followed by topology identification and recognition of predefined structures. We evaluate the performance of the novel procedure by analyzing the modeling accuracy and the degree of modeling detail. The assessment according to the validation methods standardized by the International Society for Photogrammetry and Remote Sensing shows that the completeness of the algorithm is above 80%, whereas the correctness exceeds 98%.

  20. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV-based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations.
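
    The way a larger expected decline-to-variability ratio shrinks the required cohort can be illustrated with a generic two-arm power calculation; the effect sizes below are made-up placeholders, not values from the ADNI analysis, and statsmodels is assumed to be available.

    ```python
    from statsmodels.stats.power import TTestIndPower

    def per_arm_sample_size(effect_size, alpha=0.05, power=0.8):
        """Per-arm sample size for a two-sample t-test; the required n falls
        roughly with the square of the standardized treatment effect."""
        return TTestIndPower().solve_power(effect_size=effect_size,
                                           alpha=alpha, power=power)

    print(per_arm_sample_size(0.35))   # hypothetical unenriched cohort
    print(per_arm_sample_size(0.50))   # hypothetical HCV-enriched cohort (fewer subjects needed)
    ```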

  1. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H₂, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I₂. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I₂ phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I₂ phase is finally distilled to separate the HI and I₂. The HI is recycled to the reactor; the I₂ is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  2. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction and multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on the GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure a transmission rate that works around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  4. Comparison of Station-Keeping Algorithms for an Interior Libration Point Orbit in the Sun-Earth+Moon Elliptic Restricted Three-Body Problem

    DTIC Science & Technology

    1991-10-10

    ...and then later by Bennett [22]. Both Danby and Bennett have numerically generated graphic depictions of the linear stability region in the p-e plane. Cited reference: J.M.A. Danby, "Stability of the Triangular Points in Elliptic Restricted Problem of Three Bodies," The Astronomical Journal, Volume 69, Number 2, March.

  5. [The algorithm for the determination of the sufficient number of dynamic electroneurostimulation procedures based on the magnitude of individual testing voltage at the reference point].

    PubMed

    Chernysh, I M; Zilov, V G; Vasilenko, A M; Frolkov, V K

    2016-01-01

    This article presents evidence of the advantages of a personalized approach to the treatment of patients presenting with arterial hypertension (AH), lumbar spinal dorsopathy (LSD), chronic obstructive pulmonary disease (COPD), and duodenal ulcer (DU) at the stage of exacerbation, based on measurements of the testing voltage at the reference point (Utest).

  6. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  7. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information which reflects the running status of the equipment. Decomposing the signal and properly extracting the effective information is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems of mode mixing, low decomposition accuracy, etc. To address those problems, the EAED (extreme average envelope decomposition) method is presented based on EMD. The EAED method has three advantages. Firstly, it is built on a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD. Therefore, the average variability of the signal can be described accurately. Secondly, in order to reduce the envelope errors during the signal decomposition, a strategy of replacing the two envelopes with a single envelope is presented. Thirdly, the similar triangle principle is utilized to calculate the times of the extreme average points accurately. Thus, the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate out single-frequency components from a complex signal. EAED not only isolates three kinds of typical bearing fault characteristic frequency components but also requires fewer decomposition layers. EAED replaces the double envelope with a single envelope, ensuring that the fault characteristic frequency can be isolated with fewer decomposition layers. Therefore, the precision of signal decomposition is improved.
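
    The midpoint-envelope idea can be sketched directly: collect the extrema, take the midpoints of adjacent extrema, and interpolate a single "average" envelope through them. The extremum detection and interpolation choices below are illustrative, and the similar-triangle timing refinement described in the abstract is omitted.

    ```python
    import numpy as np
    from scipy.signal import argrelextrema
    from scipy.interpolate import CubicSpline

    def extreme_average_envelope(t, x):
        """Single 'average' envelope through midpoints of adjacent extrema,
        in place of the separate upper/lower envelopes used by EMD."""
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        extrema = np.sort(np.concatenate([maxima, minima]))
        mid_t = 0.5 * (t[extrema[:-1]] + t[extrema[1:]])   # midpoint times
        mid_x = 0.5 * (x[extrema[:-1]] + x[extrema[1:]])   # midpoint amplitudes
        return CubicSpline(mid_t, mid_x)(t)
    ```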

  8. Process characteristics and layout decomposition of self-aligned sextuple patterning

    NASA Astrophysics Data System (ADS)

    Kang, Weiling; Chen, Yijian

    2013-03-01

    Self-aligned sextuple patterning (SASP) is a promising technique to scale the half pitch of IC features down to the sub-10 nm region. In this paper, the process characteristics and decomposition methods of both positive-tone (pSASP) and negative-tone SASP (nSASP) techniques are discussed, and a variety of decomposition rules are studied. By using a node-grouping method, the nSASP layout conflicting graph can be significantly simplified. A graph searching and coloring algorithm is developed for feature/color assignment. We demonstrate that, by generating assisting mandrels, nSASP layout decomposition can be reduced to an nSADP decomposition problem. The proposed decomposition algorithm is successfully verified with several commonly used 2-D layout examples.
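
    The color-assignment step can be pictured with a toy conflict graph and an off-the-shelf greedy coloring; the conflict list below is hypothetical, and the node-grouping simplification and assisting-mandrel generation from the paper are not reproduced.

    ```python
    import networkx as nx

    # Features that are too close to print together share a conflict edge;
    # a coloring maps each feature to one exposure/mandrel group.
    conflicts = [("f1", "f2"), ("f2", "f3"), ("f3", "f4"), ("f4", "f1"), ("f2", "f4")]
    G = nx.Graph(conflicts)
    assignment = nx.coloring.greedy_color(G, strategy="largest_first")
    print(assignment)   # e.g. {'f2': 0, 'f4': 1, 'f1': 2, 'f3': 2}
    ```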

  9. Overlapping Community Detection based on Network Decomposition

    NASA Astrophysics Data System (ADS)

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-04-01

    Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and the more recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to its high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and noise-link elimination helps improve the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities; thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.
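
    A schematic version of the decomposition loop reads as follows: cluster the nodes, record the resulting groups, delete their internal links, and repeat on what remains, so that nodes picked up again in later rounds create overlaps. Greedy modularity clustering is used here purely as a stand-in for NDOCD's link-community step, and min_size is an illustrative parameter.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def iterative_decomposition(G, min_size=3):
        """Collect (possibly overlapping) node groups by repeatedly clustering
        and stripping the internal links of each detected group."""
        H = G.copy()
        covers = []
        while H.number_of_edges() > 0:
            removed = 0
            for com in greedy_modularity_communities(H):
                com = set(com)
                internal = [(u, v) for u, v in H.edges(com) if u in com and v in com]
                if len(com) >= min_size:
                    covers.append(com)
                H.remove_edges_from(internal)
                removed += len(internal)
            if removed == 0:          # safeguard against stalling on degenerate partitions
                break
        return covers
    ```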

  10. Overlapping Community Detection based on Network Decomposition

    PubMed Central

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-01-01

    Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and the more recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to its high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and noise-link elimination helps improve the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities; thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms. PMID:27066904

  11. Protein domain decomposition using a graph-theoretic approach.

    PubMed

    Xu, Y; Xu, D; Gabow, H N; Gabow, H

    2000-12-01

    Automatic decomposition of a multi-domain protein into individual domains represents a highly interesting and unsolved problem. As the number of protein structures in PDB is growing at an exponential rate, there is clearly a need for more reliable and efficient methods for protein domain decomposition simply to keep the domain databases up-to-date. We present a new algorithm for solving the domain decomposition problem, using a graph-theoretic approach. We have formulated the problem as a network flow problem, in which each residue of a protein is represented as a node of the network and each residue–residue contact is represented as an edge with a particular capacity, depending on the type of the contact. A two-domain decomposition problem is solved by finding a bottleneck (or a minimum cut) of the network, which minimizes the total cross-edge capacity, using the classical Ford–Fulkerson algorithm. A multi-domain decomposition problem is solved through repeatedly solving a series of two-domain problems. The algorithm has been implemented as a computer program, called DomainParser. We have tested the program on a commonly used test set consisting of 55 proteins. The decomposition results are 78.2% in agreement with the literature on both the number of decomposed domains and the assignments of residues to each domain, which compares favorably to existing programs. On the subset of two-domain proteins (20 in number), the program assigned 96.7% of the residues correctly when we require that the number of decomposed domains is two.
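
    The flow formulation can be tried out on a toy contact graph with networkx: residues become nodes, contact strengths become edge capacities, and a two-domain split is a minimum s-t cut. The seed choice and the contact list below are hypothetical; DomainParser's actual capacity assignment and multi-domain recursion are not reproduced.

    ```python
    import networkx as nx

    def two_domain_split(contacts):
        """Split a residue contact network into two domains via a minimum cut."""
        G = nx.Graph()
        for i, j, w in contacts:                 # (residue_i, residue_j, capacity)
            G.add_edge(i, j, capacity=w)
        s, t = min(G.nodes), max(G.nodes)        # naive seed residues for the two domains
        cut_value, (domain_a, domain_b) = nx.minimum_cut(G, s, t)
        return cut_value, sorted(domain_a), sorted(domain_b)

    # Hypothetical contact list: two tight clusters bridged by one weak contact
    contacts = [(1, 2, 5), (2, 3, 5), (1, 3, 4), (3, 4, 1), (4, 5, 5), (5, 6, 5), (4, 6, 4)]
    print(two_domain_split(contacts))            # cut of 1 separating {1,2,3} from {4,5,6}
    ```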

  12. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    SciTech Connect

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
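
    The decomposition half of this pipeline can be reproduced with networkx's approximation routines, which is enough to see the quantity that drives the exponential cost; the dynamic programming for maximum weighted independent set itself is not shown, and a reasonably recent networkx with the treewidth heuristics is assumed.

    ```python
    import networkx as nx
    from networkx.algorithms.approximation import treewidth_min_degree

    # Heuristic tree decomposition of a small graph; the width (largest bag
    # size minus one) bounds the exponential factor in bag-wise dynamic programming.
    G = nx.petersen_graph()
    width, decomposition = treewidth_min_degree(G)
    print("heuristic treewidth bound:", width)
    for bag in decomposition.nodes:
        print(sorted(bag))                       # each bag is a frozenset of vertices
    ```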

  13. Arrhythmia ECG Noise Reduction by Ensemble Empirical Mode Decomposition

    PubMed Central

    Chang, Kang-Ming

    2010-01-01

    A novel noise filtering algorithm based on ensemble empirical mode decomposition (EEMD) is proposed to remove artifacts in electrocardiogram (ECG) traces. Three noise patterns with different power (50 Hz, EMG, and baseline wander) were embedded into simulated and real ECG signals. A traditional IIR filter, a Wiener filter, empirical mode decomposition (EMD) and EEMD were used to compare filtering performance. The mean square error between clean and filtered ECGs was used as the filtering performance index. Results showed that high noise reduction is the major advantage of the EEMD based filter, especially on arrhythmia ECGs. PMID:22219702

  14. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In the paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method on the filtration of experimental human EEG signals from movement artifacts and show the high efficiency of the method.
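
    The four listed steps translate almost directly into code with an off-the-shelf EMD implementation; here the PyEMD package (distributed as "EMD-signal") is assumed to be available, and the artifact-bearing modes are passed in by index rather than selected automatically as in the paper.

    ```python
    import numpy as np
    from PyEMD import EMD   # assumed dependency: pip install EMD-signal

    def remove_artifact_modes(signal, artifact_modes):
        """Decompose an EEG trace into IMFs, drop the listed artifact modes,
        and reconstruct the signal from the remaining modes."""
        imfs = EMD().emd(np.asarray(signal, dtype=float))
        keep = [i for i in range(imfs.shape[0]) if i not in set(artifact_modes)]
        return imfs[keep].sum(axis=0)
    ```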

  15. Resolving the sign ambiguity in the singular value decomposition.

    SciTech Connect

    Bro, Rasmus; Acar, Evrim; Kolda, Tamara Gibson

    2007-10-01

    Many modern data analysis methods involve computing a matrix singular value decomposition (SVD) or eigenvalue decomposition (EVD). Principal components analysis is the time-honored example, but more recent applications include latent semantic indexing, hypertext induced topic selection (HITS), clustering, classification, etc. Though the SVD and EVD are well-established and can be computed via state-of-the-art algorithms, it is not commonly mentioned that there is an intrinsic sign indeterminacy that can significantly impact the conclusions and interpretations drawn from their results. Here we provide a solution to the sign ambiguity problem and show how it leads to more sensible solutions.
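
    A simplified version of such a sign convention is sketched below: flip each singular-vector pair so that the left vector agrees in direction with the bulk of the projected data columns. The published method weighs the left and right contributions more carefully, so this is only meant to show that the indeterminacy can be resolved deterministically.

    ```python
    import numpy as np

    def sign_corrected_svd(X):
        """SVD with a data-driven sign convention for each singular-vector pair."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        for k in range(len(s)):
            proj = U[:, k] @ X                       # projections of the data columns on u_k
            vote = np.sum(np.sign(proj) * proj**2)   # signed, magnitude-weighted vote
            if vote < 0:
                U[:, k] *= -1.0                      # flip both sides so U @ diag(s) @ Vt
                Vt[k, :] *= -1.0                     # is unchanged
        return U, s, Vt
    ```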

  16. The Vector Decomposition Problem

    NASA Astrophysics Data System (ADS)

    Yoshida, Maki; Mitsunari, Shigeo; Fujiwara, Toru

    This paper introduces a new computational problem on a two-dimensional vector space, called the vector decomposition problem (VDP), which is mainly defined for designing cryptosystems using pairings on elliptic curves. We first show a relation between the VDP and the computational Diffie-Hellman problem (CDH). Specifically, we present a sufficient condition for the VDP on a two-dimensional vector space to be at least as hard as the CDH on a one-dimensional subspace. We also present a sufficient condition for the VDP with a fixed basis to have a trapdoor. We then give an example of vector spaces which satisfy both sufficient conditions and on which the CDH is assumed to be hard in previous work. In this sense, the intractability of the VDP is a reasonable assumption as that of the CDH.

  17. Erbium hydride decomposition kinetics.

    SciTech Connect

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
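
    Redhead's first-order peak-maximum relation, which underlies the analysis mentioned above, is easy to evaluate; the attempt frequency and the example peak temperature and heating rate below are assumed placeholder values, not numbers from the report.

    ```python
    import numpy as np

    R = 1.987e-3            # gas constant in kcal/(mol*K)

    def redhead_activation_energy(T_p, beta, nu=1.0e13):
        """First-order Redhead estimate:
        E_A ~= R * T_p * (ln(nu * T_p / beta) - 3.64),
        commonly quoted as valid for nu/beta between ~1e8 and ~1e13 K^-1."""
        return R * T_p * (np.log(nu * T_p / beta) - 3.64)

    # Hypothetical desorption peak at 900 K with a 1 K/s heating ramp
    print(redhead_activation_energy(T_p=900.0, beta=1.0))   # ~59 kcal/mol for these assumed inputs
    ```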

  18. Direct Sum Decomposition of Groups

    ERIC Educational Resources Information Center

    Thaheem, A. B.

    2005-01-01

    Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…

  20. Adaptive wavelet transform algorithm for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo

    2003-11-01

    A new algorithm for a locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme. It performs an adaptation of the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized framework for the lifting scheme, which permits different wavelet coefficients to be obtained easily in the case of the (Ñ, N) lifting. It is proposed to perform hard switching between the (2, 4) and (4, 4) lifting filter outputs according to an estimate of the local data activity. When the data activity is high, i.e., in the vicinity of edges, the (4, 4) lifting is performed. Otherwise, in the plain areas, the (2, 4) decomposition coefficients are calculated. The calculations are rather simple, which permits the implementation of the designed algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect restoration of the processed data and good energy compaction. The designed algorithm was tested on different images. The proposed adaptive transform algorithm can be used for lossless image/signal compression.
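
    The hard-switching idea can be sketched for one 1-D decomposition level as below; the activity measure, threshold, filter taps, and the simplified update step are illustrative stand-ins for the generalized-lifting formulation in the paper.

    ```python
    import numpy as np

    def adaptive_lifting_level(x, activity_threshold=10.0):
        """One decomposition level with hard switching between a short and a
        long predictor, chosen from a crude local activity estimate."""
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2], x[1::2]
        n = len(odd)
        e = lambda i: even[min(max(i, 0), len(even) - 1)]   # clamped (border-safe) access
        detail = np.empty(n)
        for k in range(n):
            activity = abs(e(k + 1) - e(k))                 # crude edge detector
            if activity > activity_threshold:               # edge region: longer 4-tap predictor
                pred = (-e(k - 1) + 9 * e(k) + 9 * e(k + 1) - e(k + 2)) / 16.0
            else:                                           # smooth region: short 2-tap predictor
                pred = (e(k) + e(k + 1)) / 2.0
            detail[k] = odd[k] - pred
        d_prev = np.concatenate(([detail[0]], detail[:-1]))
        approx = even.astype(float).copy()
        approx[:n] += (d_prev + detail) / 4.0               # simplified update step
        return approx, detail
    ```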