Science.gov

Sample records for point decomposition algorithm

  1. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
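
    The reduction to additive and multiplicative Schwarz for linear problems can be made concrete with a small sketch. The following is a minimal illustration for a linear system split into two overlapping index blocks; the matrix, subdomain index sets, damping parameter, and iteration counts are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

def schwarz_solve(A, b, subdomains, n_iter=50, omega=0.5, multiplicative=False):
    """Additive or multiplicative Schwarz iteration for A x = b.

    `subdomains` is a list of index arrays defining (possibly overlapping)
    subspaces; each local problem is solved exactly with a dense factorization.
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        if multiplicative:
            # Multiplicative Schwarz: corrections are applied one after another.
            for idx in subdomains:
                r = b - A @ x
                x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        else:
            # Additive Schwarz: corrections computed from the same residual
            # (hence parallelizable), then combined with damping.
            r = b - A @ x
            dx = np.zeros_like(x)
            for idx in subdomains:
                dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            x += omega * dx
    return x

# Toy 1D Laplacian split into two overlapping subdomains.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
left, right = np.arange(0, 12), np.arange(8, 20)
print(np.linalg.norm(A @ schwarz_solve(A, b, [left, right]) - b))
print(np.linalg.norm(A @ schwarz_solve(A, b, [left, right], multiplicative=True) - b))
```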

  2. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequentially coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.

  3. Algorithms for the Markov entropy decomposition

    NASA Astrophysics Data System (ADS)

    Ferris, Andrew J.; Poulin, David

    2013-05-01

    The Markov entropy decomposition (MED) is a recently proposed, cluster-based simulation method for finite temperature quantum systems with arbitrary geometry. In this paper, we detail numerical algorithms for performing the required steps of the MED, principally solving a minimization problem with a preconditioned Newton's algorithm, as well as how to extract global susceptibilities and thermal responses. We demonstrate the power of the method with the spin-1/2 XXZ model on the 2D square lattice, including the extraction of critical points and details of each phase. Although the method shares some qualitative similarities with exact diagonalization, we show that the MED is both more accurate and significantly more flexible.

  4. Finding corner point correspondence from wavelet decomposition of image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; LeMoigne, Jacqueline

    1997-01-01

    A time-efficient algorithm for registration between two images that differ by a translation is discussed. The algorithm is based on a coarse-to-fine strategy using wavelet decompositions of both images. The wavelet decomposition serves two different purposes: (1) its high-frequency components are used to detect feature points (corner points here), and (2) it provides a coarse-to-fine structure that makes the algorithm time efficient. The algorithm detects corner points in one image, called the reference image, and computes the corresponding points in the other image, called the test image, using local correlations over 7x7 windows centered on the corner points. The corresponding points are detected at the lowest decomposition level in a search area of about 11x11 (depending on the translation), and potential points of correspondence are projected onto higher levels. In the subsequent levels the local correlations are computed in a search area of no more than 3x3 to refine the correspondence.
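
    The coarse-to-fine correlation matching can be sketched compactly. The fragment below, a rough sketch only, matches a single corner point across a pyramid built by simple 2x2 block averaging (standing in for the wavelet approximation band); the 7x7 windows, the roughly 11x11 coarse search, and the 3x3 refinement follow the description above, while the pyramid depth and the synthetic test images are assumptions.

```python
import numpy as np

def downsample(img):
    """Crude 2x2 block average standing in for the wavelet approximation band."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def local_match(ref, test, ref_pt, test_center, search, win=3):
    """Offset (dy, dx) around `test_center` whose 7x7 patch best correlates
    with the 7x7 patch around `ref_pt` in the reference image."""
    y, x = ref_pt
    cy, cx = test_center
    patch = ref[y - win:y + win + 1, x - win:x + win + 1].ravel()
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = test[cy + dy - win:cy + dy + win + 1, cx + dx - win:cx + dx + win + 1]
            if cand.shape != (2 * win + 1, 2 * win + 1):
                continue  # window fell outside the image
            c = cand.ravel()
            c = (c - c.mean()) / (c.std() + 1e-9)
            score = float(patch @ c)
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off

def coarse_to_fine(ref, test, corner, levels=3):
    """Match one corner point: wide search at the coarsest level, then a
    3x3 (+/- 1 pixel) refinement at every finer level."""
    pyr_ref, pyr_test = [ref], [test]
    for _ in range(levels - 1):
        pyr_ref.append(downsample(pyr_ref[-1]))
        pyr_test.append(downsample(pyr_test[-1]))
    pt = tuple(c // 2 ** (levels - 1) for c in corner)
    dy, dx = local_match(pyr_ref[-1], pyr_test[-1], pt, pt, search=5)  # ~11x11 search
    for lvl in range(levels - 2, -1, -1):
        pt = tuple(c // 2 ** lvl for c in corner)
        dy, dx = 2 * dy, 2 * dx
        ry, rx = local_match(pyr_ref[lvl], pyr_test[lvl], pt, (pt[0] + dy, pt[1] + dx), search=1)
        dy, dx = dy + ry, dx + rx
    return dy, dx

# Tiny synthetic check: shift an image by (3, 5) pixels and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
test = np.roll(ref, shift=(3, 5), axis=(0, 1))
print(coarse_to_fine(ref, test, corner=(64, 64)))  # expected to be close to (3, 5)
```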

  5. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.

  6. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article the author presents an idea for a decomposition algorithm for discrete systems described by Petri nets using p-invariants. The decomposition process is significant from the point of view of discrete system design, because it allows separation of smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's selection algorithm. The developed method is a good complement to classical decomposition algorithms using graphs and hypergraphs.

  7. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  8. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may place a particular emphasis on accuracy or on computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
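
    For readers unfamiliar with the baseline, the following is a minimal sketch of classical matching pursuit with the correlation-threshold pruning idea mentioned above; the Coarse-Fine Grids and Multiple Atom Extraction refinements of MPD++ are not reproduced, and the random dictionary and parameter values are purely illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10, corr_threshold=0.0):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with the
    largest correlation to the residual and subtract its contribution.

    `dictionary` has unit-norm atoms as columns; atoms whose correlation falls
    below `corr_threshold` terminate the loop, mimicking the pruning idea above.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        if np.abs(correlations[k]) < corr_threshold:
            break  # nothing significant left in the residual
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    reconstruction = dictionary @ coeffs
    return coeffs, residual, reconstruction

# Toy example: a signal built from two atoms of a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((128, 64))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 40]
coef, res, rec = matching_pursuit(x, D, n_atoms=5)
print(np.linalg.norm(res))  # residual norm shrinks as atoms are extracted
```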

  9. Domain decomposition algorithms and computation fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, partly motivated by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, they are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated. The examples are two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface system, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.

  10. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  11. Avoiding spurious submovement decompositions : a globally optimal algorithm.

    SciTech Connect

    Rohrer, Brandon Robinson; Hogan, Neville

    2003-07-01

    Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.

  12. A Decomposition Framework for Image Denoising Algorithms.

    PubMed

    Ghimpeteanu, Gabriela; Batard, Thomas; Bertalmio, Marcelo; Levine, Stacey

    2016-01-01

    In this paper, we consider an image decomposition model that provides a novel framework for image denoising. The model computes the components of the image to be processed in a moving frame that encodes its local geometry (directions of gradients and level lines). The strategy we develop is then to denoise the components of the image in the moving frame in order to preserve its local geometry, which would be more affected if the image were processed directly. Experiments on a whole image database tested with several denoising methods show that this framework can provide better results than denoising the image directly, in terms of both the peak signal-to-noise ratio and the structural similarity index metrics.

  13. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; ...

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.

  14. Incremental k-core decomposition: Algorithms and evaluation

    SciTech Connect

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; Wu, Kun-Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.
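
    As a point of reference, the static computation that such incremental algorithms avoid re-running is the classical peeling procedure for core numbers, sketched below; the adjacency-dictionary representation and the toy graph are assumptions, and none of the incremental bookkeeping described above is reproduced.

```python
from collections import defaultdict

def core_numbers(adj):
    """Compute the core number of every vertex by repeated 'peeling':
    remove vertices of minimum remaining degree and record the running k.

    `adj` maps each vertex to a set of neighbours (undirected, no self-loops).
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    buckets = defaultdict(set)          # vertices grouped by current degree
    for v, d in degree.items():
        buckets[d].add(v)
    core, k = {}, 0
    remaining = set(adj)
    while remaining:
        d = min(b for b in buckets if buckets[b])   # smallest non-empty bucket
        k = max(k, d)
        v = buckets[d].pop()
        remaining.discard(v)
        core[v] = k
        for u in adj[v]:
            if u in remaining:          # peel v: lower its neighbours' degrees
                buckets[degree[u]].discard(u)
                degree[u] -= 1
                buckets[degree[u]].add(u)
    return core

# Example: a triangle attached to a pendant vertex.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(adj))  # triangle vertices 0, 1, 2 get core 2; vertex 3 gets core 1
```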

  15. An Algorithm for image removals and decompositions without inverse matrices

    NASA Astrophysics Data System (ADS)

    Yi, Dokkyun

    2009-03-01

    Partial Differential Equation (PDE) based methods in image processing have been actively studied in the past few years. One of the effective methods is the method based on total variation introduced by Rudin, Osher and Fatemi (ROF) [L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992) 259-268]. This method is a well-known edge-preserving model and a useful tool for image removals and decompositions. Unfortunately, this method has a nonlinear term in the equation which may yield an inaccurate numerical solution. To overcome the nonlinearity, a fixed point iteration method has been widely used. The nonlinear system based on the total variation is induced from the ROF model, and the fixed point iteration method to solve the ROF model was introduced by Dobson and Vogel [D.C. Dobson, C.R. Vogel, Convergence of an iterative method for total variation denoising, SIAM J. Numer. Anal. 34 (5) (1997) 1779-1791]. However, some methods had to compute inverse matrices, which led to roundoff error. To address this problem, we developed an efficient method for solving the ROF model. We construct a sequence, in the manner of Richardson's method, by using a fixed point iteration to avoid the nonlinear equation. This approach does not require the computation of inverse matrices. The main idea is to construct a direction vector for reducing the error at each iteration step. In other words, the next iterate is formed from the computed error and the direction vector so as to reduce the error. We show that our method works well in theory. In numerical experiments, we show the results of the proposed method, compare them with the results of Dobson and Vogel, and confirm the superiority of our method.
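
    The inverse-free idea can be illustrated on its linear analogue: a Richardson iteration that moves along the current residual (the "direction vector") instead of forming any inverse. This is a minimal sketch under the assumption of a small symmetric positive definite system; it is not the authors' ROF scheme.

```python
import numpy as np

def richardson(A, b, tau=None, n_iter=200):
    """Richardson iteration x_{k+1} = x_k + tau * (b - A x_k).

    Each step moves along the current residual, so no matrix inverse is ever
    formed.  For symmetric positive definite A the iteration converges when
    0 < tau < 2 / lambda_max(A).
    """
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2)  # safe step for SPD matrices
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        r = b - A @ x          # residual = descent direction for SPD A
        x = x + tau * r
    return x

# Toy test on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(richardson(A, b), np.linalg.solve(A, b))
```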

  16. Efficient variants of the vertex space domain decomposition algorithm

    SciTech Connect

    Chan, T.F.; Shao, J.P.; Mathew, T.P.

    1994-11-01

    Several variants of the vertex space algorithm of Smith for two-dimensional elliptic problems are described. The vertex space algorithm is a domain decomposition method based on nonoverlapping subregions, in which the reduced Schur complement system on the interface is solved using a generalized block Jacobi-type preconditioner, with the blocks corresponding to the vertex space, edges, and a coarse grid. Two kinds of approximations are considered for the edge and vertex space subblocks, one based on Fourier approximation, and another based on an algebraic probing technique in which sparse approximations to these subblocks are computed. The motivation is to improve the efficiency of the algorithm without sacrificing the optimal convergence rate. Numerical and theoretical results on the performance of these algorithms, including variants of an algorithm of Bramble, Pasciak, and Schatz are presented.

  17. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    NASA Astrophysics Data System (ADS)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  18. Genetic Algorithms, Floating Point Numbers and Applications

    NASA Astrophysics Data System (ADS)

    Hardy, Yorick; Steeb, Willi-Hans; Stoop, Ruedi

    The core in most genetic algorithms is the bitwise manipulations of bit strings. We show that one can directly manipulate the bits in floating point numbers. This means the main bitwise operations in genetic algorithm mutations and crossings are directly done inside the floating point number. Thus the interval under consideration does not need to be known in advance. For applications, we consider the roots of polynomials and finding solutions of linear equations.
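
    A minimal sketch of the idea, assuming IEEE-754 doubles and Python's struct module for the bit reinterpretation: mutation flips bits of the 64-bit pattern directly, and a one-point crossover splices two patterns. The restriction of flips to mantissa bits and the crossover point are illustrative choices, not taken from the paper.

```python
import random
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a double as its 64-bit IEEE-754 pattern."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def bits_to_float(bits: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def mutate(x: float, n_flips: int = 1) -> float:
    """Flip random bits of the 64-bit representation: a GA mutation that works
    directly on the floating-point number, with no interval needed."""
    bits = float_to_bits(x)
    for _ in range(n_flips):
        bits ^= 1 << random.randrange(52)  # restrict flips to mantissa bits here
    return bits_to_float(bits)

def crossover(a: float, b: float, point: int = 32) -> float:
    """One-point crossover of the two 64-bit patterns."""
    mask = (1 << point) - 1
    child = (float_to_bits(a) & ~mask) | (float_to_bits(b) & mask)
    return bits_to_float(child)

random.seed(0)
print(mutate(1.5), crossover(1.5, 2.75))
```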

  19. Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multisource data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of Girvan and Newman, for example, makes it impractical for graphs with more than approximately 10^4 nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10^9 nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in Girvan and Newman

  20. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
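
    The simplest instance of such a dynamic program is maximum weighted independent set on a tree itself (treewidth 1), where each vertex's table holds only two entries. The sketch below shows that base case under an assumed rooted-tree representation; it is not the parallel branch/tree-decomposition tables of the paper.

```python
def max_weight_independent_set_tree(children, weight, root=0):
    """DP for maximum weighted independent set on a rooted tree.

    For each vertex v the table holds two entries:
      take[v] = best weight of v's subtree if v is in the set
      skip[v] = best weight of v's subtree if v is not in the set
    """
    take, skip = {}, {}
    order, stack = [], [root]          # DFS preorder, so parents precede children
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    for v in reversed(order):          # process children before their parent
        take[v] = weight[v] + sum(skip[c] for c in children.get(v, []))
        skip[v] = sum(max(take[c], skip[c]) for c in children.get(v, []))
    return max(take[root], skip[root])

# Small rooted tree: 0 -> {1, 2}, 1 -> {3, 4}
children = {0: [1, 2], 1: [3, 4]}
weight = {0: 3, 1: 5, 2: 2, 3: 4, 4: 4}
print(max_weight_independent_set_tree(children, weight))  # 11 (vertices 0, 3, 4)
```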

  1. Fixed Point Implementations of Fast Kalman Algorithms.

    DTIC Science & Technology

    1983-11-01

    In this paper we study scaling rules and round-off ... realized in a fast form that uses the so-called fast Kalman gain algorithm. The algorithm for the gain is fixed point. Scaling rules and expressions for

  2. On the equivalence of a class of inverse decomposition algorithms for solving systems of linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A class of direct inverse decomposition algorithms for solving systems of linear equations is presented. Their behavior in the presence of round-off errors is analyzed. It is shown that under some mild restrictions on their implementation, the class of direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.

  3. Parallel and serial variational inequality decomposition algorithms for multicommodity market equilibrium problems

    SciTech Connect

    Nagurney, A.; Kim, D.S.

    1989-01-01

    The authors have applied parallel and serial variational inequality (VI) diagonal decomposition algorithms to large-scale multicommodity market equilibrium problems. These decomposition algorithms resolve the VI problems into single commodity problems, which are then solved as quadratic programming problems. The algorithms are implemented on an IBM 3090-600E, and randomly generated linear and nonlinear problems with as many as 100 markets and 12 commodities are solved. The computational results demonstrate that the parallel diagonal decomposition scheme is amenable to parallelization. This is the first time that multicommodity equilibrium problems of this scale and level of generality have been solved. Furthermore, this is the first study to compare the efficiencies of parallel and serial VI decomposition algorithms. Although the authors have selected as a prototype an equilibrium problem in economics, virtually any equilibrium problem can be formulated and studied as a variational inequality problem. Hence, their results are not limited to applications in economics and operations research.

  4. Chaotic Visual Cryptosystem Using Empirical Mode Decomposition Algorithm for Clinical EEG Signals.

    PubMed

    Lin, Chin-Feng

    2016-03-01

    This paper proposes a chaotic visual cryptosystem using an empirical mode decomposition (EMD) algorithm for clinical electroencephalography (EEG) signals. The basic design concept is to integrate two-dimensional (2D) chaos-based encryption scramblers, the EMD algorithm, and a 2D block interleaver method to achieve a robust and unpredictable visual encryption mechanism. Energy-intrinsic mode function (IMF) distribution features of the clinical EEG signal are used to derive the chaotic encryption parameters. The maximum and second maximum energy ratios of the IMFs of a clinical EEG signal to its total energy are used as the starting points of the logistic-map type encrypted chaotic signals in the x and y vectors, respectively. The minimum and second minimum energy ratios of the IMFs of a clinical EEG signal to its total energy are used as the security level parameters of the logistic-map type encrypted chaotic signals in the x and y vectors, respectively. Three EEG databases and seventeen clinical EEG signals were tested, and the average r and mse values are 0.0201 and 4.2626 × 10^(-29), respectively, for the original and the chaotically encrypted (through EMD) clinical EEG signals. The chaotically encrypted signal cannot be recovered if there is an error in the input parameters, for example, an initial point error of 0.000001%. The encryption effects of the proposed chaotic EMD visual encryption mechanism are excellent.
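
    A minimal sketch of the chaotic-scrambling ingredient, assuming a logistic map whose seed would be derived from the IMF energy ratios described above; the mapping from energy ratios to the key, the EMD step, and the 2D block interleaver are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate the logistic map x <- r * x * (1 - x) and return n values."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def chaotic_scramble(signal, x0, r=3.99):
    """Permute samples by the rank order of a chaotic sequence;
    the same (x0, r) key reproduces the permutation for decryption."""
    keystream = logistic_sequence(x0, r, len(signal))
    perm = np.argsort(keystream)
    return signal[perm], perm

def chaotic_unscramble(scrambled, perm):
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out

# Sensitivity check: changing x0 by 1e-8 yields a completely different permutation.
sig = np.sin(np.linspace(0, 10, 256))
enc, perm = chaotic_scramble(sig, x0=0.3731)
enc2, perm2 = chaotic_scramble(sig, x0=0.3731 + 1e-8)
print(np.array_equal(perm, perm2), np.allclose(chaotic_unscramble(enc, perm), sig))
```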

  5. A study of the Invariant Subspace Decomposition Algorithm for banded symmetric matrices

    SciTech Connect

    Bischof, C.; Sun, X.; Tsao, A.; Turnbull, T.

    1994-06-01

    In this paper, we give an overview of the Invariant Subspace Decomposition Algorithm for banded symmetric matrices and describe a sequential implementation of this algorithm. Our implementation uses a specialized routine for performing banded matrix multiplication together with successive band reduction, yielding a sequential algorithm that is competitive for large problems with the LAPACK QR code in computing all of the eigenvalues and eigenvectors of a dense symmetric matrix. Performance results are given on a variety of machines.

  6. Determination of the Thermal Decomposition Products of Terephthalic Acid by Using Curie-Point Pyrolyzer

    NASA Astrophysics Data System (ADS)

    Begüm Elmas Kimyonok, A.; Ulutürk, Mehmet

    2016-04-01

    The thermal decomposition behavior of terephthalic acid (TA) was investigated by thermogravimetry/differential thermal analysis (TG/DTA) and Curie-point pyrolysis. TG/DTA analysis showed that TA is sublimed at 276°C prior to decomposition. Pyrolysis studies were carried out at various temperatures ranging from 160 to 764°C. Decomposition products were analyzed and their structures were determined by gas chromatography-mass spectrometry (GC-MS). A total of 11 degradation products were identified at 764°C, whereas no peak was observed below 445°C. Benzene, benzoic acid, and 1,1′-biphenyl were identified as the major decomposition products, and other degradation products such as toluene, benzophenone, diphenylmethane, styrene, benzaldehyde, phenol, 9H-fluorene, and 9-phenyl-9H-fluorene were also detected. A pyrolysis mechanism was proposed based on the findings.

  7. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
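
    The intensity-weighted centroid step is straightforward to sketch; the fragment below assumes a boolean mask already produced by the perimeter search, and omits the eight-direction trace and the neural-network overlap decomposition.

```python
import numpy as np

def intensity_weighted_centroid(image, mask):
    """Centroid of one object: centre of mass weighted by pixel intensity.

    `mask` is a boolean array selecting the object's pixels in `image`.
    """
    ys, xs = np.nonzero(mask)
    weights = image[ys, xs].astype(float)
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total

# Toy object: a 3x3 blob with a brighter centre pixel.
img = np.zeros((10, 10))
img[4:7, 4:7] = 1.0
img[5, 5] = 5.0
print(intensity_weighted_centroid(img, img > 0))  # (5.0, 5.0) by symmetry
```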

  8. Decomposition algorithms for stochastic programming on a computational grid.

    SciTech Connect

    Linderoth, J.; Wright, S.; Mathematics and Computer Science; Axioma Inc.

    2003-01-01

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.

  9. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
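
    For comparison, here is a minimal sketch of the classical real Givens-rotation QR factorization that the heap-transform method is set against; the matrix values are illustrative, and the complex heap-transform variant of the paper is not reproduced.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real matrix by zeroing sub-diagonal entries
    with 2x2 Givens rotations (the classical approach)."""
    A = A.astype(float)
    m, n = A.shape
    Q, R = np.eye(m), A.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]   # zeroes out R[i, j]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T  # accumulate Q = Q G^T
    return Q, R

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))
```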

  10. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data

    NASA Astrophysics Data System (ADS)

    Clark, Darin P.; Badea, Cristian T.

    2014-10-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL^-1), gold (0.9 mg mL^-1), and gadolinium (2.9 mg mL^-1) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  11. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and environment in the breakdown and release of carbon into the atmosphere, as well as how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant to both the study of ecosystem ecology and projections of future conditions for human societies.

  12. A domain decomposition algorithm for solving large elliptic problems

    SciTech Connect

    Nolan, M.P.

    1991-01-01

    An algorithm which efficiently solves large systems of equations arising from the discretization of a single second-order elliptic partial differential equation is discussed. The global domain is partitioned into not necessarily disjoint subdomains which are traversed using the Schwarz Alternating Procedure. On each subdomain the multigrid method is used to advance the solution. The algorithm has the potential to decrease solution time when data is stored across multiple levels of a memory hierarchy. Results are presented for a virtual memory, vector multiprocessor architecture. A study of choice of inner iteration procedure and subdomain overlap is presented for a model problem, solved with two and four subdomains, sequentially and in parallel. Microtasking multiprocessing results are reported for multigrid on the Alliant FX-8 vector-multiprocessor. A convergence proof for a class of matrix splittings for the two-dimensional Helmholtz equation is given. 70 refs., 3 figs., 20 tabs.

  13. Trident: An FPGA Compiler Framework for Floating-Point Algorithms.

    SciTech Connect

    Tripp, J. L.; Peterson, K. D.; Poznanovic, J. D.; Ahrens, C. M.; Gokhale, M.

    2005-01-01

    Trident is a compiler for floating point algorithms written in C, producing circuits in reconfigurable logic that exploit the parallelism available in the input description. Trident automatically extracts parallelism and pipelines loop bodies using conventional compiler optimizations and scheduling techniques. Trident also provides an open framework for experimentation, analysis, and optimization of floating point algorithms on FPGAs and the flexibility to easily integrate custom floating point libraries.

  14. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of previous algorithms are not high enough. In this work, we propose a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the solution space of the algorithm, and obtain all candidate solutions of GRNs. In this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a network database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were compared with those of the proposed algorithm. Genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565

  15. Inferring gene regulatory networks by singular value decomposition and gravitation field algorithm.

    PubMed

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of previous algorithms are not high enough. In this work, we propose a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the solution space of the algorithm, and obtain all candidate solutions of GRNs. In this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a network database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were compared with those of the proposed algorithm. Genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms.

  16. Cross-comparison of three electromyogram decomposition algorithms assessed with experimental and simulated data.

    PubMed

    Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A

    2015-01-01

    The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single-channel data only), Fuzzy Expert (Erim and Lim, 2008), and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼97% and ∼92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.

  17. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) is based on an accurate estimation of the eigenvalues and eigenvectors of covariance matrix. QR decomposition is implemented with the Coordinate Rotation DIgital Computer (CORDIC) algorithm. QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (Eigen Value Decomposition) processor based on TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.

  18. Algorithms to Reveal Properties of Floating-Point Arithmetic

    DTIC Science & Technology

    Two algorithms are presented in the form of Fortran subroutines. Each subroutine computes the radix and number of digits of the floating-point numbers ... and whether rounding or chopping is done by the machine on which it is run. The methods are shown to work on any 'reasonable' floating-point computer.
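
    A Python rendering of the same kind of probe (in the spirit of Malcolm's classic algorithm) is sketched below; it illustrates the technique rather than translating the Fortran subroutines, and the rounding test shown is a simplified two-step check.

```python
def machine_radix_and_digits():
    """Probe the host floating-point arithmetic for its radix, the number of
    mantissa digits, and whether it rounds or chops."""
    # Grow a power of two until adding 1.0 no longer changes the result.
    a = 1.0
    while (a + 1.0) - a == 1.0:
        a *= 2.0
    # Increase b until it becomes visible next to a; the first visible jump is the radix.
    b = 1.0
    while (a + b) - a == 0.0:
        b += 1.0
    radix = int((a + b) - a)
    # Count how many radix digits the mantissa holds.
    digits, x = 0, 1.0
    while (x + 1.0) - x == 1.0:
        digits += 1
        x *= radix
    # Rounding vs. chopping: under rounding at least one half-ulp addition below
    # moves the result; under chopping neither does.
    half = radix / 2.0
    rounds = ((a + half) - a != 0.0) or ((a + radix + half) - (a + radix) != 0.0)
    return radix, digits, rounds

print(machine_radix_and_digits())  # typically (2, 53, True) for IEEE-754 doubles
```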

  19. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    SciTech Connect

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  20. Numerical study of variational data assimilation algorithms based on decomposition methods in atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Antokhin, Pavel

    2016-11-01

    The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically in the case where the emission inventories are missing while there are additional in situ indirect concentration measurements. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design allows avoiding iterative processes and working in real-time. In numerical experiments we study the sensitivity of data assimilation to measurement data quantity and quality.

  1. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  2. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806

  3. A modified iterative closest point algorithm for shape registration

    NASA Astrophysics Data System (ADS)

    Tihonkih, Dmitrii; Makovetskii, Artyom; Kuznetsov, Vladislav

    2016-09-01

    The iterative closest point (ICP) algorithm is one of the most popular approaches to shape registration. The algorithm starts with two point clouds and an initial guess for a relative rigid-body transformation between them. It then iteratively refines the transformation by generating pairs of corresponding points in the clouds and by minimizing a chosen error metric. In this work, we focus on the accuracy of the ICP algorithm. An important stage of the ICP algorithm is the search for nearest neighbors. We propose to utilize geometrically similar groups of points for this purpose. Groups of points in the first cloud that have no similar groups in the second cloud are not considered in further error minimization. To minimize errors, the class of affine transformations is used. The transformations are not rigid, in contrast to the classical approach. This approach allows us to obtain a precise solution for transformations such as rotation, translation and scaling. With the help of computer simulation, the proposed method is compared with common nearest neighbor search algorithms for shape registration.
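
    For context, here is a minimal sketch of the classical point-to-point ICP baseline with brute-force nearest neighbours and a closed-form rigid (Kabsch/SVD) update; the group-based correspondence search and affine generalization proposed above are not reproduced, and the synthetic test is an assumption.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form (Kabsch/SVD) rigid transform R, t minimizing ||R p + t - q||."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, n_iter=30):
    """Basic point-to-point ICP: pair each source point with its nearest
    target point, refit a rigid transform, apply it, and repeat."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        # Brute-force nearest neighbours (adequate for small clouds).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matches = target[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: perturb a cloud by a small rotation and translation, then realign.
rng = np.random.default_rng(1)
target = rng.standard_normal((100, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle), np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.03, 0.02])
R_est, t_est = icp(source, target)
aligned = source @ R_est.T + t_est
print(np.abs(aligned - target).max())  # residual should be small for this mild perturbation
```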

  4. Improved optimization algorithm for proximal point-based dictionary updating methods

    NASA Astrophysics Data System (ADS)

    Zhao, Changchen; Hwang, Wen-Liang; Lin, Chun-Liang; Chen, Weihai

    2016-09-01

    Proximal K-singular value decomposition (PK-SVD) is a dictionary updating algorithm that incorporates the proximal point method into K-SVD. The attempt to combine the proximal method and K-SVD has achieved promising results in areas such as sparse approximation, image denoising, and image compression. However, the optimization procedure of PK-SVD is complicated and therefore limits the algorithm in both theoretical analysis and practical use. This article proposes a simple but effective optimization approach to the formulation of PK-SVD. We cast this formulation as a fitting problem and relax the constraint on the direction of the k'th row in the sparse coefficient matrix. This relaxation strengthens the regularization effect of the proximal point. The proposed algorithm needs fewer steps to implement and further boosts the performance of PK-SVD while maintaining the same computational complexity. Experimental results demonstrate that the proposed algorithm outperforms conventional algorithms in reconstruction error, recovery rate, and convergence speed for sparse approximation and achieves better results in image denoising.

  5. a Review of Point Clouds Segmentation and Classification Algorithms

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Menna, F.; Remondino, F.

    2017-02-01

    Today, 3D models and point clouds are very popular, being used in several fields, shared through the internet, and even accessed on mobile phones. Despite their broad availability, there is still a relevant need for methods, preferably automatic, to provide 3D data with meaningful attributes that characterize and give significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms for segmenting and classifying 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software are listed and briefly explained. For some algorithms, the results of the segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics are discussed.

  6. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms are not particularly effective on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design the weights that optimize the TF-IDF algorithm output values, and the terms with higher scores are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
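
    The TF-IDF scoring step can be sketched in a few lines; the fragment below assumes a toy English corpus and whitespace tokenization, omitting the Chinese word segmentation, POS tagging, document classification, and VSM similarity weighting described above.

```python
import math
from collections import Counter

def tf_idf_keypoints(docs, top_k=3):
    """Score terms by TF-IDF within each document and return the top-k terms
    per document as candidate 'knowledge points'."""
    n_docs = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    doc_freq = Counter()
    for tokens in tokenized:
        doc_freq.update(set(tokens))        # count documents containing each term
    results = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores = {
            term: (count / len(tokens)) * math.log(n_docs / doc_freq[term])
            for term, count in tf.items()
        }
        results.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return results

docs = [
    "pointers store the address of a variable in c",
    "arrays in c are contiguous blocks of memory",
    "a for loop repeats a block of statements",
]
print(tf_idf_keypoints(docs))
```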

  7. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal, and regularization of the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of the two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in the scene set. By eliminating the transformation variables, we show that the original matching problem reduces to a concave quadratic assignment problem whose objective function has a low-rank Hessian matrix. This facilitates the use of large-scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where, in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.

  8. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
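
    As an illustration of the photon-counting special case mentioned above, here is a minimal 1-D Richardson-Lucy sketch whose multiplicative update keeps the estimate non-negative and approximately flux-preserving; the PSF and test signal are assumptions, and the scale-invariant-divergence algorithm of Lanteri et al. (2015) is not reproduced.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """1-D Richardson-Lucy deconvolution.  The multiplicative update keeps the
    estimate non-negative and (for a normalized PSF) roughly preserves flux."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two point sources blurred by a Gaussian PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
kernel = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
observed = np.convolve(x, kernel / kernel.sum(), mode="same")
restored = richardson_lucy(observed, kernel)
print(observed.sum(), restored.sum())  # flux approximately conserved
```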

  9. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space
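
    A toy version of the low-rank-plus-sparse idea (randomized low-rank approximation followed by entry-wise thresholding) is sketched below; the random data, the rank, and the threshold are assumptions for illustration, and this is not the LLSG implementation itself.

      # Generic low-rank + sparse split sketch (not the LLSG code itself).
      import numpy as np
      from sklearn.utils.extmath import randomized_svd

      rng = np.random.default_rng(1)
      cube = rng.normal(size=(40, 400))        # stand-in for an ADI patch: frames x pixels

      # Low-rank term from a randomized SVD (captures starlight/speckle structure).
      U, s, Vt = randomized_svd(cube, n_components=5, random_state=0)
      low_rank = U @ np.diag(s) @ Vt

      # Entry-wise soft threshold of the residual gives the sparse term;
      # what remains is treated as the noise term.
      residual = cube - low_rank
      tau = 2.0 * residual.std()               # assumed threshold level
      sparse = np.sign(residual) * np.maximum(np.abs(residual) - tau, 0.0)
      noise = residual - sparse
      print(low_rank.shape, np.count_nonzero(sparse), noise.std())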

  10. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging due to the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques such as moving averages, splines, and Loess smoothing exist. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal trend-decomposition procedure based on Loess (STL) decomposes VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces similar results as previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise" including diurnal variations while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of straight line relative to STL trend line; (ii) range of STL remainder; and (iii) variance of STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
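
    The three-term objective listed above can be written down directly; the sketch below evaluates it with statsmodels' STL on a synthetic diurnal series, with a simple random search standing in for the genetic algorithm and assumed unit weights.

      # STL decomposition with a weighted 3-term objective; random search stands in for the GA.
      import numpy as np
      from statsmodels.tsa.seasonal import STL

      rng = np.random.default_rng(2)
      t = np.arange(10 * 24)                                   # synthetic hourly VWC-like series, 10 days
      series = 0.25 + 0.001 * t + 0.02 * np.sin(2 * np.pi * t / 24) + 0.005 * rng.normal(size=t.size)

      def objective(seasonal, trend, w=(1.0, 1.0, 1.0)):       # assumed unit weights
          res = STL(series, period=24, seasonal=seasonal, trend=trend, robust=True).fit()
          line = np.polyval(np.polyfit(t, res.trend, 1), t)    # straight-line fit to the STL trend
          rmse = np.sqrt(np.mean((res.trend - line) ** 2))
          return w[0] * rmse + w[1] * np.ptp(res.resid) + w[2] * np.var(res.resid)

      best = None
      for _ in range(20):                                      # GA stand-in: random candidate parameters
          seasonal = int(2 * rng.integers(3, 25) + 1)          # odd window lengths, as STL requires
          trend = int(2 * rng.integers(13, 101) + 1)           # odd and larger than the period
          val = objective(seasonal, trend)
          if best is None or val < best[0]:
              best = (val, seasonal, trend)
      print("best (objective, seasonal, trend):", best)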

  11. Communication: Active space decomposition with multiple sites: Density matrix renormalization group algorithm

    SciTech Connect

    Parker, Shane M.; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μE_h or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  12. Optimum and Heuristic Algorithms for Finite State Machine Decomposition and Partitioning

    DTIC Science & Technology

    1989-09-01

    Pranav Ashar, Srinivas Devadas, and A. Richard Newton

  13. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and, at the same time, simplest models both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for the linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents a correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for the linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and increases with the mode time scale. In this report we combine these two methods in such a way that the developed algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to provide adequate and, at the same time, simplest ("optimal") models of climate systems

  14. Efficient detection and recognition algorithm of reference points in photogrammetry

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Liu, Gang; Zhu, Lichun; Li, Xiaofeng; Zhang, Yuhai; Shan, Siyu

    2016-04-01

    In photogrammetry, an approach for the automatic detection and recognition of reference points has been proposed to meet the requirements on the detection and matching of reference points. The reference points used here are CCTs (circular coded targets), which consist of two parts: a round target point in the central region and a circular encoding band in the surrounding region. Firstly, the contours of the image are extracted, after which noise and disturbances in the image are filtered out by means of a series of criteria, such as the contour area and the correlation coefficient between two contour regions. Secondly, cubic spline interpolation is applied to the central contour region of the CCT. The contours of the interpolated image are extracted again, and then least-squares ellipse fitting is performed to calculate the center coordinates of the CCT. Finally, the encoded value is obtained from the angle information of the circular encoding band of the CCT. The experimental results show that the presented algorithm locates the CCT with sub-pixel precision, and the recognition accuracy remains high even when the background of the image is complex and full of disturbances. In addition, the algorithm is robust and its runtime is fast.
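
    The center-location step (contour extraction followed by least-squares ellipse fitting) has a direct OpenCV counterpart; the synthetic target, the area filter, and cv2.fitEllipse below are illustrative substitutes for the authors' own implementation.

      # Contour extraction + ellipse fitting sketch for a synthetic circular target (OpenCV).
      import cv2
      import numpy as np

      # Hypothetical test image: one filled ellipse standing in for the CCT's central target.
      img = np.zeros((200, 200), dtype=np.uint8)
      cv2.ellipse(img, (105, 98), (30, 22), 15, 0, 360, 255, -1)

      _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
      contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

      for c in contours:
          if cv2.contourArea(c) < 50 or len(c) < 5:   # crude noise filter; fitEllipse needs >= 5 points
              continue
          (cx, cy), (major, minor), angle = cv2.fitEllipse(c)   # least-squares ellipse fit
          print(f"center=({cx:.2f}, {cy:.2f}) axes=({major:.1f}, {minor:.1f}) angle={angle:.1f}")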

  15. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Askok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
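
    Of the four methods compared, NMF is the simplest to reproduce in a few lines; the sketch below factors synthetic non-negative spectra with scikit-learn and is not the SDA or the spectral emulator used in the paper.

      # NMF sketch on synthetic non-negative spectra (scikit-learn); illustrative only.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(3)
      wavelength = np.linspace(0, 1, 200)

      # Two hypothetical emission "features" and random non-negative mixing weights.
      bases = np.vstack([np.exp(-((wavelength - c) / 0.03) ** 2) for c in (0.3, 0.7)])
      weights = rng.random((100, 2))
      spectra = weights @ bases + 0.01 * rng.random((100, 200))   # 100 noisy observed spectra

      model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
      W = model.fit_transform(spectra)      # per-spectrum activations
      H = model.components_                 # recovered spectral features
      print("reconstruction error:", model.reconstruction_err_)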

  16. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection using fixed-point arithmetic, which removes the floating-point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with multiplication by the inverse; since computing the inverse exactly would require iteration, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
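
    The division-removal idea can be illustrated with a small fixed-point sketch in which the reciprocal is replaced by a linear (chord) approximation evaluated in integer arithmetic; the Q16 format and the normalization interval are assumptions for the example, not the parameters used in the thesis.

      # Fixed-point sketch: replace x / d with x * approx(1/d), all in Q16.16 integer arithmetic.
      Q = 16                     # assumed Q16.16 format (not the thesis' actual word length)
      ONE = 1 << Q

      def to_fixed(x):           # float -> Q16.16 integer
          return int(round(x * ONE))

      def fixed_mul(a, b):       # Q16.16 multiply with rescaling
          return (a * b) >> Q

      def approx_recip(d_fix):
          # Chord approximation of 1/d on [1, 2):  1/d ~= 1.5 - 0.5 * d
          # (the divisor is assumed pre-normalized into [1, 2), as a mantissa would be).
          return fixed_mul(to_fixed(-0.5), d_fix) + to_fixed(1.5)

      x, d = 3.7, 1.43           # compute x / d without a divide
      result_fix = fixed_mul(to_fixed(x), approx_recip(to_fixed(d)))
      print("approx:", result_fix / ONE, "  exact:", x / d)   # the chord error is largest near d ~ 1.41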

  17. Electrocardiogram Signal Denoising Using Extreme-Point Symmetric Mode Decomposition and Nonlocal Means

    PubMed Central

    Tian, Xiaoying; Li, Yongshuai; Zhou, Huan; Li, Xiang; Chen, Lisha; Zhang, Xuming

    2016-01-01

    Electrocardiogram (ECG) signals contain a great deal of essential information which can be utilized by physicians for the diagnosis of heart diseases. Unfortunately, ECG signals are inevitably corrupted by noise which will severely affect the accuracy of cardiovascular disease diagnosis. Existing ECG signal denoising methods based on wavelet shrinkage, empirical mode decomposition and nonlocal means (NLM) cannot provide sufficient noise reduction or good preservation of detail, especially with high noise corruption. To address this problem, we have proposed a hybrid ECG signal denoising scheme by combining extreme-point symmetric mode decomposition (ESMD) with NLM. In the proposed method, the noisy ECG signals are first decomposed into several intrinsic mode functions (IMFs) and an adaptive global mean using ESMD. Then, the first several IMFs are filtered by the NLM method according to their frequency, while the QRS complex detected from these IMFs is preserved as the dominant feature of the ECG signal and the remaining IMFs are left unprocessed. The denoised IMFs and unprocessed IMFs are combined to produce the final denoised ECG signals. Experiments on both simulated ECG signals and real ECG signals from the MIT-BIH database demonstrate that the proposed method can suppress noise in ECG signals effectively while preserving the details very well, and it outperforms several state-of-the-art ECG signal denoising methods in terms of signal-to-noise ratio (SNR), root mean squared error (RMSE), percent root mean square difference (PRD) and mean opinion score (MOS) error index. PMID:27681729

  18. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
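
    A generic CCD point-source signal-to-noise expression of the kind such an algorithm evaluates is sketched below; the count rates, read noise, and aperture size are made-up inputs, and the formula is the standard CCD equation rather than the exact model of the report.

      # Generic CCD point-source SNR vs. integration time (illustrative parameter values).
      import numpy as np

      def point_source_snr(t, star_rate, sky_rate, dark_rate, read_noise, n_pix):
          """SNR = S / sqrt(S + n_pix*(B + D + RN^2)), with all terms in electrons."""
          signal = star_rate * t
          noise_var = signal + n_pix * (sky_rate * t + dark_rate * t + read_noise ** 2)
          return signal / np.sqrt(noise_var)

      t = np.array([1.0, 10.0, 100.0, 1000.0])           # observation times in seconds
      snr = point_source_snr(t, star_rate=50.0, sky_rate=2.0,
                             dark_rate=0.1, read_noise=5.0, n_pix=25)
      for ti, s in zip(t, snr):
          print(f"t = {ti:7.1f} s  ->  SNR = {s:6.1f}")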

  19. Research on Loran-C Sky Wave Delay Estimation Using Eigen-decomposition Algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Hu, Y. H.; Liang, Q.

    2009-04-01

    A novel signal processing technique using an eigenvector (eigen-decomposition) algorithm for estimating sky wave delays in a Loran-C receiver is presented in this paper. This provides the basis on which to design a Loran-C receiver capable of adjusting its sampling point adaptively to the optimal value. The effect of the sky wave delay on the estimation accuracy of the algorithm is studied and compared with the IFFT technique. Simulation results show that this algorithm clearly provides better resolution and sharper peaks than the IFFT. Finally, experimental results using off-air data confirm these conclusions.

  20. Multimode algorithm for detection and tracking of point targets

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, Ronda; Er, Meng H.; Deshpande, Suyog D.; Chan, Philip

    1999-07-01

    This paper deals with the problem of detection and tracking of point targets from a sequence of IR images against slowly moving clouds as well as structural background. Many algorithms are reported in the literature for tracking sizeable targets with good results. However, the difficulties in tracking point targets arise from the fact that they are not easily discernible from point-like clutter. Though the point targets are moving, it is very difficult to detect and track them with reduced false alarm rates, because of the non-stationarity of the IR clutter, changing target statistics and sensor motion. The focus of research in this area is to reduce the false alarm rate to an acceptable level. In certain situations not detecting a true target is acceptable, but declaring a false target as a true one may not be acceptable. Although there are many approaches to tackle this problem, no single method works well in all situations. In this paper, we present a multi-mode algorithm involving scene stabilization using image registration, 2D spatial filtering based on the continuous wavelet transform, adaptive thresholding, accumulation of the thresholded frames and processing of the accumulated frame to obtain the final target trajectories. It is assumed that most of the targets occupy a couple of pixels. Head-on moving and maneuvering targets are not considered. The algorithm has been tested successfully with the available database and the results are presented.

  1. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.

  2. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DEMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DEMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  3. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
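
    Because edges in the time-expanded graph only point forward in time, a single forward sweep over the time layers yields the earliest arrival time to every location, which is the essence of the block-triangular decomposition described above; the tiny wind-field graph below is a made-up toy, not the Titan/Venus balloon model.

      # Forward sweep over a time-expanded graph: earliest arrival time to every location.
      import math

      # edges[t] maps a location at time t to locations reachable at time t+1 (toy wind field);
      # remaining at a location requires an explicit self-edge.
      edges = [
          {0: [0, 1], 1: [2]},          # t = 0 -> 1
          {0: [1], 1: [2], 2: [2, 3]},  # t = 1 -> 2
          {1: [3], 2: [3], 3: [3]},     # t = 2 -> 3
      ]
      n_locations, start = 4, 0

      arrival = [math.inf] * n_locations    # arrival[v] = earliest step at which v is reached
      arrival[start] = 0
      reachable = {start}
      for t in range(len(edges)):           # layers are processed strictly in time order
          nxt = set()
          for u in reachable:
              for v in edges[t].get(u, []):
                  if arrival[v] == math.inf:
                      arrival[v] = t + 1
                  nxt.add(v)
          reachable = nxt
      print("earliest arrival times:", arrival)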

  4. Parallel algorithm for computing points on a computation front hyperplane

    NASA Astrophysics Data System (ADS)

    Krasnov, M. M.

    2015-01-01

    A parallel algorithm for computing points on a computation front hyperplane is described. This task arises in the computation of a quantity defined on a multidimensional rectangular domain. Three-dimensional domains are usually discussed, but the material is given in the general form when the number of measurements is at least two. When the values of a quantity at different points are internally independent (which is frequently the case), the corresponding computations are independent as well and can be performed in parallel. However, if there are internal dependences (as, for example, in the Gauss-Seidel method for systems of linear equations), then the order of scanning points of the domain is an important issue. A conventional approach in this case is to form a computation front hyperplane (a usual plane in the three-dimensional case and a line in the two-dimensional case) that moves linearly across the domain at a certain angle. At every step in the course of motion of this hyperplane, its intersection points with the domain can be treated independently and, hence, in parallel, but the steps themselves are executed sequentially. At different steps, the intersection of the hyperplane with the entire domain can have a rather complex geometry and the search for all points of the domain lying on the hyperplane at a given step is a nontrivial problem. This problem (i.e., the computation of the coordinates of points lying in the intersection of the domain with the hyperplane at a given step in the course of hyperplane motion) is addressed below. The computations over the points of the hyperplane can be executed in parallel.
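
    For a rectangular domain, the points on the front at step s are simply the integer index tuples inside the box whose coordinates sum to s; the sketch below enumerates them for a small 3-D example, and the resulting list (or slices of it) could be distributed over workers.

      # Enumerate the points of an n-D box lying on the hyperplane i1 + ... + id = step.
      from itertools import product

      def front_points(shape, step):
          """All index tuples p inside the box 'shape' with sum(p) == step."""
          pts = []
          for p in product(*(range(n) for n in shape[:-1])):
              last = step - sum(p)
              if 0 <= last < shape[-1]:
                  pts.append(p + (last,))
          # Points on a given front are mutually independent, so this list
          # (e.g. partitioned by the first index) can be processed in parallel.
          return pts

      shape = (4, 3, 5)                       # a small 3-D domain
      for s in range(sum(n - 1 for n in shape) + 1):
          print(f"step {s:2d}: {len(front_points(shape, s))} points")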

  5. A novel algorithm for generating libration point orbits about the collinear points

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Shan, Jinjun

    2014-09-01

    This paper presents a numerical algorithm that can generate long-term libration point orbits (LPOs) and the transfer orbits from the parking orbits to the LPOs in the circular-restricted three-body problem (CR3BP) and the full solar system model without initial guesses. The families of quasi-periodic LPOs in the CR3BP can also be constructed with this algorithm. By using the dynamical behavior of the LPO, the transfer orbit from the parking orbit to the LPO is generated using a bisection method. At the same time, a short segment of the target LPO connected with the transfer orbit is obtained; this short segment is then extended by correcting the state towards its adjacent point on the stable manifold of the target LPO with a differential evolution algorithm. By applying the correction strategy repeatedly, the LPO can be extended to any length as needed. Moreover, combined with a continuation procedure, this algorithm can be used to generate the families of quasi-periodic LPOs in the CR3BP.

  6. A Three-level BDDC algorithm for saddle point problems

    SciTech Connect

    Tu, X.

    2008-12-10

    BDDC algorithms have previously been extended to the saddle point problems arising from mixed formulations of elliptic and incompressible Stokes problems. In these two-level BDDC algorithms, all iterates are required to be in a benign space, a subspace in which the preconditioned operators are positive definite. This requirement can lead to large coarse problems, which have to be generated and factored by a direct solver at the beginning of the computation and can ultimately become a bottleneck. An additional level is introduced in this paper to solve the coarse problem approximately and to remove this difficulty. This three-level BDDC algorithm keeps all iterates in the benign space, and conjugate gradient methods can therefore be used to accelerate the convergence. This work is an extension of the three-level BDDC methods for standard finite element discretizations of elliptic problems, and the same rate of convergence is obtained for the mixed formulation of the same problems. An estimate of the condition number for this three-level BDDC method is provided and numerical experiments are discussed.

  7. New point matching algorithm for panoramic reflectance images

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong; Zlatanova, Sisi

    2007-11-01

    Much attention is paid to the registration of terrestrial point clouds nowadays. Research is carried out towards improved efficiency and automation of the registration process. The most important part of registration is finding correspondences. The panoramic reflectance images are generated according to the angular coordinates and reflectance value of each 3D point of 360° full scans. Since such an image is similar to a black-and-white photo, it is possible to apply image matching to it. Therefore, this paper reports a new corresponding point matching algorithm for panoramic reflectance images. Firstly, the SIFT (Scale Invariant Feature Transform) method is employed to extract distinctive invariant features from the panoramic images that can be used to perform reliable matching between different views of an object or scene. The correspondences are then identified by finding, for each keypoint from the first image, its nearest neighbors among the keypoints in the second image. The rigid geometric invariance derived from the point cloud is used to prune false correspondences. Finally, an iterative process is employed to include more new matches in the computation of the transformation parameters until the computation accuracy reaches a predefined accuracy threshold. The approach is tested with panoramic reflectance images (indoor and outdoor scenes) acquired by the laser scanner FARO LS 880.
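
    The SIFT detection and nearest-neighbour matching steps have direct OpenCV equivalents; the sketch below shows only that part, with Lowe's ratio test, on synthetic stand-in images, and omits the reflectance-image generation and the point-cloud rigidity check.

      # SIFT keypoint matching sketch (OpenCV); geometric pruning from the point cloud is omitted.
      import cv2
      import numpy as np

      rng = np.random.default_rng(8)
      # Synthetic stand-ins for two panoramic reflectance images (real data would be loaded from the scans).
      base = cv2.GaussianBlur((rng.random((256, 512)) * 255).astype(np.uint8), (0, 0), 3)
      img1, img2 = base, np.roll(base, 40, axis=1)      # second "view": a shifted copy

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # Nearest-neighbour matching with Lowe's ratio test to discard ambiguous matches.
      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
      print(f"{len(good)} tentative correspondences")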

  8. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the wavelet transform. Secondly, the LiveWire shortest path is calculated based on a control-point-set direction search that utilizes the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest path values, thus reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform and of the optimal path searching method based on the control-point-set direction search: the former offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. The algorithm therefore improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All the methods mentioned above play a large role in improving the execution efficiency and the robustness of the algorithm.

  9. A maximum power point tracking algorithm for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage and current characteristic of a photovoltaic (PV) cell is highly nonlinear and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP is not published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, suppose the voltage current characteristic and the constant power contours are plotted on the same voltage current plane, the point of tangency between the device characteristic and the constant power contours is the sought for MPP. It is subject to change with the incident irradiation and temperature and hence the algorithm that attempts to maintain the MPP should be adaptive in nature and is supposed to have fast convergence and the least misadjustment. There are two parts in its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. Availability of power electronics circuits made it possible to design efficient converters. In this paper although we do not show the results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking in case of a commercially available solar panel MSX-60. The power electronics circuit is simulated by PSIM software.
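
    A minimal numerical illustration of locating the MPP on a single-diode PV model is given below; the parameters are rough MSX-60-like stand-ins, and a dense sweep replaces the recursive Lagrange-based solution described in the abstract.

      # Locate the maximum power point of a simple single-diode PV model by a dense sweep.
      import numpy as np

      # Rough MSX-60-like parameters (illustrative only; series/shunt resistances neglected).
      I_ph, Ns, n, Vt = 3.8, 36, 1.3, 0.0257      # photocurrent [A], cells, ideality, thermal voltage [V]
      V_oc = 21.1
      I_0 = I_ph / (np.exp(V_oc / (n * Ns * Vt)) - 1.0)

      V = np.linspace(0.0, V_oc, 5000)
      I = I_ph - I_0 * (np.exp(V / (n * Ns * Vt)) - 1.0)
      P = V * I

      k = np.argmax(P)
      print(f"MPP:  V = {V[k]:.2f} V,  I = {I[k]:.2f} A,  P = {P[k]:.1f} W")
      # A DC-DC converter would then adjust its duty cycle so the load sees R = V_mpp / I_mpp.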

  10. A fast algorithm based on the domain decomposition method for scattering analysis of electrically large objects

    NASA Astrophysics Data System (ADS)

    Yin, Lei; Hong, Wei

    2002-01-01

    By combining the finite difference (FD) method with the domain decomposition method (DDM), a fast and rigorous algorithm is presented in this paper for the scattering analysis of extremely large objects. Unlike conventional methods, such as the method of moments (MOM) and the FD method, the new algorithm decomposes the original large domain into small subdomains and chooses the most efficient method to solve the electromagnetic (EM) equations on each subdomain individually. Therefore the computational complexity and scale are substantially reduced. The iterative procedure of the algorithm and the implementation of the virtual boundary conditions are discussed in detail. During the scattering analysis of an electrically large cylinder, the conformal band computational domain along the circumference of the cylinder is decomposed into sections, which results in a series of band matrices with very narrow bands. Compared with the traditional FD method, it decreases the consumption of computer memory and CPU time from O(N²) to O(N/m) and O(N), respectively, where m is the number of subdomains and N is the number of nodes or unknowns. Furthermore, this method can easily be applied to the analysis of arbitrarily shaped cylinders because the subdomains can be divided in any possible form. On the other hand, increasing the number of subdomains hardly increases the computing time, which makes it possible to analyze the EM scattering problems of extremely large cylinders on a single PC. The EM scattering by two-dimensional cylinders with a maximum perimeter of 100,000 wavelengths is analyzed. Moreover, this method is very suitable for parallel computation, which can further improve the computational efficiency.

  11. Nondyadic decomposition algorithm with Meyer's wavelet packets: an application to EEG signal

    NASA Astrophysics Data System (ADS)

    Carre, Philippe; Richard, Noel; Fernandez-Maloigne, Christine; Paquereau, Joel

    1999-10-01

    In this paper, we propose an original decomposition scheme based on Meyer's wavelets. In contrast to classical wavelet packet analysis, the decomposition is an adaptive segmentation of the frequency axis which does not use a filter bank. This permits higher flexibility in the definition of the frequency bands. The decomposition computes all possible partitions of a sequential space, not only those that come from a dyadic decomposition. Our technique is applied to the electroencephalogram signal; here the purpose is to extract a best basis for the frequency decomposition. This study is part of a multimodal functional cerebral imagery project.

  12. Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter J.

    2016-10-01

    Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes, able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of (Demmel et al. in SIAM J Sci Comput 34(1):206-239, 2012), which is further used as a basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimum required communication makes it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
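
    The TSQR reduction is easy to demonstrate serially: QR-factor the row blocks of the tall-and-skinny matrix, stack the small R factors, and QR again, after which the SVD of the final small factor gives the singular values of the full matrix; in the parallel algorithm of Demmel et al. the block loop below would be distributed over processors.

      # Serial demonstration of the TSQR reduction for a tall-and-skinny snapshot matrix.
      import numpy as np

      rng = np.random.default_rng(4)
      A = rng.normal(size=(100_000, 20))          # rows >> columns ("tall and skinny")

      # Stage 1: independent QR on each row block (these would run on separate processors).
      blocks = np.array_split(A, 8, axis=0)
      R_stack = np.vstack([np.linalg.qr(b, mode="reduced")[1] for b in blocks])

      # Stage 2: QR of the stacked small R factors; its R equals (up to signs) the R of A.
      R_final = np.linalg.qr(R_stack, mode="reduced")[1]

      # Singular values of A then follow from the small 20 x 20 factor alone.
      s_tsqr = np.linalg.svd(R_final, compute_uv=False)
      s_direct = np.linalg.svd(A, compute_uv=False)
      print("max singular-value difference:", np.max(np.abs(s_tsqr - s_direct)))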

  13. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to its neighbors. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
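
    At the core of the projection is the closest-point-on-a-triangle computation; the sketch below approximates it by brute force over a barycentric grid on a toy two-triangle "triangulation" (an exact region-based test and a spatial search structure would be used in a real implementation).

      # Closest point on a toy triangulated surface, by brute force over triangles and a
      # barycentric grid on each (illustration only; not an exact closest-point test).
      import numpy as np

      def closest_point_on_triangle(p, a, b, c, n=50):
          u = np.linspace(0.0, 1.0, n)
          uu, vv = np.meshgrid(u, u)
          mask = uu + vv <= 1.0                          # valid barycentric pairs
          uu, vv = uu[mask], vv[mask]
          ww = 1.0 - uu - vv
          pts = uu[:, None] * a + vv[:, None] * b + ww[:, None] * c
          d2 = np.sum((pts - p) ** 2, axis=1)
          k = np.argmin(d2)
          return pts[k], np.sqrt(d2[k])

      # Toy "global triangulation": two triangles in the z = 0 plane; query point above them.
      tris = [(np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])),
              (np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([0., 1., 0.]))]
      p = np.array([0.8, 0.9, 0.5])
      best = min((closest_point_on_triangle(p, *t) for t in tris), key=lambda r: r[1])
      print("closest point:", best[0], "distance:", best[1])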

  14. Experimental Design for Groundwater Pumping Estimation Using a Genetic Algorithm (GA) and Proper Orthogonal Decomposition (POD)

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Cheng, W.; Yeh, W. W.

    2010-12-01

    This study optimizes observation well locations and sampling frequencies for the purpose of estimating unknown groundwater extraction in an aquifer system. Proper orthogonal decomposition (POD) is used to reduce the groundwater flow model, thus reducing the computation burden and data storage space associated with solving this problem for heavily discretized models. This reduced model can store a significant amount of system information in a much smaller reduced state vector. Along with the sensitivity equation method, the proposed approach can efficiently compute the Jacobian matrix that forms the information matrix associated with the experimental design. The criterion adopted for experimental design is the maximization of the trace of the weighted information matrix. Under certain conditions, this is equivalent to the classical A-optimality criterion established in experimental design. A genetic algorithm (GA) is used to optimize the observation well locations and sampling frequencies for maximizing the collected information from the hydraulic head sampling at the observation wells. We applied the proposed approach to a hypothetical 30,000-node groundwater aquifer system. We studied the relationship among the number of observation wells, observation well locations, sampling frequencies, and the collected information for estimating unknown groundwater extraction.
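
    The POD reduction amounts to an SVD of a snapshot matrix followed by projection onto the leading modes; the random snapshot data below is a stand-in for hydraulic-head snapshots, and the energy cut-off is an assumed choice.

      # POD sketch: build a reduced basis from snapshots and project a state onto it.
      import numpy as np

      rng = np.random.default_rng(5)
      n_nodes, n_snapshots = 30_000, 60
      # Hypothetical snapshot matrix: each column is a hydraulic-head field from one model run.
      snapshots = (rng.normal(size=(n_nodes, 5)) @ rng.normal(size=(5, n_snapshots))
                   + 0.01 * rng.normal(size=(n_nodes, n_snapshots)))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999) + 1)      # modes capturing 99.9% of the energy (assumed cut-off)
      basis = U[:, :r]                                 # reduced basis (n_nodes x r)

      full_state = snapshots[:, 0]
      reduced_state = basis.T @ full_state             # r numbers instead of 30,000
      print(f"kept {r} modes; reconstruction error:",
            np.linalg.norm(basis @ reduced_state - full_state) / np.linalg.norm(full_state))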

  15. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated selection of acupuncture points for EHF-therapy is presented. The prescription formed by the algorithm for the automated selection of points for acupunctural action has a recommendational character. Clinical investigations showed that applying the developed algorithm in EHF-therapy makes it possible to normalize the energetic state of the meridians and to effectively solve many problems of organism functioning.

  16. Asymptotic behavior of two algorithms for solving common fixed point problems

    NASA Astrophysics Data System (ADS)

    Zaslavski, Alexander J.

    2017-04-01

    The common fixed point problem is to find a common fixed point of a finite family of mappings. In the present paper our goal is to obtain its approximate solution using two perturbed algorithms. The first algorithm is an iterative method for problems in a metric space, while the second one is a dynamic string-averaging algorithm for problems in a Hilbert space.

  17. A patch-based tensor decomposition algorithm for M-FISH image classification.

    PubMed

    Wang, Min; Huang, Ting-Zhu; Li, Jingyao; Wang, Yu-Ping

    2016-05-03

    Multiplex-fluorescence in situ hybridization (M-FISH) is a chromosome imaging technique which can be used to detect chromosomal abnormalities such as translocations, deletions, duplications, and inversions. Chromosome classification from M-FISH imaging data is a key step to implement the technique. In the classified M-FISH image, each pixel in a chromosome is labeled with a class index and drawn with a pseudo-color so that geneticists can easily conduct diagnosis, for example, identifying chromosomal translocations by examining color changes between chromosomes. However, the information of pixels in a neighborhood is often overlooked by existing approaches. In this work, we assume that the pixels in a patch belong to the same class and use the patch to represent the center pixel's class information, which allows us to use the correlations of neighboring pixels and the structural information across different spectral channels for the classification. On the basis of this assumption, we propose a patch-based classification algorithm using higher order singular value decomposition (HOSVD). The developed method has been tested on a comprehensive M-FISH database that we established, demonstrating improved performance. When compared with other pixel-wise M-FISH image classifiers such as fuzzy c-means clustering (FCM), adaptive fuzzy c-means clustering (AFCM), improved adaptive fuzzy c-means clustering (IAFCM), and sparse representation classification (SparseRC) methods, the proposed method gave the highest correct classification ratio (CCR), which can translate into improved diagnosis of genetic diseases and cancers. © 2016 International Society for Advancement of Cytometry.

  18. Application of spectral decomposition algorithm for mapping water quality in a turbid lake (Lake Kasumigaura, Japan) from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio

    The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA), based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g. using single band, band ratio, or arithmetic calculation of band) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, for mapping chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained by a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra: i.e., spectral end-members) derived from bio-optical modeling, we can directly apply the model to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key point for successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.

  19. An efficient floating-point to fixed-point conversion process for biometric algorithm on DaVinci DSP architecture

    NASA Astrophysics Data System (ADS)

    Konvalinka, Ira; Quddus, Azhar; Asraf, Daniel

    2009-05-01

    Today there is no direct path for converting a floating-point algorithm implementation into an optimized fixed-point implementation. This paper proposes a novel and efficient methodology for Floating-point to Fixed-point Conversion (FFC) of the biometric Fingerprint Algorithm Library (FAL) on the fixed-point DaVinci processor. A general FFC research task is streamlined into smaller tasks which can be accomplished with lower effort and higher certainty. Formally specified in this paper is the optimization target in FFC: to preserve floating-point accuracy and to reduce execution time, while preserving the majority of the algorithm code base. A comprehensive eight-point strategy is formulated to achieve that target. Both a local optimization flow (focused on the most time-consuming routines) and a global one (to optimize across multiple routines) are used. Characteristic phases in the FFC activity are presented using data from applying the proposed FFC methodology to FAL, starting with the target optimization specification, through speed optimization breakthroughs, and finalized with validation of FAL accuracy after the execution time optimization. The FAL implementation resulted in a biometric verification time reduction by over a factor of 5, with negligible impact on accuracy. Any algorithm developer facing the task of implementing a floating-point algorithm on the DaVinci DSP is expected to benefit from this presentation.
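
    The heart of any FFC exercise is choosing a fixed-point format and verifying the accuracy impact; the sketch below converts a toy similarity-score routine to Q15 integer arithmetic and reports the error, which is the kind of validation loop described above. The routine and the format are illustrative and are not part of FAL.

      # Toy float-to-fixed conversion check: a dot-product "score" in Q15 vs. double precision.
      import numpy as np

      FRAC_BITS = 15                     # assumed Q1.15 format for values in [-1, 1)
      SCALE = 1 << FRAC_BITS

      def to_q15(x):
          return np.clip(np.round(x * SCALE), -SCALE, SCALE - 1).astype(np.int64)

      def score_float(a, b):
          return float(np.dot(a, b) / len(a))

      def score_fixed(a_q, b_q):
          acc = np.sum(a_q * b_q)                         # products accumulate in Q2.30 / int64
          return int(acc // len(a_q)) / (SCALE * SCALE)   # rescale back to a float for comparison

      rng = np.random.default_rng(6)
      a, b = rng.uniform(-1, 1, 256), rng.uniform(-1, 1, 256)
      f, q = score_float(a, b), score_fixed(to_q15(a), to_q15(b))
      print(f"float: {f:.6f}   fixed: {q:.6f}   abs error: {abs(f - q):.2e}")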

  20. LIFT: a nested decomposition algorithm for solving lower block triangular linear programs. Report AMD-859. [In PL/I for IBM 370

    SciTech Connect

    Ament, D; Ho, J; Loute, E; Remmelswaal, M

    1980-06-01

    Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)

  1. Technical Note: MRI only prostate radiotherapy planning using the statistical decomposition algorithm

    SciTech Connect

    Siversson, Carl; Nordström, Fredrik; Nilsson, Terese; Nyholm, Tufve; Jonsson, Joakim; Gunnlaugsson, Adalsteinn; Olsson, Lars E.

    2015-10-15

    Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations, directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2 weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to assure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean average error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. Average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning. The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties

  2. A single-point model from SO(3) decomposition of the axisymmetric mean-flow coupled two-point equations

    NASA Astrophysics Data System (ADS)

    Clark, Timothy; Rubinstein, Robert; Kurien, Susan

    2016-11-01

    The fluctuating-pressure-strain correlations present a significant challenge for engineering turbulence models. For incompressible flow, the pressure is an intrinsically two-point quantity (represented as Green's function, integrated over the field), and therefore representing the implied scale-dependence in a one-point model is difficult. The pioneering work of Launder, Reece and Rodi (1975) presented a model that satisfied the tensor symmetries and dimensional consistency with the underlying Green's function solution, and described the assumptions embedded in their one-point model. Among the constraints of such a model is its inability to capture scale-dependent anisotropic flow development. Restricting our attention to the case of axisymmetric mean-field strains, we present a one-point model of the mean-flow couplings, including the pressure-strain terms, starting from a directional (tensorially isotropic) and polarization (tensorially anisotropic and trace-free) representation of the two-point correlation equations, truncated to the lowest order terms. The model results are then compared to simulations performed using arbitrary orders of spherical harmonic functions from which the exact solution may be obtained to desired accuracy.

  3. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  4. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
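
    The 2 x 2 block elimination at the heart of the Schur complement technique can be written down for a small dense matrix: order subdomain unknowns first and interface unknowns last, eliminate the subdomain block, and solve the interface system; the toy problem below ignores the approximations and the parallel aspects discussed in the paper.

      # Dense toy Schur-complement solve: subdomain unknowns first, interface unknowns last.
      import numpy as np

      rng = np.random.default_rng(7)
      n_i, n_g = 40, 8                                   # interior (subdomain) and interface sizes
      A = rng.normal(size=(n_i + n_g, n_i + n_g))
      A = A @ A.T + (n_i + n_g) * np.eye(n_i + n_g)      # SPD so the elimination is well posed
      b = rng.normal(size=n_i + n_g)

      A_ii, A_ig = A[:n_i, :n_i], A[:n_i, n_i:]
      A_gi, A_gg = A[n_i:, :n_i], A[n_i:, n_i:]
      b_i, b_g = b[:n_i], b[n_i:]

      # Schur complement on the interface: S = A_gg - A_gi A_ii^{-1} A_ig.
      S = A_gg - A_gi @ np.linalg.solve(A_ii, A_ig)
      x_g = np.linalg.solve(S, b_g - A_gi @ np.linalg.solve(A_ii, b_i))
      x_i = np.linalg.solve(A_ii, b_i - A_ig @ x_g)

      x = np.concatenate([x_i, x_g])
      print("residual:", np.linalg.norm(A @ x - b))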

  5. A hardware-oriented algorithm for floating-point function generation

    NASA Technical Reports Server (NTRS)

    O'Grady, E. Pearse; Young, Baek-Kyu

    1991-01-01

    An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
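
    The key operation named above, identifying the breakpoint interval containing the argument and then evaluating the local approximation, is sketched below with a binary search and linear interpolation; the breakpoints and the tabulated function are arbitrary examples, and a hardware version would replace the search with the memory-based scheme of the paper.

      # Function generation from arbitrary breakpoints: locate the interval, then interpolate.
      import numpy as np

      breakpoints = np.array([0.0, 0.5, 1.2, 2.0, 3.5, 5.0])      # arbitrary, non-uniform breakpoints
      values = np.sin(breakpoints)                                 # tabulated f at the breakpoints

      def generate(x):
          # Rapid interval identification (binary search), then linear interpolation.
          i = np.clip(np.searchsorted(breakpoints, x) - 1, 0, len(breakpoints) - 2)
          t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
          return values[i] + t * (values[i + 1] - values[i])

      for x in (0.3, 1.7, 4.9):
          print(f"f({x}) ~ {generate(x):.4f}   (exact sin: {np.sin(x):.4f})")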

  6. Formulation and error analysis for a generalized image point correspondence algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  7. Comparison between one-point calibration and two-point calibration approaches in a continuous glucose monitoring algorithm.

    PubMed

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl; Hejlesen, Ole

    2014-07-01

    The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from the calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. The sensor readings calibrated with the 1-point calibration approach were found to have higher accuracy than those calibrated with the 2-point calibration approach.

  8. An Efficient Exact Quantum Algorithm for the Integer Square-free Decomposition Problem.

    PubMed

    Li, Jun; Peng, Xinhua; Du, Jiangfeng; Suter, Dieter

    2012-01-01

    Quantum computers are known to be qualitatively more powerful than classical computers, but so far only a small number of different algorithms have been discovered that actually use this potential. It would therefore be highly desirable to develop other types of quantum algorithms that widen the range of possible applications. Here we propose an efficient and exact quantum algorithm for finding the square-free part of a large integer - a problem for which no efficient classical algorithm exists. The algorithm relies on properties of Gauss sums and uses the quantum Fourier transform. We give an explicit quantum network for the algorithm. Our algorithm introduces new concepts and methods that have not been used in quantum information processing so far and may be applicable to a wider class of problems.

  9. Linearly convergent inexact proximal point algorithm for minimization. Revision 1

    SciTech Connect

    Zhu, C.

    1993-08-01

    In this paper, we propose a linearly convergent inexact PPA for minimization, where the inner loop stops when the relative reduction on the residue (defined as the objective value minus the optimal value) of the inner loop subproblem meets some preassigned constant. This inner loop stopping criterion can be achieved in a fixed number of iterations if the inner loop algorithm has a linear rate on the regularized subproblems. Therefore the algorithm is able to avoid the computationally expensive process of solving the inner loop subproblems exactly or asymptotically accurately; a process required by most of the other linearly convergent PPAs. As applications of this inexact PPA, we develop linearly convergent iteration schemes for minimizing functions with singular Hessian matrices, and for solving hemiquadratic extended linear-quadratic programming problems. We also prove that Correa-Lemarechal's "implementable form" of PPA converges linearly under mild conditions.

  10. Preservation of quadrature Doppler signals from bidirectional slow blood flow close to the vessel wall using an adaptive decomposition algorithm.

    PubMed

    Zhang, Yufeng; Shi, Xinling; Zhang, Kexin; Chen, Jianhua

    2009-03-01

    A novel approach based on the phasing-filter (PF) technique and the empirical mode decomposition (EMD) algorithm is proposed to preserve quadrature Doppler signal components from bidirectional slow blood flow close to the vessel wall. Bidirectional mixed Doppler ultrasound signals, which were echoed from the forward and reverse moving blood and vessel wall, were initially separated to avoid the phase distortion of quadrature Doppler signals (which is induced from direct decomposition by the nonlinear EMD processing). Separated unidirectional mixed Doppler signals were decomposed into intrinsic mode functions (IMFs) using the EMD algorithm and the relevant IMFs that contribute to blood flow components were identified and summed to give the blood flow signals, whereby only the components from the bidirectional slow blood flow close to the vessel wall were retained independently. The complex quadrature Doppler blood flow signal was reconstructed from a combination of the extracted unidirectional Doppler blood flow signals. The proposed approach was applied to simulated and clinical Doppler signals. It is concluded from the experimental results that this approach is practical for the preservation of quadrature Doppler signal components from the bidirectional slow blood flow close to the vessel wall, and may provide more diagnostic information for the diagnosis and treatment of vascular diseases.

  11. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  12. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-03-01

    A laser point cloud contains only intensity information, so color information must be obtained from another sensor for visual interpretation. Cameras can provide texture, color, and other information about the corresponding object. Points with the color information of corresponding pixels in digital images can be used to generate a color point cloud, which is conducive to the visualization, classification and modeling of point clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), so the principles and processes for generating color point clouds differ between systems. The most prominent feature of panoramic images is the 360-degree field of view in the horizontal direction, which captures as much image information around the camera as possible. In this paper, we introduce a method to generate a color point cloud from a panoramic image and a laser point cloud, and derive the equation of the correspondence between points in panoramic images and laser point clouds. The fusion of the panoramic image and the laser point cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae are correct.
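
    As a rough illustration of this collinearity condition, the sketch below (an idealized equirectangular camera model written for this summary, not the authors' derived formulae) maps a laser point into pixel coordinates of a spherical panorama; the camera center, rotation matrix and image size are assumed to come from the MMS calibration.

        import numpy as np

        def project_to_panorama(point_xyz, cam_center, R, width, height):
            """Map a 3-D laser point to (column, row) in an ideal spherical panorama.

            Assumes the panorama spans 360 degrees horizontally and 180 degrees
            vertically, and that R rotates world coordinates into the camera frame.
            """
            p = R @ (np.asarray(point_xyz, float) - np.asarray(cam_center, float))
            azimuth = np.arctan2(p[1], p[0])                 # [-pi, pi)
            elevation = np.arcsin(p[2] / np.linalg.norm(p))  # [-pi/2, pi/2]
            col = (azimuth + np.pi) / (2 * np.pi) * width
            row = (np.pi / 2 - elevation) / np.pi * height
            return col, row

        # Hypothetical example: identity orientation, camera at the origin.
        print(project_to_panorama([1.0, 1.0, 0.5], [0.0, 0.0, 0.0], np.eye(3), 8000, 4000))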

  13. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    NASA Astrophysics Data System (ADS)

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-09-01

    The field of complex network clustering has gained considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among the membrane structures, the evolutionary algorithm is carried out within the membrane structures, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising.

  14. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    PubMed Central

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-01-01

    The field of complex network clustering has gained considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among the membrane structures, the evolutionary algorithm is carried out within the membrane structures, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising. PMID:27670156

  15. Multidirectional hybrid algorithm for the split common fixed point problem and application to the split common null point problem.

    PubMed

    Li, Xia; Guo, Meifang; Su, Yongfu

    2016-01-01

    In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. The application of the result is to consider the split common null point problem of maximal monotone operators in Banach spaces. Strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.

  16. An efficient, robust, domain-decomposition algorithm for particle Monte Carlo

    NASA Astrophysics Data System (ADS)

    Brunner, Thomas A.; Brantley, Patrick S.

    2009-06-01

    A previously described algorithm [T.A. Brunner, T.J. Urbatsch, T.M. Evans, N.A. Gentile, Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo, Journal of Computational Physics 212 (2) (2006) 527-539] for doing domain decomposed particle Monte Carlo calculations in the context of thermal radiation transport has been improved. It has been extended to support cases where the number of particles in a time step are unknown at the beginning of the time step. This situation arises when various physical processes, such as neutron transport, can generate additional particles during the time step, or when particle splitting is used for variance reduction. Additionally, several race conditions that existed in the previous algorithm and could cause code hangs have been fixed. This new algorithm is believed to be robust against all race conditions. The parallel scalability of the new algorithm remains excellent.

  17. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges which are oriented in the vertical, horizontal, and two diagonal directions. It was incorrectly detecting points on edges which are not in the four main directions (vertical, horizontal, and two diagonals). The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as

  18. An ISAR imaging algorithm for the space satellite based on empirical mode decomposition theory

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Dong, Chun-zhu

    2014-11-01

    Currently, high-resolution imaging of space satellites is a popular topic in the field of radar technology. In contrast with regular targets, a satellite target moves along its trajectory while its solar panel substrate changes orientation toward the sun to obtain energy. To address this imaging problem, a signal-separation and imaging approach based on empirical mode decomposition (EMD) theory is proposed; the approach separates the signals of the two parts of the satellite target, the main body and the solar panel substrate, and forms images of the target. Simulation experiments demonstrate the validity of the proposed method.

  19. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation.

    PubMed

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-11-30

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model.

  20. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  1. Bayesian Nonnegative CP Decomposition-based Feature Extraction Algorithm for Drowsiness Detection.

    PubMed

    Qian, Dong; Wang, Bei; Qing, Yun; Zhang, Tao; Zhang, Yu; Wang, Xing; Nakamura, Masatoshi

    2016-10-19

    Daytime short nap involves physiological processes, such as alertness, drowsiness and sleep. The study of the relationship between drowsiness and nap based on physiological signals is a great way to gain a better understanding of the periodic rhythms of physiological states. A model of Bayesian nonnegative CP decomposition (BNCPD) was proposed to extract common multiway features from the group-level electroencephalogram (EEG) signals. As an extension of the nonnegative CP decomposition, the BNCPD model involves prior distributions of factor matrices, while the underlying CP rank could be determined automatically based on a Bayesian nonparametric approach. In terms of computational speed, variational inference was applied to approximate the posterior distributions of unknowns. Extensive simulations on the synthetic data illustrated the capability of our model to recover the true CP rank. As a real-world application, the performance of drowsiness detection during daytime short nap by using the BNCPD-based features was compared with that of other traditional feature extraction methods. Experimental results indicated that the BNCPD model outperformed other methods for feature extraction in terms of two evaluation metrics, as well as different parameter settings. Our approach is likely to be a useful tool for automatic CP rank determination and for offering plausible multiway physiological information about individual states.

  2. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes.

  3. Parameter Space of Fixed Points of the Damped Driven Pendulum Susceptible to Control of Chaos Algorithms

    NASA Astrophysics Data System (ADS)

    Dittmore, Andrew; Trail, Collin; Olsen, Thomas; Wiener, Richard J.

    2003-11-01

    We have previously demonstrated the experimental control of chaos in a Modified Taylor-Couette system with hourglass geometry (Richard J. Wiener et al., Phys. Rev. Lett. 83, 2340 (1999)). Identifying fixed points susceptible to algorithms for the control of chaos is key. We seek to learn about this process in the accessible numerical model of the damped, driven pendulum. Following Baker (Gregory L. Baker, Am. J. Phys. 63, 832 (1995)), we seek points susceptible to the OGY (E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990)) algorithm. We automate the search for fixed points that are candidates for control. We present comparisons of the space of candidate fixed points with the bifurcation diagrams and Poincare sections of the system. We demonstrate control at fixed points which do not appear on the attractor. We also show that the control algorithm may be employed to shift the system between non-communicating branches of the attractor.

  4. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
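
    For orientation, the direct-matrix-inversion baseline mentioned above amounts to a pixel-wise linear solve with the basis-material attenuation coefficients; the sketch below (coefficients are made up, and HYPR-NLM itself is not reproduced) shows why that baseline amplifies noise: each pixel is decomposed independently with no spatial regularization.

        import numpy as np

        def direct_decomposition(img_low, img_high, mu):
            """Pixel-wise two-material decomposition by direct matrix inversion.

            mu is a 2 x 2 matrix of assumed attenuation coefficients:
            rows = [low-energy, high-energy], columns = [material 1, material 2].
            """
            mu_inv = np.linalg.inv(mu)
            stacked = np.stack([img_low.ravel(), img_high.ravel()])  # 2 x Npix
            basis = mu_inv @ stacked
            return basis[0].reshape(img_low.shape), basis[1].reshape(img_low.shape)

        # Illustrative coefficients only (e.g. an iodine/water-like pair).
        mu = np.array([[4.9, 0.23],
                       [2.6, 0.21]])
        low, high = np.random.rand(64, 64), np.random.rand(64, 64)  # stand-in DECT images
        material_1, material_2 = direct_decomposition(low, high, mu)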

  5. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogeneous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are on the order of half a second on a desktop computer. The precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature.

  6. Double-patterning decomposition, design compliance, and verification algorithms at 32nm hp

    NASA Astrophysics Data System (ADS)

    Tritchkov, Alexander; Glotov, Petr; Komirenko, Sergiy; Sahouria, Emile; Torres, Andres; Seoud, Ahmed; Wiaux, Vincent

    2008-10-01

    Double patterning (DP) technology is one of the main candidates for RET of critical layers at 32nm hp. DP technology is a strong RET technique that must be considered throughout the IC design and post-tapeout flows. We present a complete DP technology strategy including a DRC/DFM component, physical synthesis support and mask synthesis. In particular, the methodology contains: a DRC-like layout DP compliance and design verification function; a parameterization scheme that codifies manufacturing knowledge and capability; judicious use of physical effect simulation to improve double-patterning quality; an efficient, high-capacity mask synthesis function for post-tapeout processing; and a verification function to determine the correctness and quality of a DP solution. Double patterning technology requires decomposition of the design to relax the pitch and effectively allows processing with k1 factors smaller than the theoretical Rayleigh limit of 0.25. The traditional DP process, Litho-Etch-Litho-Etch (LELE) [1], requires an additional develop and etch step, which eliminates the resolution degradation that occurs in multiple exposures processed in the same resist layer. The theoretical k1 for a double-patterning technology applied to a 32nm half-pitch design using a 1.35NA 193nm imaging system is 0.44, whereas the k1 for a single patterning of this same design would be 0.22 [2], which is sub-resolution. This paper demonstrates the methods developed at Mentor Graphics for double patterning design compliance and decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. It also demonstrates the implementation of the verification solution in the chip design flow and post-tapeout flow.

  7. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors’ method is evaluated on the two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  8. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple Point-in-Polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
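
    For reference, the O(log N) result quoted above is the standard fan binary search on a convex polygon stored in counter-clockwise order; the sketch below implements that baseline (not the paper's O(1) space-subdivision method).

        def cross(o, a, b):
            """Z-component of the 2-D cross product (a - o) x (b - o)."""
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def point_in_convex_polygon(poly, p):
            """O(log N) inclusion test; poly is a convex polygon in CCW order."""
            n = len(poly)
            if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
                return False                        # outside the fan spanned at vertex 0
            lo, hi = 1, n - 1
            while hi - lo > 1:                      # binary search for the wedge containing p
                mid = (lo + hi) // 2
                if cross(poly[0], poly[mid], p) >= 0:
                    lo = mid
                else:
                    hi = mid
            return cross(poly[lo], poly[lo + 1], p) >= 0

        square = [(0, 0), (2, 0), (2, 2), (0, 2)]
        print(point_in_convex_polygon(square, (1, 1)), point_in_convex_polygon(square, (3, 1)))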

  9. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  10. Infrared point target detection based on exponentially weighted RLS algorithm and dual solution improvement

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Fan, Xiang; Ma, Dong-hui; Cheng, Zheng-dong

    2009-07-01

    The desire to maximize target detection range focuses attention on algorithms for detecting and tracking point targets. However, point target detection and tracking is a challenging task for two reasons: first, targets occupy only a few pixels or less amid complex noise and background clutter; second, real-time applications impose strict computational load requirements. Temporal signal processing algorithms offer superior clutter rejection to that of standard spatial processing approaches. In this paper, the traditional single-frame algorithm based on background prediction is extended to a consecutive multi-frame exponentially weighted recursive least squares (EWRLS) algorithm. Further, the dual solution of EWRLS (DEWLS) is derived to reduce the computational burden. The DEWLS algorithm uses only the inner products of point pairs in the training set; the prediction is given directly without computing any intermediate variables. Experimental results show that the RLS filter can greatly increase the signal-to-noise ratio (SNR) of images; it has better detection performance than the other algorithms mentioned; and moving targets can be detected within 2 or 3 frames with a lower false alarm rate. Moreover, with the dual solution improvement, computational efficiency is enhanced by over 41% relative to the EWRLS algorithm.
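
    The temporal background-prediction idea can be sketched with a standard exponentially weighted RLS recursion applied to one pixel's time series: the filter tracks the slowly varying background and a passing point target shows up as a residual spike. Parameter names and values below are illustrative, and the dual-solution (DEWLS) speed-up is not reproduced.

        import numpy as np

        def ewrls_residual(pixel_series, order=3, lam=0.95, delta=100.0):
            """Exponentially weighted RLS prediction residuals for one pixel."""
            w = np.zeros(order)
            w[0] = 1.0                                   # start by predicting the previous frame
            P = delta * np.eye(order)
            residuals = np.zeros(len(pixel_series))
            for t in range(order, len(pixel_series)):
                x = pixel_series[t - order:t][::-1]      # previous frames as regressors
                e = pixel_series[t] - w @ x              # prediction error (residual)
                k = P @ x / (lam + x @ P @ x)            # gain vector
                w = w + k * e                            # weight update
                P = (P - np.outer(k, x @ P)) / lam       # inverse-correlation update
                residuals[t] = e
            return residuals

        series = 10.0 + 0.1 * np.random.randn(100)
        series[60] += 5.0                                # transient point-target-like spike
        print(np.abs(ewrls_residual(series))[58:63].round(2))  # residual jumps at the transient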

  11. A multiple wavelength algorithm in color image analysis and its applications in stain decomposition in microscopy images.

    PubMed

    Zhou, R; Hammond, E H; Parker, D L

    1996-12-01

    Stains have been used in optical microscopy to visualize the distribution and intensity of substances to which they are attached. Quantitative measures of optical density in the microscopic images can in principle be used to determine the amount of the stain. When multiple dyes are used to simultaneously visualize several substances to which they are specifically attached, quantification of each stain cannot be made using any single wavelength because attenuation from the several stain components contributes to the total optical density. Although various dyes used as optical stains are perceived as specific colors, they, in fact, have complex attenuation spectra. In this paper, we present a technique for multiple wavelength image acquisition and spectral decomposition based upon the Lambert-Beer absorption law. This algorithm is implemented based on the different spectral properties of the various stain components. By using images captured at N wavelengths, N components with different colors can be separated. This algorithm is applied to microscopy images of doubly and triply labeled prostate tissue sections. Possible applications are discussed.
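
    Per pixel, the decomposition described above reduces to converting intensities to optical densities and solving an N x N linear system whose columns are the stains' attenuation spectra. A minimal sketch with made-up coefficients for a two-stain, two-wavelength case:

        import numpy as np

        def stain_decomposition(intensity_stack, incident, absorption):
            """Separate N stain components from images at N wavelengths (Lambert-Beer).

            intensity_stack : (N, H, W) transmitted intensities, one image per wavelength
            incident        : (N,) blank-field intensity per wavelength
            absorption      : (N, N) assumed attenuation of each stain (columns) at each
                              wavelength (rows); the numbers below are illustrative only.
            """
            od = -np.log(intensity_stack / incident[:, None, None])   # optical density
            n, h, w = od.shape
            amounts = np.linalg.solve(absorption, od.reshape(n, -1))  # per-pixel linear solve
            return amounts.reshape(n, h, w)

        A = np.array([[0.65, 0.10],
                      [0.20, 0.80]])
        I0 = np.array([255.0, 255.0])
        I = np.random.uniform(50, 250, size=(2, 32, 32))              # stand-in image stack
        stain_amounts = stain_decomposition(I, I0, A)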

  12. The removal of wall components in Doppler ultrasound signals by using the empirical mode decomposition algorithm.

    PubMed

    Zhang, Yufeng; Gao, Yali; Wang, Le; Chen, Jianhua; Shi, Xinling

    2007-09-01

    Doppler ultrasound systems, used for the noninvasive detection of vascular diseases, normally employ a high-pass filter (HPF) to remove the large, low-frequency components due to the vessel wall from the blood flow signal. Unfortunately, the filter also removes the low-frequency Doppler signals arising from slow-moving blood. In this paper, we propose to use a novel technique, called the empirical mode decomposition (EMD), to remove the wall components from the mixed signals. The EMD first decomposes a signal into a finite and usually small number of individual components named intrinsic mode functions (IMFs). Then a strategy based on the ratios between two adjacent values of the wall-to-blood signal ratio (WBSR) has been developed to automatically identify and remove the relevant IMFs that contribute to the wall components. This method is applied to process simulated and clinical Doppler ultrasound signals. Compared with the results based on the traditional high-pass filter, the new approach removes wall components from the mixed signals more effectively and objectively, and provides more accurate information on slow blood flow.
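
    Assuming the IMFs have already been produced by an EMD routine (not implemented here), the identify-and-reconstruct step can be sketched as below; the dominant-frequency rule is a simplified stand-in for the paper's WBSR-ratio strategy, with low-frequency IMFs attributed to wall motion.

        import numpy as np

        def remove_wall_components(imfs, fs, cutoff_hz=100.0):
            """Sum the IMFs whose dominant frequency lies above a cutoff."""
            kept = []
            for imf in imfs:
                spectrum = np.abs(np.fft.rfft(imf))
                freqs = np.fft.rfftfreq(len(imf), d=1.0 / fs)
                if freqs[np.argmax(spectrum)] >= cutoff_hz:
                    kept.append(imf)                       # treated as a blood-flow component
            return np.sum(kept, axis=0) if kept else np.zeros_like(imfs[0])

        # Toy example: a 400 Hz "blood" IMF plus a large 40 Hz "wall" IMF, fs = 10 kHz.
        fs = 10_000.0
        t = np.arange(0, 0.1, 1.0 / fs)
        imfs = [np.sin(2 * np.pi * 400 * t), 5.0 * np.sin(2 * np.pi * 40 * t)]
        blood_signal = remove_wall_components(imfs, fs)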

  13. A classification algorithm based on Cloude decomposition model for fully polarimetric SAR image

    NASA Astrophysics Data System (ADS)

    Xiang, Hongmao; Liu, Shanwei; Zhuang, Ziqi; Zhang, Naixin

    2016-11-01

    Remote sensing is an important technology for monitoring the coastal zone, but it is difficult to obtain effective optical data in cloudy or rainy weather. SAR is an important data source for monitoring the coastal zone because it can operate in all weather conditions. Fully polarimetric SAR data contain more information than single-polarization and multi-polarization SAR data. The experiment used a fully polarimetric Radarsat-2 SAR image covering the Yellow River Estuary. In view of the features of the study area, we carried out the H/α unsupervised classification, the H/α-Wishart unsupervised classification, and the H/α-Wishart unsupervised classification based on the results of Cloude decomposition. A new classification method is proposed which uses the Wishart supervised classification based on the result of the H/α-Wishart unsupervised classification. The experimental results showed that the new method effectively overcame the shortcomings of unsupervised classification and improved the classification accuracy significantly. It was also shown that the classification result of the SAR image had similar precision to that of a Landsat-7 image with the same classification method; the SAR image had better precision for water classification due to its sensitivity to water, and the Landsat-7 image had better precision for vegetation types.

  14. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near real-time performance if applied to critical clinical applications like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduced a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
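
    The non-regular partition idea can be sketched with scikit-learn's K-means: landmarks are grouped into one cluster per available core and each chunk is matched independently (here sequentially, with brute-force nearest neighbours). This is a simplified, platform-neutral sketch, not the Cell/B.E. implementation evaluated in the paper.

        import numpy as np
        from sklearn.cluster import KMeans

        def partition_landmarks(landmarks, n_cores=4):
            """Group landmarks into one chunk per core using K-means clustering."""
            labels = KMeans(n_clusters=n_cores, n_init=10).fit_predict(landmarks)
            return [landmarks[labels == k] for k in range(n_cores)]

        def match_chunk(chunk, targets):
            """Index of the nearest target point for every landmark in the chunk."""
            d = np.linalg.norm(chunk[:, None, :] - targets[None, :, :], axis=2)
            return np.argmin(d, axis=1)

        landmarks = np.random.rand(2000, 2)   # stand-in landmark coordinates
        targets = np.random.rand(2000, 2)     # stand-in target coordinates
        # In a real implementation each chunk below would be dispatched to its own core.
        correspondences = [match_chunk(c, targets) for c in partition_landmarks(landmarks)]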

  15. Performance Evaluation of Different Ground Filtering Algorithms for UAV-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Serifoglu, C.; Gungor, O.; Yilmaz, V.

    2016-06-01

    Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, in the literature, various ground filtering algorithms have been reported to filter the LiDAR data. Since the LiDAR data acquisition is still a costly process, using point clouds generated from the UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from the aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms as Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performances. The MLS and ETEW algorithms were found as the least successful ones. It was concluded that the point clouds generated from the UAVs can be a good alternative for LiDAR data.

  16. A new algorithm for computing multivariate Gauss-like quadrature points.

    SciTech Connect

    Taylor, Mark A.; Bos, Len P.; Wingate, Beth A.

    2004-06-01

    The diagonal-mass-matrix spectral element method has proven very successful in geophysical applications dominated by wave propagation. For these problems, the ability to run fully explicit time stepping schemes at relatively high order makes the method more competitive than finite element methods which require the inversion of a mass matrix. The method relies on Gauss-Lobatto points to be successful, since the grid points used are required to produce well conditioned polynomial interpolants, and be high quality 'Gauss-like' quadrature points that exactly integrate a space of polynomials of higher dimension than the number of quadrature points. These two requirements have traditionally limited the diagonal-mass-matrix spectral element method to use square or quadrilateral elements, where tensor products of Gauss-Lobatto points can be used. In non-tensor product domains such as the triangle, both optimal interpolation points and Gauss-like quadrature points are difficult to construct and there are few analytic results. To extend the diagonal-mass-matrix spectral element method to (for example) triangular elements, one must find appropriate points numerically. One successful approach has been to perform numerical searches for high quality interpolation points, as measured by the Lebesgue constant (such as minimum energy electrostatic points and Fekete points). However, these points typically do not have any Gauss-like quadrature properties. In this work, we describe a new numerical method to look for Gauss-like quadrature points in the triangle, based on a previous algorithm for computing Fekete points. Performing a brute force search for such points is extremely difficult. A common strategy to increase the numerical efficiency of these searches is to reduce the number of unknowns by imposing symmetry conditions on the quadrature points. Motivated by spectral element methods, we propose a different way to reduce the number of unknowns: We look for quadrature formula

  17. Parallel Decomposition of the Fictitious Lagrangian Algorithm and its Accuracy for Molecular Dynamics Simulations of Semiconductors.

    NASA Astrophysics Data System (ADS)

    Yeh, Mei-Ling

    We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with tight-binding total energy expression onto the hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code is expected to perform realistic simulations on very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of ions. We find that the accuracy of the fictitious Lagrangian scheme in small silicon cluster and very large silicon system simulations is good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only in the beginning of the run. The kinetic energy of electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on the average in the true quantum ground states. We also tie up some odds and ends regarding a few remaining questions about the fictitious Lagrangian method, such as the difference between the results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once compared with periodic quenching.

  18. Construction of point process adaptive filter algorithms for neural systems using sequential Monte Carlo methods.

    PubMed

    Ergün, Ayla; Barbieri, Riccardo; Eden, Uri T; Wilson, Matthew A; Brown, Emery N

    2007-03-01

    The stochastic state point process filter (SSPPF) and steepest descent point process filter (SDPPF) are adaptive filter algorithms for state estimation from point process observations that have been used to track neural receptive field plasticity and to decode the representations of biological signals in ensemble neural spiking activity. The SSPPF and SDPPF are constructed using, respectively, Gaussian and steepest descent approximations to the standard Bayes and Chapman-Kolmogorov (BCK) system of filter equations. To extend these approaches for constructing point process adaptive filters, we develop sequential Monte Carlo (SMC) approximations to the BCK equations in which the SSPPF and SDPPF serve as the proposal densities. We term the two new SMC point process filters SMC-PPFs and SMC-PPFD, respectively. We illustrate the new filter algorithms by decoding the wind stimulus magnitude from simulated neural spiking activity in the cricket cercal system. The SMC-PPFs and SMC-PPFD provide more accurate state estimates at low number of particles than a conventional bootstrap SMC filter algorithm in which the state transition probability density is the proposal density. We also use the SMC-PPFs algorithm to track the temporal evolution of a spatial receptive field of a rat hippocampal neuron recorded while the animal foraged in an open environment. Our results suggest an approach for constructing point process adaptive filters using SMC methods.

  19. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  20. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a high-quality Digital Elevation Model can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to separate terrain points from other points, followed by a procedure that interpolates the selected points into DEM data. The whole procedure takes a long time and substantial computing resources because of the high point density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the Hadoop implementation on multiple nodes achieves a higher performance-cost ratio when the point set is very large.
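
    A Hadoop-streaming style sketch of the gridding step is given below (written for this summary, not the authors' code): the mapper bins each LiDAR point into a DEM cell and the reducer averages the elevations per cell, a simplified stand-in for the interpolation stage; the cell size is an assumed parameter.

        # mapper.py -- reads "x y z" records from stdin, emits "cell_key<TAB>elevation".
        import sys

        CELL = 1.0   # assumed DEM grid spacing in metres

        for line in sys.stdin:
            try:
                x, y, z = map(float, line.split()[:3])
            except ValueError:
                continue                                   # skip malformed records
            key = "%d_%d" % (int(x // CELL), int(y // CELL))
            print("%s\t%f" % (key, z))

        # reducer.py -- Hadoop streaming delivers mapper output sorted by key, so the
        # elevations of each cell arrive contiguously and can be averaged on the fly.
        import sys

        current_key, total, count = None, 0.0, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t")
            if key != current_key and current_key is not None:
                print("%s\t%f" % (current_key, total / count))
                total, count = 0.0, 0
            current_key = key
            total += float(value)
            count += 1
        if current_key is not None:
            print("%s\t%f" % (current_key, total / count))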

  1. Change Detection from differential airborne LiDAR using a weighted Anisotropic Iterative Closest Point Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Kusari, A.; Glennie, C. L.; Oskin, M. E.; Hinojosa-Corona, A.; Borsa, A. A.; Arrowsmith, R.

    2013-12-01

    Differential LiDAR (Light Detection and Ranging) from repeated surveys has recently emerged as an effective tool to measure three-dimensional (3D) change for applications such as quantifying slip and spatially distributed warping associated with earthquake ruptures, and examining the spatial distribution of beach erosion after hurricane impact. Currently, the primary method for determining 3D change is through the use of the iterative closest point (ICP) algorithm and its variants. However, all current studies using ICP have assumed that all LiDAR points in the compared point clouds have uniform accuracy. This assumption is simplistic given that the error for each LiDAR point is variable, and dependent upon highly variable factors such as target range, angle of incidence, and aircraft trajectory accuracy. Therefore, to rigorously determine spatial change, it would be ideal to model the random error for every LiDAR observation in the differential point cloud, and use these error estimates as a priori weights in the ICP algorithm. To test this approach, we implemented a rigorous LiDAR observation error propagation method to generate estimated random error for each point in a LiDAR point cloud, and then determine 3D displacements between two point clouds using an anisotropically weighted ICP algorithm. The algorithm was evaluated by qualitatively and quantitatively comparing post-earthquake slip estimates from the 2010 El Mayor-Cucapah Earthquake between a uniformly weighted and an anisotropically weighted ICP algorithm, using pre-event LiDAR collected in 2006 by Instituto Nacional de Estadística y Geografía (INEGI), and post-event LiDAR collected by The National Center for Airborne Laser Mapping (NCALM).
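
    The effect of per-point weights on the registration step can be sketched with a scalar-weighted rigid fit (weighted cross-covariance plus SVD); the scalar weights, e.g. inverse error variances from the LiDAR error propagation, are a simplified stand-in for the full anisotropic covariances used in the study.

        import numpy as np

        def weighted_rigid_fit(src, dst, weights):
            """Best-fit rotation R and translation t mapping src points onto dst points."""
            w = weights / weights.sum()
            mu_s = (w[:, None] * src).sum(axis=0)
            mu_d = (w[:, None] * dst).sum(axis=0)
            H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # weighted cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                                 # proper rotation (det = +1)
            t = mu_d - R @ mu_s
            return R, t

        # Toy check: recover a small known rotation and shift with uniform weights.
        rng = np.random.default_rng(0)
        src = rng.normal(size=(500, 3))
        theta = 0.05
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        dst = src @ R_true.T + np.array([0.3, -0.1, 0.02])
        R, t = weighted_rigid_fit(src, dst, weights=np.ones(len(src)))
        print(np.round(t, 3))                                  # close to [0.3, -0.1, 0.02]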

  2. Dynamics of G-band bright points derived using two fully automated algorithms

    NASA Astrophysics Data System (ADS)

    Bodnárová, M.; Utz, D.; Rybák, J.; Hanslmeier, A.

    Small-scale magnetic field concentrations (~ 1 kG) in the solar photosphere can be identified in the G-band of the solar spectrum as bright points. Study of the dynamics of G-band bright points (GBPs) can help us answer several questions related to the coronal heating problem. Here a set of 142 G-band speckled images obtained using the Dutch Open Telescope (DOT) on October 19, 2005 is used to compare the identification of GBPs by two different fully automated identification algorithms: an algorithm developed by Utz et al. (2009a, 2009b) and an algorithm developed according to the papers of Berger et al. (1995, 1998). Temporal and spatial tracking of the GBPs identified by both algorithms was performed, resulting in distributions of the lifetimes, sizes and velocities of the GBPs. The obtained results show that both algorithms give very similar values for the lifetime and velocity estimation of the GBPs, but they differ significantly in the estimation of GBP sizes. This difference is caused by the fact that we applied no additional exclusion criteria to the GBPs identified by the algorithm based on the work of Berger et al. (1995, 1998). Therefore we conclude that in a future study of GBP dynamics we will prefer to use Utz's algorithm to perform identification and tracking of the GBPs in G-band images.

  3. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified point cloud of the airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider ALS points representing only the last echo. For these points, RGB, intensity, the normal vectors, their mean values and the standard deviations are provided. Moreover, local and global height variations are taken into account as components of a feature vector. The feature vectors are calculated on the basis of the 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we received two subsets of ALS points, one of which represents points belonging to the road network. After the classification evaluation we achieved an overall classification accuracy of 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
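
    The supervised stage can be sketched with scikit-learn's RandomForestClassifier; the synthetic arrays below stand in for the real per-point feature vectors (RGB, intensity, normal components, height variations) and road/other labels that the paper derives from the ALS data and the 3D Delaunay triangulation.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        features = rng.normal(size=(5000, 12))                             # stand-in per-point features
        labels = (features[:, 0] + 0.5 * features[:, 3] > 0).astype(int)   # stand-in 1 = road, 0 = other

        X_train, X_test, y_train, y_test = train_test_split(features, labels,
                                                            test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=5,
                                     n_jobs=-1, random_state=0)
        clf.fit(X_train, y_train)
        print("overall accuracy: %.3f" % clf.score(X_test, y_test))
        road_mask = clf.predict(features) == 1   # points passed on to the polyline extraction step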

  4. Peak load demand forecasting using two-level discrete wavelet decomposition and neural network algorithm

    NASA Astrophysics Data System (ADS)

    Bunnoon, Pituk; Chalermyanont, Kusumal; Limsakul, Chusak

    2010-02-01

    This paper proposes discrete wavelet transform and neural network algorithms to obtain the monthly peak load demand in mid-term load forecasting. The mother wavelet daubechies2 (db2) is employed to decompose the original signal into high-pass and low-pass filtered signals before a feed-forward back-propagation neural network is used to determine the forecasting results. The historical data records for 1997-2007 of the Electricity Generating Authority of Thailand (EGAT) are used as reference. In this study, historical information on peak load demand (MW), mean temperature (Tmean), consumer price index (CPI), and industrial index (economic: IDI) is used as the feature inputs of the network. The experimental results show that the Mean Absolute Percentage Error (MAPE) is approximately 4.32%. These forecasting results can be used for fuel planning and unit commitment of the power system in the future.
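
    A minimal sketch of the two-level db2 decomposition followed by a small feed-forward network is shown below; a synthetic series stands in for the EGAT data, and the temperature, CPI and industrial-index inputs are omitted, so it illustrates the pipeline rather than the paper's exact configuration.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        months = np.arange(132)                      # stand-in monthly peak-load series
        load = 1000 + 5 * months + 80 * np.sin(2 * np.pi * months / 12) + 20 * np.random.randn(132)

        # Two-level db2 decomposition: approximation cA2 plus detail coefficients cD2, cD1.
        cA2, cD2, cD1 = pywt.wavedec(load, "db2", level=2)

        # Reconstruct the smoothed (approximation-only) signal and train a feed-forward
        # network to map the previous 12 months to the next value.
        approx = pywt.waverec([cA2, np.zeros_like(cD2), np.zeros_like(cD1)], "db2")[:len(load)]
        X = np.array([approx[i:i + 12] for i in range(len(approx) - 12)])
        y = approx[12:]
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)
        print("next-month estimate:", model.predict(approx[-12:].reshape(1, -1))[0])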

  5. Experimental infrared point-source detection using an iterative generalized likelihood ratio test algorithm.

    PubMed

    Nichols, J M; Waterman, J R

    2017-03-01

    This work documents the performance of a recently proposed generalized likelihood ratio test (GLRT) algorithm in detecting thermal point-source targets against a sky background. A calibrated source is placed above the horizon at various ranges and then imaged using a mid-wave infrared camera. The proposed algorithm combines a so-called "shrinkage" estimator of the background covariance matrix and an iterative maximum likelihood estimator of the point-source parameters to produce the GLRT statistic. It is clearly shown that the proposed approach results in better detection performance than either standard energy detection or previous implementations of the GLRT detector.

  6. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points from the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the selected policy, which equalizes algorithm complexity, is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we gain excellent efficiency using MAS and a policy of algorithm complexity equalization.
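
    The largest-inner-circle idea can be illustrated with a brute-force grid search over a Shapely polygon; the Voronoi and medial-axis-simplification methods of the paper are far more efficient, and the toy polygon below is purely illustrative (islands/holes are ignored).

        import numpy as np
        from shapely.geometry import Point, Polygon

        def deepest_point_estimate(polygon, resolution=200):
            """Approximate the centre and radius of the largest inner circle."""
            minx, miny, maxx, maxy = polygon.bounds
            best_point, best_radius = None, -1.0
            for x in np.linspace(minx, maxx, resolution):
                for y in np.linspace(miny, maxy, resolution):
                    p = Point(x, y)
                    if polygon.contains(p):
                        r = polygon.exterior.distance(p)   # distance to the shoreline
                        if r > best_radius:
                            best_point, best_radius = (x, y), r
            return best_point, best_radius

        lake = Polygon([(0, 0), (6, 0), (6, 2), (2, 2), (2, 5), (0, 5)])  # toy L-shaped "lake"
        print(deepest_point_estimate(lake, resolution=120))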

  7. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration

    NASA Astrophysics Data System (ADS)

    Chen, Peijun; Huang, Jianguo; Zhang, Xiaoqun

    2013-02-01

    Recently, the minimization of a sum of two convex functions has received considerable interest in a variational image restoration model. In this paper, we propose a general algorithmic framework for solving a separable convex minimization problem from the point of view of fixed point algorithms based on proximity operators (Moreau 1962 C. R. Acad. Sci., Paris I 255 2897-99). Motivated by proximal forward-backward splitting proposed in Combettes and Wajs (2005 Multiscale Model. Simul. 4 1168-200) and fixed point algorithms based on the proximity operator (FP2O) for image denoising (Micchelli et al 2011 Inverse Problems 27 45009-38), we design a primal-dual fixed point algorithm based on the proximity operator (PDFP2Oκ for κ ∈ [0, 1)) and obtain a scheme with a closed-form solution for each iteration. Using the firmly nonexpansive properties of the proximity operator and with the help of a special norm over a product space, we achieve the convergence of the proposed PDFP2Oκ algorithm. Moreover, under some stronger assumptions, we can prove the global linear convergence of the proposed algorithm. We also give the connection of the proposed algorithm with other existing first-order methods. Finally, we illustrate the efficiency of PDFP2Oκ through some numerical examples on image super-resolution, computerized tomographic reconstruction and parallel magnetic resonance imaging. Generally speaking, our method PDFP2O (κ = 0) is comparable with other state-of-the-art methods in numerical performance, while it has some advantages in parameter selection in real applications.

  8. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
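
    The paper analyses existing modified Cholesky codes rather than introducing new ones; purely as a generic illustration of the idea, the sketch below runs a dense Cholesky factorization in which tiny or negative pivots are replaced by a very large value so that the factorization always completes, one common modification strategy in interior-point codes (details vary between implementations).

```python
import numpy as np

def modified_cholesky(A, tiny=1e-12, big=1e64):
    """Dense Cholesky L @ L.T ~= A that never breaks down: pivots smaller than
    `tiny` times the largest diagonal entry are replaced by `big`, which in effect
    neutralises the corresponding step component (illustrative strategy only)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    scale = max(np.max(np.diag(A)), 1.0)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        if d <= tiny * scale:                    # tiny or negative pivot: modify it
            d = big
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# nearly rank-deficient matrix, as arises close to an interior-point solution
M = np.array([[4.0, 2.0, 0.0], [2.0, 1.0 + 1e-14, 0.0], [0.0, 0.0, 9.0]])
print(np.diag(modified_cholesky(M)))             # the second pivot has been modified
```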

  9. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    NASA Astrophysics Data System (ADS)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of these systems, such as their wide dynamic range of numeric values, make fixed-point algorithms inadequate. At the same time, the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and using an IP module in a space-qualified FPGA/ASIC is not viable because of the low number of logic cells available in these devices, so a viable alternative must be found. For these reasons, this paper presents a VHDL Floating Point Module. This proposal allows floating-point algorithms to be designed and executed with an occupancy low enough to be implemented in FPGAs/ASICs qualified for space environments.

  10. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
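
    The exact estimators of the paper are not reproduced here; the sketch below only conveys the flavour of a sequential (CUSUM-style) test in which the unknown post-damage mean is re-estimated by maximum likelihood from a sliding window of recent samples, under an assumed Gaussian feature model. The window length, threshold, and feature model are illustrative assumptions.

```python
import numpy as np

def sequential_detect(x, mu0, sigma, threshold=20.0, window=20):
    """CUSUM-style sequential test with an unknown post-change mean: the post-change
    mean is re-estimated (MLE) from a sliding window of recent samples, and a change
    is declared when the accumulated log-likelihood ratio exceeds `threshold`."""
    S = 0.0
    for t, xt in enumerate(x):
        mu1 = np.mean(x[max(0, t - window + 1):t + 1])   # crude running MLE of post-change mean
        if abs(mu1 - mu0) < 1e-9:
            mu1 = mu0 + sigma                            # avoid a degenerate likelihood ratio
        llr = ((xt - mu0) ** 2 - (xt - mu1) ** 2) / (2.0 * sigma ** 2)
        S = max(0.0, S + llr)
        if S > threshold:
            return t                                     # detection time index
    return None

rng = np.random.default_rng(2)
pre = rng.normal(0.0, 1.0, 200)                          # undamaged feature sequence
post = rng.normal(1.5, 1.0, 100)                         # damage shifts the feature mean
print(sequential_detect(np.concatenate([pre, post]), mu0=0.0, sigma=1.0))
```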

  11. Photoacoustic tomography from weak and noisy signals by using a pulse decomposition algorithm in the time-domain.

    PubMed

    Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun

    2015-10-19

    Photoacoustic tomography is a promising and rapidly developing biomedical imaging methodology. It faces an increasingly urgent problem: reconstructing images from weak and noisy photoacoustic signals, which would both extend the imaging depth and reduce the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with a low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulations and experiments were conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with well preserved pattern details. The proposed method demonstrates the potential of photoacoustic tomography in expanding applications.
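
    As a rough time-domain analogue of the pulse-decomposition idea (not the authors' exact formulation), the sketch below represents a noisy trace as a non-negative weighted sum of time-shifted copies of an assumed pulse shape and recovers the weights with non-negative least squares; the weights then play the role of the absorption-related image values.

```python
import numpy as np
from scipy.optimize import nnls

def pulse(t, t0, width=0.05):
    """Illustrative N-shaped photoacoustic pulse centred at t0."""
    u = (t - t0) / width
    return -u * np.exp(-u ** 2)

t = np.linspace(0.0, 1.0, 400)
centres = np.linspace(0.05, 0.95, 60)                    # candidate pulse arrival times
D = np.stack([pulse(t, c) for c in centres], axis=1)     # dictionary of shifted pulses

# synthetic trace: two absorbers plus additive noise
rng = np.random.default_rng(3)
clean = 1.0 * pulse(t, 0.3) + 0.6 * pulse(t, 0.7)
noisy = clean + 0.2 * rng.normal(size=t.size)

weights, _ = nnls(D, noisy)                              # non-negative weight per candidate time
print(centres[weights > 0.2 * weights.max()])            # estimated arrival times
```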

  12. Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction

    NASA Technical Reports Server (NTRS)

    Velusamy, T.; Marsh, K. A.; Ware, B.

    2005-01-01

    TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.

  13. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become a standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases have a high computational cost, which rules out some of its applications. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics, due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
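
    The sketch below illustrates only the matching phase discussed here: brute-force closest-point search under the Euclidean, Manhattan, and Chebyshev metrics, so their behaviour can be compared. It is a toy NumPy version, not the authors' optimized implementation.

```python
import numpy as np

def closest_points(src, dst, metric="euclidean"):
    """For each point in src (N x d) return the index of its closest point in dst (M x d)
    under the chosen point-to-point distance metric (brute force)."""
    diff = src[:, None, :] - dst[None, :, :]             # N x M x d
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=2))
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=2)
    elif metric == "chebyshev":
        d = np.abs(diff).max(axis=2)
    else:
        raise ValueError(metric)
    return d.argmin(axis=1)

rng = np.random.default_rng(4)
dst = rng.normal(size=(500, 3))                          # model point set
src = dst[rng.choice(500, 200, replace=False)] + 0.01 * rng.normal(size=(200, 3))
for m in ("euclidean", "manhattan", "chebyshev"):
    idx = closest_points(src, dst, m)
    print(m, float(np.mean(np.linalg.norm(src - dst[idx], axis=1))))
```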

  14. A superlinear infeasible-interior-point algorithm for monotone complementarity problems

    SciTech Connect

    Wright, S.; Ralph, D.

    1996-11-01

    We use the globally convergent framework proposed by Kojima, Noma, and Yoshise to construct an infeasible-interior-point algorithm for monotone nonlinear complementarity problems. Superlinear convergence is attained when the solution is nondegenerate and also when the problem is linear. Numerical experiments confirm the efficacy of the proposed approach.

  15. Redistricting in a GIS environment: An optimisation algorithm using switching-points

    NASA Astrophysics Data System (ADS)

    Macmillan, W.

    This paper gives details of an algorithm whose purpose is to partition a set of populated zones into contiguous regions in order to minimise the difference in population size between the regions. The algorithm, known as SARA, uses simulated annealing and a new method for checking the contiguity of regions. It is the latter which allows the algorithm to be used to tackle large problems with modest computing resources. The paper describes the new contiguity checking procedure, based on the concept of switching points, and compares it with the connectivity method developed by Openshaw and Rao [1]. It goes on to give a detailed description of the algorithm, then concludes with a brief discussion of possible extensions to accommodate additional zone-design criteria.

  16. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the Point Spread Function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. A quantitative analysis shows that the new framework yields at least a 20% improvement in output SCR values in comparison with the Laplacian of Gaussian (LoG) algorithm.
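
    To make the baseline concrete, the sketch below runs a Laplacian-of-Gaussian filter over a synthetic IR frame and thresholds the response to flag small bright targets; it illustrates the LoG-style detector used for comparison above, not the proposed PSF-matched framework, and the scale and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label, center_of_mass

rng = np.random.default_rng(5)
frame = rng.normal(20.0, 2.0, size=(128, 128))           # cluttered IR background
frame[40:43, 60:63] += 15.0                              # two small bright targets
frame[90:93, 30:33] += 12.0

# The negative LoG response is large for small bright blobs whose size matches sigma.
response = -gaussian_laplace(frame, sigma=1.5)
detections = response > response.mean() + 6.0 * response.std()
labels, n = label(detections)
centres = center_of_mass(detections, labels, range(1, n + 1))
print(n, "targets near", [tuple(int(round(c)) for c in xy) for xy in centres])
```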

  17. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the Geosciences, with clear applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters interact with scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built in a MATLAB(c) environment a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of the viewing point on Iterative Closest Point (ICP) alignment, and also on a deformation-tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments

  18. Optimizing the Point-In-Box Search Algorithm for the Cray Y-MP(TM) Supercomputer

    SciTech Connect

    Attaway, S.W.; Davis, M.E.; Heinstein, M.W.; Swegle, J.S.

    1998-12-23

    Determining the subset of points (particles) in a problem domain that are contained within certain spatial regions of interest can be one of the most time-consuming parts of some computer simulations. Examples where this 'point-in-box' search can dominate the computation time include (1) finite element contact problems; (2) molecular dynamics simulations; and (3) interactions between particles in numerical methods, such as discrete particle methods or smooth particle hydrodynamics. This paper describes methods to optimize a point-in-box search algorithm developed by Swegle that make optimal use of the architectural features of the Cray Y-MP Supercomputer.
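
    The Cray-specific optimizations are not reproduced here; the sketch below shows only the basic point-in-box query the paper is concerned with, vectorized with NumPy, which is the kind of kernel such architecture-specific tuning starts from.

```python
import numpy as np

def points_in_box(points, box_min, box_max):
    """Indices of the points (N x 3) lying inside the axis-aligned box given by its
    lower and upper corners."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return np.nonzero(inside)[0]

rng = np.random.default_rng(6)
pts = rng.uniform(0.0, 10.0, size=(100000, 3))           # particle positions
idx = points_in_box(pts, np.array([2.0, 2.0, 2.0]), np.array([3.0, 3.0, 3.0]))
print(len(idx), "points inside the box")
```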

  19. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  20. An affine point-set and line invariant algorithm for photo-identification of gray whales

    NASA Astrophysics Data System (ADS)

    Chandan, Chandan; Kehtarnavaz, Nasser; Hillman, Gilbert; Wursig, Bernd

    2004-05-01

    This paper presents an affine point-set and line invariant algorithm within a statistical framework, and its application to photo-identification of gray whales (Eschrichtius robustus). White patches (blotches) appearing on a gray whale's left and right flukes (the flattened, broad, paddle-like tail) constitute unique identifying features and have been used here for individual identification. The fluke area is extracted from a fluke image via the live-wire edge detection algorithm, followed by optimal thresholding of the fluke area to obtain the blotches. Affine point-set and line invariants of the blotch points are extracted based on three reference points, namely the left and right tips and the middle notch-like point of the fluke. A set of statistics is derived from the invariant values and used as the feature vector representing a database image. The database images are then ranked according to the degree of similarity between the query and database feature vectors. The results show that the use of this algorithm leads to a reduction in the amount of manual search normally done by marine biologists.

  1. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    PubMed Central

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-01-01

    In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177
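
    The GA-driven parameter search is not shown; the sketch below computes the core multivariate EWMA statistic that the method tunes, for a stream of feature vectors, and flags a change when the statistic crosses a control limit. The smoothing parameter and limit are exactly the kind of quantities the paper optimizes, and the values used here are arbitrary assumptions.

```python
import numpy as np

def mewma_detect(X, lam=0.2, limit=20.0, baseline=50):
    """Multivariate EWMA change-point statistic for a feature stream X (T x d).
    Returns the first index whose T^2-style statistic exceeds `limit`, else None."""
    d = X.shape[1]
    mu = X[:baseline].mean(axis=0)                        # in-control mean and covariance
    Sigma = np.cov(X[:baseline], rowvar=False)
    z = np.zeros(d)
    for t in range(baseline, X.shape[0]):
        z = lam * (X[t] - mu) + (1.0 - lam) * z
        k = t - baseline + 1
        cz = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * k)) * Sigma
        T2 = z @ np.linalg.solve(cz, z)
        if T2 > limit:
            return t
    return None

rng = np.random.default_rng(7)
stream = rng.normal(0.0, 1.0, size=(300, 3))              # stationary activity features
stream[150:] += np.array([1.5, -1.0, 1.0])                # transition to a new activity
print(mewma_detect(stream))                               # change flagged shortly after index 150
```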

  2. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information

    PubMed Central

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  3. The MATPHOT Algorithm for Accurate and Precise Stellar Photometry and Astrometry Using Discrete Point Spread Functions

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2004-12-01

    I describe the key features of my MATPHOT algorithm for accurate and precise stellar photometry and astrometry using discrete Point Spread Functions. A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. The MATPHOT algorithm shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function and position partial derivatives are computed using a five-point numerical differentiation formula. The MATPHOT algorithm achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled 2, 3, or more times more finely than the observational data. I have written a C-language computer program called MPD which is based on the current implementation of the MATPHOT algorithm; all source code and documentation for MPD and support software is freely available at the following website: http://www.noao.edu/staff/mighell/matphot . I demonstrate the use of MPD and present a detailed MATPHOT analysis of simulated James Webb Space Telescope observations which demonstrates that millipixel relative astrometry and millimag photometric accuracy is achievable with very complicated space-based discrete PSFs. This work was supported by a grant from the National Aeronautics and Space Administration (NASA), Interagency Order No. S-13811-G, which was awarded by the Applied Information Systems Research (AISR) Program of NASA's Science Mission Directorate.

  4. A rapid and robust iterative closest point algorithm for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Barbiere, Joseph; Hanley, Joseph

    2008-03-01

    Our work presents a rapid and robust process that can analytically evaluate and correct patient setup error for head and neck radiotherapy by comparing orthogonal megavoltage portal images (PIs) with digitally reconstructed radiographs (DRRs). For robust data, Photoshop is used to interactively segment the images and to register reference contours to the transformed PI. MatLab is used for matrix computations and image analysis. The closest point distance (CPD) from each PI point to a DRR point forms a set of homologous points. The translation that aligns the PI to the DRR is equal to the difference in centers of mass. The original PI points are transformed and the process repeated with an Iterative Closest Point algorithm until the change in the transformation becomes negligible. Using a 3.00 GHz processor, the calculation of the 2500x1750 CPD matrix takes about 150 sec per iteration. Standard down-sampling to about 1000 DRR and 250 PI points significantly reduces that time. We introduce a local neighborhood matrix consisting of a small subset of the DRR points in the vicinity of each PI point to further reduce the CPD matrix size. Our results demonstrate the effects of down-sampling on accuracy. For validation, detailed analytical results are displayed as a histogram.

  5. Fixed-point analysis and realization of a blind beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Fu, Dengwei; Willson, Alan N.

    1999-11-01

    We present the fixed-point analysis and realization of a blind beamforming algorithm. This maximum-power beamforming algorithm consists of the computation of a correlation matrix and its dominant eigenvector, and we propose that the latter be accomplished by the power method. After analyzing the numerical stability of the power method, we derive a division-free form of the algorithm. Based on a block-Toeplitz assumption, we design an FIR-filter-based system to realize both the correlation computation and the power method. Our ring processor, which is optimized to implement digital filters, is used as the core of the architecture. A special technique for dynamically switching filter inputs is shown to double the system throughput. Finally, we discuss the issue of hardware/software hybrid realization.
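
    The fixed-point and hardware aspects are specific to the paper; the sketch below only shows the floating-point reference computation it starts from: estimating the dominant eigenvector of a sample correlation matrix by the power method, which yields maximum-power beamforming weights. The array model is an illustrative assumption.

```python
import numpy as np

def power_method(R, n_iter=100):
    """Dominant eigenvector of a Hermitian correlation matrix R by power iteration."""
    v = np.ones(R.shape[0], dtype=complex)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = R @ v
        v /= np.linalg.norm(v)    # the division-free variant in the paper avoids this step
    return v

# toy array data: one plane wave plus noise on an 8-element uniform linear array
rng = np.random.default_rng(8)
steer = np.exp(1j * np.pi * np.arange(8) * np.sin(0.3))
snapshots = (np.outer(steer, rng.normal(size=200))
             + 0.1 * (rng.normal(size=(8, 200)) + 1j * rng.normal(size=(8, 200))))
R = snapshots @ snapshots.conj().T / 200.0                # sample correlation matrix
w = power_method(R)                                       # maximum-power beamforming weights
print(abs(np.vdot(w, steer)) / np.linalg.norm(steer))     # ~1: w aligns with the source
```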

  6. An Efficient Implementation of the Sign LMS Algorithm Using Block Floating Point Format

    NASA Astrophysics Data System (ADS)

    Chakraborty, Mrityunjoy; Shaik, Rafiahamed; Lee, Moon Ho

    2007-12-01

    An efficient scheme is presented for implementing the sign LMS algorithm in block floating point format, which permits processing of data over a wide dynamic range at a processor complexity and cost as low as that of a fixed point processor. The proposed scheme adopts appropriate formats for representing the filter coefficients and the data. It also employs a scaled representation for the step-size that has a time-varying mantissa and also a time-varying exponent. Using these and an upper bound on the step-size mantissa, update relations for the filter weight mantissas and exponent are developed, taking care so that neither overflow occurs, nor are quantities which are already very small multiplied directly. Separate update relations are also worked out for the step size mantissa. The proposed scheme employs mostly fixed-point-based operations, and thus achieves considerable speedup over its floating-point-based counterpart.
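
    The block floating-point bookkeeping is the paper's contribution and is not reproduced; the plain floating-point sign-LMS recursion it implements is sketched below for a simple system-identification example. The step size, filter length, and unknown system are illustrative assumptions.

```python
import numpy as np

def sign_lms(x, d, n_taps=8, mu=0.002):
    """Sign LMS adaptive filter: w <- w + mu * sign(e) * x_vec (only the error sign is used)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ x_vec
        w += mu * np.sign(e) * x_vec
    return w

rng = np.random.default_rng(9)
x = rng.normal(size=5000)                                 # input signal
h_true = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.05, -0.02, 0.01])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))  # desired signal
print(np.round(sign_lms(x, d), 2))                        # approaches h_true
```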

  7. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Sithole, George; Vosselman, George

    Over the past years, several filters have been developed to extract bare-Earth points from point clouds. ISPRS Working Group III/3 conducted a test to determine the performance of these filters and the influence of point density on it, and to identify directions for future research. Twelve selected datasets were processed by eight participants. In this paper, the test results are presented. The paper describes the characteristics of the provided datasets and the filter approaches used. The filter performance is analysed both qualitatively and quantitatively. All filters perform well in smooth rural landscapes, but all produce errors in complex urban areas and rough terrain with vegetation. In general, filters that estimate local surfaces are found to perform best. The influence of point density could not be determined well in this experiment. Future research should be directed towards the use of additional data sources, segment-based classification, and self-diagnosis of filter algorithms.

  8. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected, using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage with low false alarm rates, particularly when it used multidimensional damage-sensitive features and the post-damage feature distribution was known. For the unknown-distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes the post-damage feature distribution is known. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  9. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.

  10. The expectation maximization algorithm applied to the search of point sources of astroparticles

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan Antonio; Hernández-Rey, Juan José

    2008-03-01

    The expectation-maximization algorithm, widely employed in cluster and pattern recognition analysis, is proposed in this article for the search for point sources of astroparticles. We show how to adapt the method to the particular case in which a faint source signal over a large background is expected. In particular, the method is applied to the point source search in neutrino telescopes. A generic neutrino telescope with an area of 1 km2 located in the Mediterranean Sea has been simulated. Results in terms of the minimum detectable number of events are given, and the method compares favorably with a classical method based on binning.
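
    As a stripped-down illustration of applying EM to a faint source over a large background (not the telescope-specific implementation), the sketch below fits a two-component mixture, an isotropic 2-D Gaussian "source" plus a uniform background over the field of view, to simulated event positions. The initialisation rule and event counts are assumptions.

```python
import numpy as np

def em_point_source(events, fov_area, n_iter=200):
    """EM for a two-component mixture: an isotropic 2-D Gaussian 'source' plus a
    uniform background over the field of view. Returns (position, width, source fraction)."""
    # crude initialisation: start at the event with the most neighbours within 1 unit
    d2 = ((events[:, None, :] - events[None, :, :]) ** 2).sum(axis=2)
    mu = events[(d2 < 1.0).sum(axis=1).argmax()].astype(float)
    sigma, frac = 1.0, 0.1
    for _ in range(n_iter):
        r2 = ((events - mu) ** 2).sum(axis=1)
        p_src = frac * np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
        p_bkg = (1.0 - frac) / fov_area
        w = p_src / (p_src + p_bkg)                      # E-step: source membership weights
        frac = w.mean()                                  # M-step
        mu = (w[:, None] * events).sum(axis=0) / w.sum()
        r2 = ((events - mu) ** 2).sum(axis=1)
        sigma = np.sqrt((w * r2).sum() / (2 * w.sum()))
    return mu, sigma, frac

rng = np.random.default_rng(10)
bkg = rng.uniform(-10, 10, size=(2000, 2))               # background events, 20 x 20 deg field
src = rng.normal(loc=[2.0, -3.0], scale=0.3, size=(60, 2))  # faint point source
events = np.vstack([bkg, src])
mu, sigma, frac = em_point_source(events, fov_area=400.0)
print(np.round(mu, 2), round(float(sigma), 2), round(float(frac), 3))
```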

  11. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.

  12. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  13. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  14. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.

  15. An Error Analysis of the Phased Array Antenna Pointing Algorithm for STARS Flight Demonstration No. 2

    NASA Technical Reports Server (NTRS)

    Carney, Michael P.; Simpson, James C.

    2005-01-01

    STARS is a multicenter NASA project to determine the feasibility of using space-based assets, such as the Tracking and Data Relay Satellite System (TDRSS) and Global Positioning System (GPS), to increase flexibility (e.g. increase the number of possible launch locations and manage simultaneous operations) and to reduce operational costs by decreasing the need for ground-based range assets and infrastructure. The STARS project includes two major systems: the Range Safety and Range User systems. The latter system uses broadband communications (125 kbps to 500 kbps) for voice, video, and vehicle/payload data. Flight Demonstration #1 revealed the need to increase the data rate of the Range User system. During Flight Demo #2, a Ku-band antenna will generate a higher data rate and will be designed with an embedded pointing algorithm to guarantee that the antenna is pointed directly at TDRS. This algorithm will utilize the onboard position and attitude data to point the antenna to TDRS within a 2-degree full-angle beamwidth. This report investigates how errors in aircraft position and attitude, along with errors in satellite position, propagate into the overall pointing vector.

  16. Floating-Point Units and Algorithms for field-programmable gate arrays

    SciTech Connect

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units, for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix-vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library, or something similar to the FFTW packages (without the flexibility), for FPGAs. Results from this work have been published multiple times, and we are working on a publication discussing the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to offer somewhat higher performance. Our algorithm implementations are written in VHDL, and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and

  17. Ferromagnetic Mass Localization in Check Point Configuration Using a Levenberg Marquardt Algorithm

    PubMed Central

    Alimi, Roger; Geron, Nir; Weiss, Eyal; Ram-Cohen, Tsuriel

    2009-01-01

    A detection and tracking algorithm for ferromagnetic objects based on a two-stage Levenberg-Marquardt Algorithm (LMA) is presented. The procedure is applied to localization and magnetic moment estimation of ferromagnetic objects moving in the vicinity of an array of two to four 3-axis magnetometers arranged in a check point configuration. The algorithm's first stage provides an estimation of the target trajectory and moment, which is further refined in a second iteration where only the position vector is taken as unknown. The whole procedure is fast enough to provide satisfactory results within a few seconds after the target has been detected. Tests were conducted at Soreq NRC assessing various check point scenarios and targets. The results obtained from this experiment show good localization performance and good tolerance of a “noisy” environment. Small targets can be localized with good accuracy using either a vertical “doorway” configuration of two to four sensors or a ground-level configuration of two to four sensors. The calculated trajectory was not affected by nearby magnetic interference such as moving vehicles or a combat soldier inspecting the gateway. PMID:22291540
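
    The two-stage procedure and sensor geometry are specific to the paper; as a generic illustration, the sketch below fits the position and moment of a single magnetic dipole to multi-sensor field readings with SciPy's Levenberg-Marquardt solver. The dipole model (up to a constant factor), sensor layout, and noise level are assumptions for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def dipole_field(pos, moment, sensors):
    """Magnetic dipole field (up to a constant factor) at each sensor location."""
    r = sensors - pos                                    # n_sensors x 3
    rn = np.linalg.norm(r, axis=1, keepdims=True)
    return 3.0 * r * (r @ moment)[:, None] / rn ** 5 - moment / rn ** 3

def residuals(params, sensors, measured):
    return (dipole_field(params[:3], params[3:], sensors) - measured).ravel()

# four 3-axis magnetometers arranged as a gateway (illustrative layout)
sensors = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 2], [1, 0, 2]], dtype=float)
true_pos, true_m = np.array([0.5, 1.2, 1.0]), np.array([0.2, 0.1, 0.4])
rng = np.random.default_rng(11)
measured = dipole_field(true_pos, true_m, sensors) + 1e-4 * rng.normal(size=(4, 3))

fit = least_squares(residuals, x0=np.array([0.5, 0.5, 1.0, 0.1, 0.1, 0.1]),
                    args=(sensors, measured), method="lm")
print(np.round(fit.x[:3], 2), np.round(fit.x[3:], 2))    # recovered position and moment
```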

  18. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  19. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution of the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix, with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited to computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  20. A new method for automatically measuring Vickers hardness based on region-point detection algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Yong; Shan, Yuekang; Ji, Yu; Zhang, Shibo

    2008-12-01

    This paper presents a new method, called the Region-Point detection algorithm, for automatically analyzing digital images of Vickers hardness indentations. The method effectively overcomes errors in vertex detection caused by curved indentation edges. In the Region-Detection step, to obtain the four small regions where the four vertexes are located, the Sobel operator is applied to extract the edge points and a thick-line Hough transform is used to fit the edge lines; the four regions are then selected according to the four intersection points of the thick lines. In the Point-Detection step, to get each vertex's accurate position within its small region, the thick-line Hough transform is used again to select useful edge points and the least-squares method is used to accurately fit the lines. The intersection point of the two lines in each region is a vertex of the indentation. The length of the diagonal and the Vickers hardness can then be calculated. Experiments show that the measured values agree well with the standard values.

  1. TU-F-18A-04: Use of An Image-Based Material-Decomposition Algorithm for Multi-Energy CT to Determine Basis Material Densities

    SciTech Connect

    Li, Z; Leng, S; Yu, L; McCollough, C

    2014-06-15

    Purpose: Published methods for image-based material decomposition with multi-energy CT images have required the assumption of volume conservation or accurate knowledge of the x-ray spectra and detector response. The purpose of this work was to develop an image-based material-decomposition algorithm that can overcome these limitations. Methods: An image-based material decomposition algorithm was developed that requires only mass conservation (rather than volume conservation). With this method, using multi-energy CT measurements made with n=4 energy bins, the mass density of each basis material and of the mixture can be determined without knowledge of the tube spectra and detector response. A digital phantom containing 12 samples of mixtures of water, calcium, iron, and iodine was used in the simulation (Siemens DRASIM). The calibration was performed using pure materials at each energy bin. The accuracy of the technique was evaluated on noise-free and noisy data under the assumption of an ideal photon-counting detector. Results: Basis material densities can be estimated accurately either by theoretical calculation or by calibration with known pure materials. The calibration approach requires no prior information about the spectra and detector response. Regression analysis of theoretical versus estimated values shows excellent agreement for both noise-free and noisy data. For the calibration approach, the R-square values are 0.9960+/−0.0025 and 0.9476+/−0.0363 for noise-free and noisy data, respectively. Conclusion: From multi-energy CT images with n=4 energy bins, the developed image-based material decomposition method accurately estimated the densities of 4 basis materials (3 without a k-edge and 1 with a k-edge in the range of the simulated energy bins) even without any prior information about the spectra and detector response. This method is applicable to mixtures of solutions and dissolvable materials, where volume conservation assumptions do not apply. CHM receives

  2. Automatic Detection and Extraction Algorithm of Inter-Granular Bright Points

    NASA Astrophysics Data System (ADS)

    Feng, Song; Ji, Kai-fan; Deng, Hui; Wang, Feng; Fu, Xiao-dong

    2012-12-01

    Inter-granular Bright Points (igBPs) are small-scale objects in the solar photosphere that can be seen within dark inter-granular lanes. We present a new algorithm to automatically detect and extract igBPs. The algorithm employs a Laplacian and Morphological Dilation (LMD) technique and involves three basic processing steps: (1) obtaining candidate "seed" regions with the Laplacian; (2) determining the boundary and size of the igBPs by morphological dilation; (3) discarding brighter granules by a probability criterion. To validate our algorithm, we used observations from the Dutch Open Telescope (DOT) collected on April 12, 2007. They contain 180 high-resolution images, each with an 85 × 68 arcsec^{2} field of view (FOV). Two important results are obtained: first, the identification rate of igBPs reaches 95%, higher than previous results; second, the diameter distribution is 220 ± 25 km, which is fully consistent with previously published data. We conclude that the presented algorithm can detect and extract igBPs automatically and effectively.
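
    The DOT-specific thresholds are not given in the record; the sketch below follows the first two steps described above in a generic way (Laplacian filtering for candidate seeds, then morphological dilation to recover each candidate's extent) and omits the probability criterion of step (3). All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def candidate_bright_points(img, seed_sigma=1.0, n_dilations=2):
    """Steps (1)-(2) of the pipeline described above: Laplacian 'seed' detection followed
    by morphological dilation to recover each candidate bright point's extent.
    (Step (3), the probability criterion that rejects granules, is omitted here.)"""
    lap = -ndi.gaussian_laplace(img, sigma=seed_sigma)    # small bright features respond strongly
    seeds = lap > lap.mean() + 4.0 * lap.std()            # candidate "seed" pixels
    grown = ndi.binary_dilation(seeds, iterations=n_dilations)
    labels, n = ndi.label(grown)
    return labels, n

rng = np.random.default_rng(12)
frame = ndi.gaussian_filter(rng.normal(size=(256, 256)), 4)   # granulation-like background
frame[100:102, 50:52] += 1.0                                  # a small synthetic bright point
labels, n = candidate_bright_points(frame)
print(n, "candidate bright point(s)")
```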

  3. MO-FG-204-03: Using Edge-Preserving Algorithm for Significantly Improved Image-Domain Material Decomposition in Dual Energy CT

    SciTech Connect

    Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L

    2015-06-15

    Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework for image-domain material decomposition that incorporates edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising in the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the noise boosted by material decomposition using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data, comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurements and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved by HYPR-NLM. Conclusion: HYPR

  4. [Determination of Virtual Surgery Mass Point Spring Model Parameters Based on Genetic Algorithms].

    PubMed

    Chen, Ying; Hu, Xuyi; Zhu, Qiguang

    2015-12-01

    The mass point-spring model is one of the most commonly used models in virtual surgery. However, its parameters have no clear physical meaning, and it is hard to set them conveniently. We therefore proposed a method based on a genetic algorithm to determine the mass-spring model parameters. Computer-aided tomography (CAT) data were used to determine the mass value of each particle, and the stiffness and damping coefficients were obtained by the genetic algorithm. We used the difference between the reference deformation and the virtual deformation as the fitness function to obtain an approximately optimal solution for the model parameters. Experimental results showed that this method could obtain an approximately optimal set of spring parameters at low cost and could accurately reproduce the behavior of the actual deformation model.
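
    The CT-derived masses and the full surgical model are not available here; as a toy stand-in, the sketch below uses a minimal genetic algorithm to recover the stiffness and damping of a single damped mass-spring element so that its simulated displacement matches a reference trajectory, mirroring the stated fitness idea (difference between reference and virtual deformation). Population size, mutation scales, and bounds are assumptions.

```python
import numpy as np

def simulate(k, c, m=1.0, x0=1.0, dt=0.01, steps=300):
    """Displacement trajectory of a damped mass-spring element (semi-implicit Euler)."""
    x, v, traj = x0, 0.0, []
    for _ in range(steps):
        v += (-k * x - c * v) / m * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

def ga_fit(reference, pop_size=40, generations=60, seed=13):
    """Minimal GA over (k, c): truncation selection plus Gaussian mutation; the fitness
    is the negative squared difference between simulated and reference deformation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([1.0, 0.1]), np.array([50.0, 5.0])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    def fitness(p):
        return -np.sum((simulate(p[0], p[1]) - reference) ** 2)
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # keep the best half
        children = parents[rng.integers(0, len(parents), pop_size // 2)]
        children = np.clip(children + rng.normal(scale=[0.5, 0.05], size=children.shape), lo, hi)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

reference = simulate(k=20.0, c=1.5)          # "reference deformation"
print(np.round(ga_fit(reference), 2))        # should approach (20.0, 1.5)
```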

  5. A spectral collocation algorithm for two-point boundary value problem in fiber Raman amplifier equations

    NASA Astrophysics Data System (ADS)

    Tarman, Hakan I.; Berberoğlu, Halil

    2009-04-01

    A novel algorithm implementing Chebyshev spectral collocation (pseudospectral) method in combination with Newton's method is proposed for the nonlinear two-point boundary value problem (BVP) arising in solving propagation equations in fiber Raman amplifier. Moreover, an algorithm to train the known linear solution for use as a starting solution for the Newton iteration is proposed and successfully implemented. The exponential accuracy obtained by the proposed Chebyshev pseudospectral method is demonstrated on a case of the Raman propagation equations with strong nonlinearities. This is in contrast to algebraic accuracy obtained by typical solvers used in the literature. The resolving power and the efficiency of the underlying Chebyshev grid are demonstrated in comparison to a known BVP solver.

  6. [Automatic segmentation of clustered breast cancer cells based on modified watershed algorithm and concavity points searching].

    PubMed

    Tong, Zhen; Pu, Lixin; Dong, Fangjie

    2013-08-01

    As a common malignant tumor, breast cancer seriously affects women's physical and psychological health and even threatens their lives, and its incidence has begun to rise gradually in some parts of the world. Immunohistochemistry is a common pathological technique that assists diagnosis and plays an important role in the diagnosis of breast cancer. Usually, pathologists isolate positive cells from stained specimens processed with the immunohistochemical technique and calculate the ratio of positive cells, which is a core indicator in breast cancer diagnosis. In this paper, we present a new algorithm, based on a modified watershed algorithm and concavity-point searching, to identify the positive cells, segment the clustered cells automatically, and then realize automatic counting. Comparison of our experimental results with those of other methods shows that our method can exactly segment the clustered cells without losing any geometrical cell features and gives the exact number of separated cells.

  7. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.

  8. Comparison of dermatoscopic diagnostic algorithms based on calculation: The ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist and the CASH algorithm in dermatoscopic evaluation of melanocytic lesions.

    PubMed

    Unlu, Ezgi; Akay, Bengu N; Erdem, Cengizhan

    2014-07-01

    Dermatoscopic analysis of melanocytic lesions using the CASH algorithm has rarely been described in the literature. The purpose of this study was to compare the sensitivity, specificity, and diagnostic accuracy rates of the ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist, and the CASH algorithm in the diagnosis and dermatoscopic evaluation of melanocytic lesions on the hairy skin. One hundred and fifteen melanocytic lesions of 115 patients were examined retrospectively using dermatoscopic images and compared with the histopathologic diagnosis. Four dermatoscopic algorithms were carried out for all lesions. The ABCD rule of dermatoscopy showed sensitivity of 91.6%, specificity of 60.4%, and diagnostic accuracy of 66.9%. The seven-point checklist showed sensitivity, specificity, and diagnostic accuracy of 87.5, 65.9, and 70.4%, respectively; the three-point checklist 79.1, 62.6, 66%; and the CASH algorithm 91.6, 64.8, and 70.4%, respectively. To our knowledge, this is the first study that compares the sensitivity, specificity and diagnostic accuracy of the ABCD rule of dermatoscopy, the three-point checklist, the seven-point checklist, and the CASH algorithm for the diagnosis of melanocytic lesions on the hairy skin. In our study, the ABCD rule of dermatoscopy and the CASH algorithm showed the highest sensitivity for the diagnosis of melanoma.

  9. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pore clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography and in some cases works better for planar geometries. However, increasing the prior assumptions about the object and the defects increases the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty that arises when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped onto experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around a slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  10. GLOBAL PEAK ALIGNMENT FOR COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY USING POINT MATCHING ALGORITHMS

    PubMed Central

    Deng, Beichuan; Kim, Seongho; Li, Hengguang; Heath, Elisabeth; Zhang, Xiang

    2016-01-01

    Comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has been used to analyze multiple samples in metabolomics studies. However, due to uncontrollable experimental conditions, such as differences in temperature or pressure, matrix effects on samples, and stationary phase degradation, there is always a shift of retention times in the two GC columns between samples. In order to correct the retention time shifts in GC×GC-MS, peak alignment is a crucial data analysis step that recognizes the peaks generated by the same metabolite in different samples. Two approaches have been developed for GC×GC-MS data alignment: profile alignment and peak matching alignment. However, these existing alignment methods are all based on local alignment, with the result that a peak may not be correctly aligned in a dense chromatographic region where many peaks are crowded together. False alignment will result in false discoveries in the downstream statistical analysis. We therefore develop a global-comparison-based peak alignment method using a point matching algorithm (PMA-PA) for both homogeneous and heterogeneous data. The developed algorithm PMA-PA first extracts feature points (peaks) in the chromatogram and then globally searches for matching peaks in the consecutive chromatogram by adopting the projection of rigid and non-rigid transformations. PMA-PA is further applied to two real experimental data sets, showing that it is a promising peak alignment algorithm for both homogeneous and heterogeneous data in terms of F1 score, although it uses only peak location information. PMID:27650662
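
    As a loose illustration of global, location-only peak matching, the sketch below estimates a single rigid shift between two peak lists and assigns nearest neighbours within a tolerance. It is not the authors' PMA-PA implementation (which also handles non-rigid transformations); all names and values are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def match_peaks(ref_peaks, tgt_peaks, max_dist=5.0):
        """Globally match two (n, 2) peak lists of retention-time coordinates."""
        ref = np.asarray(ref_peaks, float)
        tgt = np.asarray(tgt_peaks, float)
        # Crude global shift estimate: difference of the coordinate medians.
        shift = np.median(ref, axis=0) - np.median(tgt, axis=0)
        tree = cKDTree(tgt + shift)
        dist, idx = tree.query(ref, k=1)
        pairs = [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx)) if d <= max_dist]
        return shift, pairs

    # Toy example: the second chromatogram is the first shifted by (2, -1).
    rng = np.random.default_rng(0)
    peaks_a = rng.uniform(0, 100, size=(30, 2))
    peaks_b = peaks_a + np.array([2.0, -1.0]) + rng.normal(0, 0.1, size=(30, 2))
    shift, pairs = match_peaks(peaks_a, peaks_b)
    print("estimated shift:", np.round(shift, 2), "matched:", len(pairs))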

  11. The MATPHOT Algorithm for Digital Point Spread Function CCD Stellar Photometry

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth J.

    Most CCD stellar photometric reduction packages use analytical functions to represent the stellar Point Spread Function (PSF). These PSF-fitting programs generally compute all the major partial derivatives of the observational model by differentiating the volume integral of the PSF over a pixel. Real-world PSFs are frequently very complicated and may not be exactly representable with any combination of analytical functions. Deviations of the real-world PSF from the analytical PSF are then generally stored in a residual matrix. Diffraction rings and spikes can provide a great deal of information about the position of a star, yet information about such common observational effects generally resides only in the residual matrix. Such useful information is generally not used in the PSF-fitting process except for the final step, in which the chi-square goodness-of-fit between the CCD observation and the model is determined after the intensity-scaled residual matrix has been added to the mathematical model of the observation. I describe some of the key features of my MATPHOT algorithm for digital PSF-fitting CCD stellar photometry, in which the PSF is represented by a matrix of numbers. The mathematics of determining the partial derivatives of the observational model with respect to the x and y direction vectors is exactly the same for analytical or digital PSFs. The implementation methodology, however, is quite different. In the case of digital PSFs, the partial derivatives can be determined using numerical differentiation techniques on the digital PSFs. I compare the advantages and disadvantages with respect to traditional PSF-fitting algorithms based on analytical representations of the PSF. The MATPHOT algorithm is an ideal candidate for parallel processing. Instead of operating in the traditional single-processor mode of analyzing one pixel at a time, the MATPHOT algorithm can be written to operate on an image-plane basis.
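
    The idea of differentiating a digital PSF numerically can be sketched in a few lines of Python. A sampled Gaussian stands in for the instrumental PSF, np.gradient supplies the x and y partial derivatives, and a single linearized least-squares step estimates a stellar offset; none of this is the MATPHOT code itself, and the shapes and values are illustrative assumptions.

    import numpy as np

    # Stand-in "digital" PSF: a 2-D Gaussian sampled on a 21 x 21 grid.
    y, x = np.mgrid[-10:11, -10:11].astype(float)
    psf = np.exp(-(x**2 + y**2) / (2.0 * 2.5**2))
    psf /= psf.sum()

    # Partial derivatives of the model w.r.t. the star position come from
    # numerical differentiation of the digital PSF itself.
    dpsf_dy, dpsf_dx = np.gradient(psf)   # np.gradient returns (d/drow, d/dcol)

    # One least-squares position update for an observation of the same star
    # shifted by one pixel in x (simulated here by rolling the array).
    rng = np.random.default_rng(1)
    obs = np.roll(psf, shift=1, axis=1) + rng.normal(0, 1e-4, psf.shape)
    resid = (obs - psf).ravel()
    J = np.column_stack([-dpsf_dx.ravel(), -dpsf_dy.ravel()])  # d(model)/d(x0, y0)
    dx0, dy0 = np.linalg.lstsq(J, resid, rcond=None)[0]
    print("estimated offset (pixels):", round(dx0, 2), round(dy0, 2))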

  12. a Novel Image Registration Algorithm for SAR and Optical Images Based on Virtual Points

    NASA Astrophysics Data System (ADS)

    Ai, C.; Feng, T.; Wang, J.; Zhang, S.

    2013-07-01

    Optical images are rich in spectral information, while SAR instruments can operate day and night and image through fog and clouds. Combining these two complementary image types offers great advantages for image interpretation. Image registration is an inevitable and critical problem for applications of multi-source remote sensing images, such as image fusion, pattern recognition and change detection. However, the different characteristics of SAR and optical images, which are due to the difference in imaging mechanism and the speckle noise in SAR images, bring great challenges to multi-source image registration. Therefore, a novel image registration algorithm based on virtual points, derived from corresponding region features, is proposed in this paper. Firstly, image classification methods are adopted to extract closed regions from the SAR and optical images respectively. Secondly, corresponding region features are matched by constructing a cost function with rotation-invariant region descriptors such as area, perimeter, and the lengths of the major and minor axes. Thirdly, virtual points derived from the corresponding region features, such as the centroids, endpoints and cross points of the major and minor axes, are used to calculate initial registration parameters. Finally, the parameters are corrected by an iterative calculation, which is terminated when the overlap of the corresponding region features reaches its maximum. In the experiment, WorldView-2 and Radarsat-2 images with 0.5 m and 4.7 m spatial resolution respectively, acquired in August 2010 over Suzhou, are used to test the registration method. It is shown that the multi-source image registration algorithm presented above is effective, and that the registration accuracy reaches pixel level.
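
    Once corresponding virtual points are available, initial registration parameters can be obtained from a least-squares similarity transform, for example with the Umeyama/Procrustes solution sketched below. This is only a generic illustration of that single step under the assumption of a rotation-scale-translation model; the paper's iterative overlap-based refinement is not reproduced, and all point values are synthetic.

    import numpy as np

    def similarity_from_points(src, dst):
        """Least-squares similarity (scale, rotation R, translation t) mapping
        src -> dst, both (n, 2) arrays of corresponding virtual points."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(0), dst.mean(0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))
        D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
        scale = np.trace(np.diag(S) @ D) / A.var(0).sum()
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    # Toy check: virtual points rotated by 10 degrees, scaled by 1.5, shifted.
    theta = np.deg2rad(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    src = np.array([[10, 20], [40, 25], [30, 60], [55, 50]], float)
    dst = 1.5 * src @ R_true.T + np.array([100.0, -30.0])
    scale, R, t = similarity_from_points(src, dst)
    print(round(scale, 3), np.round(t, 1))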

  13. Using SDO and GONG as Calibration References for a New Telescope Pointing Algorithm

    NASA Astrophysics Data System (ADS)

    Staiger, J.

    2013-12-01

    Long duration observations are a basic requirement for most types of helioseismic measurements. Pointing stability and the quality of guiding are thus important issues with respect to the spatio-temporal analysis of any velocity dataset. Existing pointing tools and correlation-tracking devices will help to remove most of the spatial deviations building up during an observation with time. Yet most ground- and space-based high-resolution solar telescopes may be subject to slow image-plane drift that cannot be compensated for by guiding and which may accumulate to displacements of 10″ or more during a 10-hour recording. We have developed a new pointing model for solar telescopes that may overcome these inherent guiding limitations. We have tested the model at the Vacuum Tower Telescope (VTT), Tenerife. We are using SDO and GONG full-disk imaging as a calibration reference. We describe the algorithms developed and used during the tests. We present our first results. We describe possible future applications to be implemented at the VTT. So far, improvements over classical limb-guider systems by a factor of 10 or more seem possible.

  14. A Full-Newton Step Infeasible Interior-Point Algorithm for Linear Programming Based on a Kernel Function

    SciTech Connect

    Liu, Zhongyi; Sun, Wenyu; Tian, Fangbao

    2009-10-15

    This paper proposes an infeasible interior-point algorithm with full-Newton step for linear programming, which is an extension of the work of Roos (SIAM J. Optim. 16(4):1110-1136, 2006). The main iteration of the algorithm consists of a feasibility step and several centrality steps. We introduce a kernel function in the algorithm to induce the feasibility step. For the parameter p in [0,1], polynomial complexity can be proved and the result coincides with the best result for infeasible interior-point methods, that is, O(n log(n/ε)).

  15. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  16. Sunspots and Coronal Bright Points Tracking using a Hybrid Algorithm of PSO and Active Contour Model

    NASA Astrophysics Data System (ADS)

    Dorotovic, I.; Shahamatnia, E.; Lorenc, M.; Rybansky, M.; Ribeiro, R. A.; Fonseca, J. M.

    2014-02-01

    In the last decades there has been a steady increase in high-resolution data, from ground-based and space-borne solar instruments, and also in solar data volume. These huge image archives require efficient automatic image processing software tools capable of detecting and tracking various features in the solar atmosphere. Results of applying such tools are essential for studies of solar activity evolution, climate change understanding and space weather prediction. The follow-up of interplanetary and near-Earth phenomena requires, among other things, automatic tracking algorithms that can determine where a feature is located on successive images taken over the period of observation. Full-disc solar images, obtained both with ground-based solar telescopes and with instruments onboard satellites, provide essential observational material for solar physicists and space weather researchers for better understanding the Sun, studying the evolution of various features in the solar atmosphere, and also investigating solar differential rotation by tracking such features over time. Here we demonstrate and discuss the suitability of applying a hybrid Particle Swarm Optimization (PSO) algorithm and Active Contour model for tracking and determining the differential rotation of sunspots and coronal bright points (CBPs) on a set of selected solar images. The results obtained confirm that the proposed approach constitutes a promising tool for investigating the evolution of solar activity and also for automating the tracking of features in massive solar image archives.

  17. Genetic algorithm optimization of point charges in force field development: challenges and insights.

    PubMed

    Ivanov, Maxim V; Talipov, Marat R; Timerghazin, Qadir K

    2015-02-26

    Evolutionary methods, such as genetic algorithms (GAs), provide powerful tools for optimization of the force field parameters, especially in the case of simultaneous fitting of the force field terms against extensive reference data. However, GA fitting of the nonbonded interaction parameters that includes point charges has not been explored in the literature, likely due to numerous difficulties with even a simpler problem of the least-squares fitting of the atomic point charges against a reference molecular electrostatic potential (MEP), which often demonstrates an unusually high variation of the fitted charges on buried atoms. Here, we examine the performance of the GA approach for the least-squares MEP point charge fitting, and show that the GA optimizations suffer from a magnified version of the classical buried atom effect, producing highly scattered yet correlated solutions. This effect can be understood in terms of the linearly independent, natural coordinates of the MEP fitting problem defined by the eigenvectors of the least-squares sum Hessian matrix, which are also equivalent to the eigenvectors of the covariance matrix evaluated for the scattered GA solutions. GAs quickly converge with respect to the high-curvature coordinates defined by the eigenvectors related to the leading terms of the multipole expansion, but have difficulty converging with respect to the low-curvature coordinates that mostly depend on the buried atom charges. The performance of the evolutionary techniques dramatically improves when the point charge optimization is performed using the Hessian or covariance matrix eigenvectors, an approach with a significant potential for the evolutionary optimization of the fixed-charge biomolecular force fields.
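
    The buried-atom effect discussed above can be reproduced with a small synthetic example: fit point charges to a reference potential by least squares and inspect the eigenvectors of the normal matrix (the least-squares Hessian). The linear three-atom geometry, grid, and charges below are invented for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic linear triatomic "molecule": the middle atom is buried.
    atoms = np.array([[-1.2, 0.0, 0.0], [0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
    true_q = np.array([0.4, -0.8, 0.4])

    # Reference MEP sampled on 500 points on a sphere of radius 4 (a.u.).
    v = rng.normal(size=(500, 3))
    grid = 4.0 * v / np.linalg.norm(v, axis=1, keepdims=True)

    # Design matrix: V_k = sum_i q_i / |r_k - R_i|.
    A = 1.0 / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)
    mep = A @ true_q

    # Unconstrained least-squares charge fit and the normal-matrix spectrum.
    q_fit, *_ = np.linalg.lstsq(A, mep, rcond=None)
    H = A.T @ A                      # least-squares Hessian (normal matrix)
    evals, evecs = np.linalg.eigh(H)
    print("fitted charges     :", np.round(q_fit, 3))
    print("Hessian eigenvalues:", np.round(evals, 2))
    print("softest direction  :", np.round(evecs[:, 0], 3))  # tends to load on the buried atom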

  18. A fast algorithm for finding point sources in the Fermi data stream: FermiFAST

    NASA Astrophysics Data System (ADS)

    Asvathaman, Asha; Omand, Conor; Barton, Alistair; Heyl, Jeremy S.

    2017-04-01

    We present a new and efficient algorithm for finding point sources in the photon event data stream from the Fermi Gamma-Ray Space Telescope, FermiFAST. The key advantage of FermiFAST is that it constructs a catalogue of potential sources very fast by arranging the photon data in a hierarchical data structure. Using this structure, FermiFAST rapidly finds the photons that could have originated from a potential gamma-ray source. It calculates a likelihood ratio for the contribution of the potential source using the angular distribution of the photons within the region of interest. It can find within a few minutes the most significant half of the Fermi Third Point Source catalogue (3FGL) with nearly 80 per cent purity from the 4 yr of data used to construct the catalogue. If a higher purity sample is desirable, one can achieve a sample that includes the most significant third of the Fermi 3FGL with only 5 per cent of the sources unassociated with Fermi sources. Outside the Galactic plane, all but eight of the 580 FermiFAST detections are associated with 3FGL sources. And of these eight, six yield significant detections of greater than 5σ when a further binned likelihood analysis is performed. This software allows for rapid exploration of the Fermi data, simulation of the source detection to calculate the selection function of various sources and the errors in the obtained parameters of the sources detected.
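
    The hierarchical-photon-lookup idea can be mimicked with a KD-tree on unit vectors, as in the hedged Python sketch below: photons within a cone around a candidate position are retrieved quickly and compared with the isotropic expectation. This simple on/off excess is only a stand-in for FermiFAST's angular-distribution likelihood ratio, and the synthetic photon list and all numbers are invented.

    import numpy as np
    from scipy.spatial import cKDTree

    def radec_to_unit(ra_deg, dec_deg):
        ra, dec = np.deg2rad(ra_deg), np.deg2rad(dec_deg)
        return np.column_stack([np.cos(dec) * np.cos(ra),
                                np.cos(dec) * np.sin(ra),
                                np.sin(dec)])

    # Synthetic photon list: isotropic background plus one clump of source photons.
    rng = np.random.default_rng(3)
    n_bkg = 20000
    bkg_ra = rng.uniform(0, 360, n_bkg)
    bkg_dec = np.rad2deg(np.arcsin(rng.uniform(-1, 1, n_bkg)))
    src_ra = 120.0 + rng.normal(0, 0.3, 200)
    src_dec = 30.0 + rng.normal(0, 0.3, 200)
    photons = radec_to_unit(np.concatenate([bkg_ra, src_ra]),
                            np.concatenate([bkg_dec, src_dec]))

    tree = cKDTree(photons)   # hierarchical structure enabling fast cone searches

    def counts_in_cone(ra, dec, radius_deg):
        centre = radec_to_unit(np.array([ra]), np.array([dec]))[0]
        chord = 2.0 * np.sin(np.deg2rad(radius_deg) / 2.0)
        return len(tree.query_ball_point(centre, chord))

    # On/off style excess at a candidate position versus the isotropic expectation.
    r = 1.0
    n_on = counts_in_cone(120.0, 30.0, r)
    n_exp = n_bkg * (1.0 - np.cos(np.deg2rad(r))) / 2.0
    print(f"counts in cone: {n_on}, isotropic expectation: {n_exp:.1f}, "
          f"excess: {(n_on - n_exp) / np.sqrt(n_exp):.0f} sigma")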

  19. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    and negative swinging angle and the computation of the time window are analyzed and discussed. Many strategies to improve the efficiency of this model are also put forward. In order to solve the model, we introduce the concept of an activity sequence map. By using the activity sequence map, the choice of activity and the start time of the activity can be separated. We also introduce three neighborhood operators to search the solution space. The front movement remaining time and the back movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Lastly, an algorithm to solve the problem and model is put forward based on a genetic algorithm. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision decrease operator, a selection operator and a collision elimination operator are designed in the paper. Finally, the scheduling result and the simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25. The results show that the model and the algorithm are more effective than those without swinging mode.

  20. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  1. Using second-order calibration method based on trilinear decomposition algorithms coupled with high performance liquid chromatography with diode array detector for determination of quinolones in honey samples.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Shao, Sheng-Zhi; Kang, Chao; Zhao, Juan; Wang, Yu; Zhu, Shao-Hua; Yu, Ru-Qin

    2011-09-15

    A novel strategy that combines the second-order calibration method based on trilinear decomposition algorithms with high performance liquid chromatography with diode array detector (HPLC-DAD) was developed to mathematically separate overlapped peaks and to quantify quinolones in honey samples. The HPLC-DAD data were obtained within a short time in isocratic mode. The developed method could be applied to determine 12 quinolones at the same time, even in the presence of uncalibrated interfering components in a complex background. To assess the performance of the proposed strategy for the determination of quinolones in honey samples, the figures of merit were employed. The limits of quantitation for all analytes were within the range 1.2-56.7 μg kg(-1). The work presented in this paper illustrates the suitability and potential of combining a second-order calibration method with a second-order analytical instrument for multi-residue analysis in honey samples.

  2. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  3. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of the vector sum of the RF field in 32 superconducting cavities. Dedicated, high performance photonics, electronics and software were built. To provide high system availability, an appropriate test environment as well as diagnostics was designed. A real-time simulation subsystem was designed, based on dedicated electronics using FPGA technology and robust simulation models implemented in VHDL. The paper presents an architecture of the system framework which allows for easy and flexible conversion of MATLAB language structures directly into an FPGA-implementable grid of simple, parameterized DSP processors. The decomposition of the MATLAB grammar is described, as well as the optimization process and FPGA implementation issues.

  4. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separations over the algorithm previously used.

  5. A real-time plane-wave decomposition algorithm for characterizing perforated liners damping at multiple mode frequencies.

    PubMed

    Zhao, Dan

    2011-03-01

    Perforated liners with a narrow frequency range are widely used as acoustic dampers to stabilize combustion systems. When the frequency of unstable modes present in the combustion system is within the effective frequency range, the liners can efficiently dissipate acoustic waves. The fraction of the incident waves being absorbed (known as the power absorption coefficient) is generally used to characterize the liners' damping. To estimate it, plane waves on either side of the liners need to be decomposed and characterized. For this, a real-time algorithm is developed. Emphasis is placed on its ability to decompose plane waves online at multiple mode frequencies. The performance of the algorithm is evaluated first in a numerical model with two unstable modes. It is then experimentally implemented in an acoustically driven pipe system with a lined section attached. The acoustic damping of the perforated liners is continuously characterized in real time. Comparison is then made between the results from the algorithm and those from short-time fast Fourier transform (FFT)-based techniques, which are typically used in industry. It was found that the real-time algorithm allows faster tracking of the liners' damping, even when the forcing frequency is suddenly changed.
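
    The underlying decomposition can be illustrated in the frequency domain with the classic two-microphone method: the complex pressures at two axial positions determine the incident and reflected plane-wave amplitudes at a given frequency, from which a power absorption coefficient follows. This is a minimal, single-frequency sketch assuming an e^{+jwt} convention and lossless propagation, not the paper's real-time multi-frequency algorithm; all names and values are illustrative.

    import numpy as np

    def decompose_plane_waves(p1, p2, x1, x2, freq, c=343.0):
        """Return complex amplitudes (A, B) of the downstream- and upstream-
        travelling waves from pressures p1, p2 measured at positions x1, x2."""
        k = 2.0 * np.pi * freq / c
        M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                      [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
        A, B = np.linalg.solve(M, np.array([p1, p2]))
        return A, B

    # Toy check: synthesize a field with known amplitudes and recover them.
    A_true, B_true = 1.0 + 0.0j, 0.3 * np.exp(1j * np.pi / 4)
    f, c = 250.0, 343.0
    k = 2.0 * np.pi * f / c
    x1, x2 = 0.00, 0.05
    p1 = A_true * np.exp(-1j * k * x1) + B_true * np.exp(1j * k * x1)
    p2 = A_true * np.exp(-1j * k * x2) + B_true * np.exp(1j * k * x2)
    A, B = decompose_plane_waves(p1, p2, x1, x2, f)
    # Power absorption coefficient of the termination: 1 - |B/A|^2.
    print(np.round(A, 3), np.round(B, 3), "alpha =", round(1 - abs(B / A) ** 2, 3))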

  6. Current review and a simplified "five-point management algorithm" for keratoconus.

    PubMed

    Shetty, Rohit; Kaweri, Luci; Pahuja, Natasha; Nagaraja, Harsha; Wadia, Kareeshma; Jayadev, Chaitra; Nuijts, Rudy; Arora, Vishal

    2015-01-01

    Keratoconus is a slowly progressive, noninflammatory ectatic corneal disease characterized by changes in corneal collagen structure and organization. Though the etiology remains unknown, novel techniques are continuously emerging for the diagnosis and management of the disease. Demographic parameters are known to affect the rate of progression of the disease. Common methods of vision correction for keratoconus range from spectacles and rigid gas-permeable contact lenses to other specialized lenses such as piggyback, Rose-K or Boston scleral lenses. Corneal collagen cross-linking is effective in stabilizing the progression of the disease. Intra-corneal ring segments can improve vision by flattening the cornea in patients with mild to moderate keratoconus. Topography-guided custom ablation treatment improves the quality of vision by correcting the refractive error and improving the contact lens fit. In advanced keratoconus with corneal scarring, lamellar or full thickness penetrating keratoplasty will be the treatment of choice. With such a wide spectrum of alternatives available, it is necessary to choose the best possible treatment option for each patient. Based on a brief review of the literature and our own studies we have designed a five-point management algorithm for the treatment of keratoconus.

  7. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    An analysis is presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted by the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs follow the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data show the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives. During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.

  8. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.

    PubMed

    Karthikeyan, M; Raja, T Sree Ranga

    2015-01-01

    Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, such as the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.
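
    A generic harmony search loop with dynamically scheduled HMCR/PAR and a polynomial mutation step is sketched below on a toy quadratic cost function. The parameter schedules, the cost function, and all constants are illustrative assumptions; this is not the authors' DHSPM code and it does not model valve-point ELD constraints.

    import numpy as np

    rng = np.random.default_rng(7)

    def cost(x):                 # toy stand-in for a dispatch cost function
        return np.sum((x - 3.0) ** 2)

    dim, hms, iters = 5, 20, 2000
    lo, hi = -10.0, 10.0
    memory = rng.uniform(lo, hi, size=(hms, dim))
    fitness = np.array([cost(h) for h in memory])

    for it in range(iters):
        # Dynamic parameters: HMCR grows and PAR shrinks over the run (illustrative).
        hmcr = 0.70 + 0.25 * it / iters
        par = 0.50 - 0.40 * it / iters
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:                 # polynomial mutation as pitch adjustment
                    u, eta = rng.random(), 20.0
                    if u < 0.5:
                        delta = (2 * u) ** (1 / (eta + 1)) - 1
                    else:
                        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
                    new[d] = np.clip(new[d] + delta * (hi - lo), lo, hi)
            else:                                      # random consideration
                new[d] = rng.uniform(lo, hi)
        worst = np.argmax(fitness)
        if cost(new) < fitness[worst]:                 # replace the worst harmony
            memory[worst], fitness[worst] = new, cost(new)

    print("best cost found:", round(float(fitness.min()), 4))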

  9. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, such as the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710

  10. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles and their corresponding power outputs. It approaches the maximum by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
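
    The quadratic search described above can be sketched as follows: fit a parabola through the three most recent (duty cycle, power) samples and step to its vertex. The synthetic power curve, limits, and starting duty cycles are invented for illustration and do not represent the authors' converter or controller.

    import numpy as np

    def pv_power(duty):
        """Synthetic, unimodal power vs. duty-cycle curve (stand-in for the
        measured output of the real panel and buck converter)."""
        return 100.0 * np.exp(-((duty - 0.62) ** 2) / 0.02)

    def quadratic_mppt_step(duties, powers):
        """Fit P(d) = a d^2 + b d + c through three samples, return the vertex."""
        a, b, c = np.polyfit(duties, powers, 2)
        if a >= 0:                   # degenerate fit: fall back to best sample
            return duties[int(np.argmax(powers))]
        return -b / (2.0 * a)

    # Three previously applied duty cycles and their measured powers.
    d = np.array([0.55, 0.65, 0.75])
    for _ in range(6):
        p = pv_power(d)
        d_new = np.clip(quadratic_mppt_step(d, p), 0.05, 0.95)
        d = np.array([d[1], d[2], d_new])   # keep the three most recent samples
    print("final duty cycle:", round(float(d[-1]), 3))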

  11. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Clinical Evaluation.

    PubMed

    Hyodo, Tomoko; Yada, Norihisa; Hori, Masatoshi; Maenishi, Osamu; Lamb, Peter; Sasaki, Kosuke; Onoda, Minori; Kudo, Masatoshi; Mochizuki, Teruhito; Murakami, Takamichi

    2017-04-01

    Purpose: To assess the clinical accuracy and reproducibility of liver fat quantification with the multimaterial decomposition (MMD) algorithm, comparing the performance of MMD with that of magnetic resonance (MR) spectroscopy by using liver biopsy as the reference standard. Materials and Methods: This prospective study was approved by the institutional ethics committee, and patients provided written informed consent. Thirty-three patients suspected of having hepatic steatosis underwent non-contrast material-enhanced and triple-phase dynamic contrast-enhanced dual-energy computed tomography (CT) (80 and 140 kVp) and single-voxel proton MR spectroscopy within 30 days before liver biopsy. Percentage fat volume fraction (FVF) images were generated by using the MMD algorithm on dual-energy CT data to measure hepatic fat content. FVFs determined by using dual-energy CT and percentage fat fractions (FFs) determined by using MR spectroscopy were compared with histologic steatosis grade (0-3, as defined by the nonalcoholic fatty liver disease activity score system) by using Jonckheere-Terpstra trend tests and were compared with each other by using Bland-Altman analysis. Real non-contrast-enhanced FVFs were compared with triple-phase contrast-enhanced FVFs to determine the reproducibility of MMD by using Bland-Altman analyses. Results: Both dual-energy CT FVF and MR spectroscopy FF increased with increasing histologic steatosis grade (trend test, P < .001 for each). The Bland-Altman plot of dual-energy CT FVF and MR spectroscopy FF revealed a proportional bias, as indicated by the significant positive slope of the line regressing the difference on the average (P < .001). The 95% limits of agreement for the differences between real non-contrast-enhanced and contrast-enhanced FVFs were not greater than about 2%. Conclusion: The MMD algorithm quantifying hepatic fat in dual-energy CT images is accurate and reproducible across imaging phases.

  12. A generalized version of a two point boundary value problem guidance algorithm

    NASA Astrophysics Data System (ADS)

    Kelly, W. D.

    An iterative guidance algorithm known as the minimum Hamiltonian method is used for performance analyses of launch vehicles in personal-computer trajectory simulations. Convergence in this application is rapid for a minimum-time-of-flight upper-stage solution. Examination of the coded algorithm resulted in a reformulation in which problem-specific portions of the code were separated from portions that are shared by problems in general. More generalized problem inputs were included so that the algorithm can operate on varied numbers of state variables, terminal constraints, and controls, preparing the basic ascent-guidance algorithm for other applications. In most cases, including entry, the compact form of the algorithm along with its capability to converge rapidly makes it a contender for autonomous guidance aboard a powered flight vehicle.

  13. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis

    NASA Astrophysics Data System (ADS)

    Portes, Leonardo L.; Aguirre, Luis A.

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011), 10.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.
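
    For comparison, Kaiser's unstructured varimax criterion can itself be maximized with a simultaneous, SVD-based update rather than pairwise rotations, as in the short sketch below. This is a standard SVD-based implementation of the unstructured rotation (UVR) on a random loadings matrix, shown only to illustrate the SVD mechanics; it does not impose the block structure of the SVR described in the paper.

    import numpy as np

    def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-8):
        """Unstructured varimax rotation of a loadings matrix Phi (p x k)
        via the classical SVD-based update; returns rotated loadings and R."""
        p, k = Phi.shape
        R = np.eye(k)
        crit_old = 0.0
        for _ in range(max_iter):
            L = Phi @ R
            # Gradient-like matrix of the orthomax/varimax criterion.
            G = Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
            U, S, Vt = np.linalg.svd(G)
            R = U @ Vt
            crit = S.sum()
            if crit - crit_old < tol:
                break
            crit_old = crit
        return Phi @ R, R

    # Toy check on a random 20 x 4 loadings matrix.
    rng = np.random.default_rng(5)
    Phi = rng.normal(size=(20, 4))
    L_rot, R = varimax(Phi)
    print("rotation is orthogonal:", np.allclose(R.T @ R, np.eye(4)))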

  14. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis.

    PubMed

    Portes, Leonardo L; Aguirre, Luis A

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011)PLEEE81539-375510.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.

  15. MODIS 250m burned area mapping based on an algorithm using change point detection and Markov random fields.

    NASA Astrophysics Data System (ADS)

    Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca

    2013-04-01

    Area burned in the tropical savannas of Brazil was mapped using MODIS-AQUA daily 250 m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm2 and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm treats each pixel as a time series and detects changes in the statistical properties of the NIR reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments and post-change NIR reflectance values are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside of a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned/unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1 km active fires and the 500 m burned area products, taking into account the differences in spatial resolution between the products.
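
    The per-pixel change-point step can be illustrated with the PELT implementation in the open-source ruptures package (an assumption: the paper does not state which implementation was used, and Killick's original code is the R changepoint package). The synthetic NIR series, penalty, and thresholds below are purely illustrative.

    import numpy as np
    import ruptures as rpt   # assumes the 'ruptures' package is installed

    # Synthetic NIR reflectance time series for one pixel: a burn causes an
    # abrupt drop part-way through the season, on top of observation noise.
    rng = np.random.default_rng(11)
    n, burn_idx = 200, 120
    nir = np.r_[rng.normal(0.32, 0.015, burn_idx),       # pre-fire level
                rng.normal(0.18, 0.015, n - burn_idx)]   # post-fire level

    # PELT with an L2 (change-in-mean) cost; the penalty controls sensitivity.
    algo = rpt.Pelt(model="l2", min_size=5).fit(nir)
    breakpoints = algo.predict(pen=0.5)   # segment end indices; last one is n
    print("detected breakpoints:", breakpoints)

    # A burned-area candidate is a change point with a *decrease* in mean NIR.
    for bp in breakpoints[:-1]:
        drop = nir[:bp].mean() - nir[bp:].mean()
        if drop > 0.05:
            print(f"candidate burn date index {bp}, NIR drop {drop:.3f}")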

  16. Point matching under non-uniform distortions and protein side chain packing based on an efficient maximum clique algorithm.

    PubMed

    Dukka, Bahadur K C; Akutsu, Tatsuya; Tomita, Etsuji; Seki, Tomokazu; Fujiyama, Asao

    2002-01-01

    We developed maximum clique-based algorithms for spot matching in two-dimensional gel electrophoresis images, protein structure alignment and protein side-chain packing, all of which are known to be NP-hard. Algorithms based on direct reductions to the maximum clique problem can find optimal solutions for instances of size (the number of points or residues) up to 50-150 using a standard PC. We also developed pre-processing techniques to reduce the sizes of the graphs. Combined with some heuristics, many realistic instances can be solved approximately.
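
    The reduction from point matching to maximum clique can be sketched as follows: each candidate correspondence becomes a vertex of an association graph, edges connect distance-consistent correspondences, and the largest clique gives the largest mutually consistent matching. The sketch enumerates maximal cliques with networkx, which is only practical for small instances (consistent with the 50-150 point sizes quoted above); all data are synthetic.

    import itertools
    import numpy as np
    import networkx as nx

    def clique_match(pts_a, pts_b, tol=0.5):
        """Match two 2-D point sets under an unknown rigid motion by finding a
        maximum clique in the correspondence (association) graph."""
        G = nx.Graph()
        pairs = list(itertools.product(range(len(pts_a)), range(len(pts_b))))
        G.add_nodes_from(pairs)
        # Two candidate correspondences are compatible if they preserve distances.
        for (i, j), (k, l) in itertools.combinations(pairs, 2):
            if i == k or j == l:
                continue
            d_a = np.linalg.norm(pts_a[i] - pts_a[k])
            d_b = np.linalg.norm(pts_b[j] - pts_b[l])
            if abs(d_a - d_b) < tol:
                G.add_edge((i, j), (k, l))
        # Largest clique = largest mutually consistent set of correspondences.
        return max(nx.find_cliques(G), key=len)

    rng = np.random.default_rng(2)
    a = rng.uniform(0, 50, size=(8, 2))
    theta = np.deg2rad(25.0)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    b = a @ R.T + np.array([5.0, -3.0])
    b = b[rng.permutation(len(b))]        # correspondence order is unknown
    print(sorted(clique_match(a, b)))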

  17. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantages of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  18. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  19. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    PubMed

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  20. Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models.

    PubMed

    Ba, Demba; Temereanca, Simona; Brown, Emery N

    2014-01-01

    Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.

  1. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators, such as GPUs and the Intel Xeon Phi, has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and the conclusions from our tests.
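
    As a minimal illustration of why low-discrepancy sequences are attractive here, the sketch below prices a European call under Black-Scholes with both pseudorandom and scrambled Sobol points, using scipy.stats.qmc (an assumption: SciPy 1.7 or newer). It says nothing about energy or space efficiency, which require hardware measurements outside the scope of a code sketch.

    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    n = 2 ** 14   # a power of two, as recommended for Sobol points

    def call_price_from_uniforms(u):
        """European call estimate from a batch of uniforms in (0, 1)."""
        z = norm.ppf(np.clip(u, 1e-12, 1.0 - 1e-12))
        ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

    rng = np.random.default_rng(0)
    mc = call_price_from_uniforms(rng.uniform(size=n))
    sobol_u = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()
    sobol = call_price_from_uniforms(sobol_u)

    # Closed-form Black-Scholes reference value.
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
    print(f"pseudorandom {mc:.4f}  Sobol {sobol:.4f}  exact {bs:.4f}")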

  2. Comparison of point target detection algorithms for space-based scanning infrared sensors

    NASA Astrophysics Data System (ADS)

    Namoos, Omar M.; Schulenburg, Nielson W.

    1995-09-01

    The tracking of resident space objects (RSOs) by space-based sensors can lead to engagements that result in stressing backgrounds. These backgrounds, including hard earth, earth limb, and zodiacal, pose various difficulties for signal processing algorithms designed to detect and track the target with a minimum of false alarms. Simulated RSO engagements were generated using the Strategic Scene Generator Model and a sensor model to create focal plane scenes. Using these data, the performance of several detection algorithms has been quantified for space, earth limb and cluttered hard earth backgrounds. These algorithms consist of an adaptive spatial filter, a transversal (matched) filter, and a median variance (nonlinear) filter. Signal-to-clutter statistics of the filtered scenes are compared to those of the unfiltered scene. False alarm and detection results are included. Based on these findings, a processing software architecture design is suggested.

  3. A uniform energy consumption algorithm for wireless sensor and actuator networks based on dynamic polling point selection.

    PubMed

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2013-12-19

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue for balancing the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm based on dynamic polling point selection with delay constraints is proposed to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted at these polling points to balance the sensor node energy consumption and extend the network lifetime. The effectiveness of the algorithm is validated by simulation.

  4. From Tls Point Clouds to 3d Models of Trees: a Comparison of Existing Algorithms for 3d Tree Reconstruction

    NASA Astrophysics Data System (ADS)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limitations of each reconstruction algorithm are highlighted. Nevertheless, very satisfactory results can be reached for 3D reconstruction of tree topology as well as of tree volume.

  5. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  6. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  7. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  8. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    SciTech Connect

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill; Chand, Kyle

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
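
    As an illustration of error-controlled snapshot selection only (not the cited implementation, which uses a single-pass incremental SVD to avoid storing snapshots), the following Python sketch accepts a new snapshot only when its projection error onto the current POD basis exceeds a tolerance; the synthetic state trajectory, tolerance, and rank cap are assumptions.

      import numpy as np

      def adaptive_snapshot_pod(states, tol=1e-3, max_rank=20):
          """Keep a snapshot only when the current POD basis represents it poorly."""
          snapshots, basis = [], None
          for u in states:
              if basis is None:
                  snapshots.append(u)
              else:
                  residual = u - basis @ (basis.T @ u)          # projection error onto the basis
                  if np.linalg.norm(residual) > tol * np.linalg.norm(u):
                      snapshots.append(u)
              # Rebuild the basis from the retained snapshots (for clarity; an
              # incremental single-pass SVD would avoid re-factorizing everything).
              S = np.column_stack(snapshots)
              U, _, _ = np.linalg.svd(S, full_matrices=False)
              basis = U[:, :min(max_rank, U.shape[1])]
          return basis, snapshots

      # Hypothetical full-order trajectory: a traveling Gaussian pulse sampled in time.
      x = np.linspace(0.0, 1.0, 200)
      trajectory = (np.exp(-100.0 * (x - 0.1 - 0.7 * t) ** 2) for t in np.linspace(0.0, 1.0, 50))
      basis, snaps = adaptive_snapshot_pod(trajectory)
      print("basis rank:", basis.shape[1], "snapshots kept:", len(snaps))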

  9. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop them on a single dataset, whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  10. Parallelization of PANDA discrete ordinates code using spatial decomposition

    SciTech Connect

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete Ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal plane ordered sweep algorithm. The parallel efficiency of the method is improved by directions and octants pipelining. The implementation of the algorithm is straightforward using MPI blocking point to point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)

  11. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition

  12. Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points

    PubMed Central

    Zhang, Zimiao; Zhang, Shihai; Li, Qiu

    2016-01-01

    Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve the object pose. The analytical solutions generally take less computation time. However, the analytical solutions are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates. However, the non-linear optimization needs a good initial estimate of the true solution, otherwise it is more time consuming than an analytical solution. Moreover, the image processing error grows rapidly as the measurement range increases. This leads to pose estimation errors. All the reasons mentioned above cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. Firstly, the coordinates of the feature points are determined according to the linear constraints formed by the four points. The initial coordinates of the feature points acquired through the linear method are then optimized through an iterative method. Finally, the coordinate system of object motion is established and a method is introduced to solve the object pose. The growing image processing error causes pose estimation errors as the measurement range increases; through the coordinate system, these pose estimation errors can be decreased. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338

  13. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  14. The generalized triangular decomposition

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Hager, William W.; Li, Jian

    2008-06-01

    Given a complex matrix $\mathbf{H}$, we consider the decomposition $\mathbf{H} = \mathbf{Q}\mathbf{R}\mathbf{P}^*$, where $\mathbf{R}$ is upper triangular and $\mathbf{Q}$ and $\mathbf{P}$ have orthonormal columns. Special instances of this decomposition include the singular value decomposition (SVD) and the Schur decomposition, where $\mathbf{R}$ is an upper triangular matrix with the eigenvalues of $\mathbf{H}$ on the diagonal. We show that any diagonal for $\mathbf{R}$ can be achieved that satisfies Weyl's multiplicative majorization conditions: $\prod_{i=1}^{k} |r_i| \le \prod_{i=1}^{k} \sigma_i$ for $1 \le k < K$, and $\prod_{i=1}^{K} |r_i| = \prod_{i=1}^{K} \sigma_i$, where $K$ is the rank of $\mathbf{H}$, $\sigma_i$ is the $i$-th largest singular value of $\mathbf{H}$, and $r_i$ is the $i$-th largest (in magnitude) diagonal element of $\mathbf{R}$. Given a vector $\mathbf{r}$ which satisfies Weyl's conditions, we call the decomposition $\mathbf{H} = \mathbf{Q}\mathbf{R}\mathbf{P}^*$, where $\mathbf{R}$ is upper triangular with prescribed diagonal $\mathbf{r}$, the generalized triangular decomposition (GTD). A direct (nonrecursive) algorithm is developed for computing the GTD. This algorithm starts with the SVD and applies a series of permutations and Givens rotations to obtain the GTD. The numerical stability of the GTD update step is established. The GTD can be used to optimize the power utilization of a communication channel, while taking into account quality of service requirements for subchannels. Another application of the GTD is to inverse eigenvalue problems where the goal is to construct matrices with prescribed eigenvalues and singular values.
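
    As a small numerical illustration of the majorization conditions (not of the GTD algorithm itself, which starts from the SVD and applies permutations and Givens rotations), the sketch below checks that the sorted diagonal of R from an ordinary QR factorization, one special instance of H = QRP*, satisfies Weyl's multiplicative conditions with respect to the singular values; the test matrix is arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      H = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

      _, R = np.linalg.qr(H)                       # QR is the special case with P = I
      r = np.sort(np.abs(np.diag(R)))[::-1]        # |r_i| in decreasing order
      sigma = np.linalg.svd(H, compute_uv=False)   # singular values, decreasing

      # Weyl's conditions: partial products of |r_i| never exceed those of sigma_i,
      # and the full products agree (both equal |det H|).
      prod_r, prod_s = np.cumprod(r), np.cumprod(sigma)
      assert np.all(prod_r[:-1] <= prod_s[:-1] * (1.0 + 1e-9))
      assert np.isclose(prod_r[-1], prod_s[-1])
      print(prod_r, prod_s)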

  15. An Evaluation of Vegetation Filtering Algorithms for Improved Snow Depth Estimation from Point Cloud Observations in Mountain Environments

    NASA Astrophysics Data System (ADS)

    Vanderjagt, B. J.; Durand, M. T.; Lucieer, A.; Wallace, L.

    2014-12-01

    High-resolution snow depth measurements are possible through bare-earth (BE) differencing of point cloud datasets obtained using LiDAR and photogrammetry during snow-free and snow-covered conditions. The accuracy and resolution of these snow depth measurements are desirable in mountain environments, where ground measurements are dangerous and difficult to perform and other remote sensing techniques are often characterized by large errors and uncertainties due to variable topography, vegetation, and snow properties. BE ground filtering algorithms make different assumptions about ground characteristics to differentiate between ground and non-ground features. Because of this, ground surfaces may have unique characteristics that confound ground filters depending on the location and terrain conditions. These include low-lying shrubs (<1 m), areas with high topographic relief, and areas with high surface roughness. We evaluate several different algorithms, including lowest point, kriging, and more sophisticated splining techniques such as the Multiscale Curvature Classification (MCC), to resolve snow depths. Understanding how these factors affect BE surface models and thus snow depth measurements is a valuable contribution towards improving the processing protocols associated with these relatively new snow observation techniques. We test the different BE filtering algorithms using LiDAR and photogrammetric measurements taken from an Unmanned Aerial Vehicle (UAV) in Southwest Tasmania, Australia during the winter and spring of 2013. The study area is characterized by sloping, uneven terrain and different types of vegetation, including eucalyptus and conifer trees as well as dense shrubs varying in height from 0.3-1.5 meters. Initial snow depth measurements using the unfiltered point cloud measurements are characterized by large errors (~20-90 cm) due to the dense vegetation. Using filtering techniques instead of raw differencing improves the estimation of snow depth in

  16. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs

  17. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  18. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  19. An Automatic Algorithm for Minimizing Anomalies and Discrepancies in Point Clouds Acquired by Laser Scanning Technique

    NASA Astrophysics Data System (ADS)

    Bordin, Fabiane; Gonzaga, Luiz, Jr.; Galhardo Muller, Fabricio; Veronez, Mauricio Roberto; Scaioni, Marco

    2016-06-01

    Laser scanning technique from airborne and land platforms has been widely used for collecting 3D data in large volumes in the field of geosciences. Furthermore, the laser pulse intensity has been widely exploited to analyze and classify rocks and biomass, and for carbon storage estimation. In general, a laser beam is emitted, collides with targets, and only a percentage of the emitted beam returns, according to the intrinsic properties of each target. Also, due to interferences and partial collisions, the laser return intensity can be incorrect, introducing serious errors in classification and/or estimation processes. To address this problem and avoid misclassification and estimation errors, we have proposed a new algorithm to correct the return intensity for laser scanning sensors. Different case studies have been used to evaluate and validate the proposed approach.

  20. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
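
    A minimal sketch of a product-form multi-factor model is shown below to make the structure concrete; the normalized factor form (A_f - A)/(A_f - A_0) raised to a per-factor exponent, the factor names, and the numerical values are illustrative assumptions, not the calibrated MFIM used for the divot-weight evaluation.

      import numpy as np

      def multi_factor_model(current, initial, final, exponents):
          """Product of normalized factor ratios, each raised to its own exponent,
          so that every factor interacts with the others through the product form."""
          current, initial, final, exponents = map(np.asarray, (current, initial, final, exponents))
          ratios = (final - current) / (final - initial)
          return np.prod(ratios ** exponents)

      # Hypothetical factors (e.g., temperature, pressure, moisture), each described
      # only by its initial point, final point, and a monotonic-path exponent.
      initial   = [300.0, 1.0, 0.02]
      final     = [600.0, 5.0, 0.10]
      current   = [450.0, 2.5, 0.05]
      exponents = [0.5, 1.2, 0.8]
      print(multi_factor_model(current, initial, final, exponents))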

  1. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; ...

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
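
    The sketch below shows the basic Bayesian machinery in the spirit of the classifier described above: Gaussian class-conditional likelihoods over a few sensor-derived features yield posterior class probabilities, which is where the uncertainty information comes from. The feature choices, synthetic training data, and diagonal-covariance assumption are illustrative, not the ARM operational algorithm.

      import numpy as np

      def fit_gaussian_bayes(X, y, classes):
          """Per-class feature means, variances, and priors from labeled training data."""
          params = {}
          for c in classes:
              Xc = X[y == c]
              params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
          return params

      def posterior(params, x):
          """Posterior probability of each class for a single feature vector x."""
          logp = {}
          for c, (mu, var, prior) in params.items():
              loglike = -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)
              logp[c] = np.log(prior) + loglike
          m = max(logp.values())
          w = {c: np.exp(v - m) for c, v in logp.items()}
          z = sum(w.values())
          return {c: w[c] / z for c in w}

      # Hypothetical features: [radar reflectivity, Doppler velocity, spectrum width].
      rng = np.random.default_rng(1)
      X_liquid = rng.normal([-20.0, 0.1, 0.2], 0.5, size=(200, 3))
      X_ice    = rng.normal([ -5.0, 0.6, 0.4], 0.5, size=(200, 3))
      X = np.vstack([X_liquid, X_ice])
      y = np.array(["liquid"] * 200 + ["ice"] * 200)
      model = fit_gaussian_bayes(X, y, ["liquid", "ice"])
      print(posterior(model, np.array([-18.0, 0.2, 0.25])))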

  2. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Experimental Validation.

    PubMed

    Hyodo, Tomoko; Hori, Masatoshi; Lamb, Peter; Sasaki, Kosuke; Wakayama, Tetsuya; Chiba, Yasutaka; Mochizuki, Teruhito; Murakami, Takamichi

    2017-02-01

    Purpose To assess the ability of fast-kilovolt-peak switching dual-energy computed tomography (CT) by using the multimaterial decomposition (MMD) algorithm to quantify liver fat. Materials and Methods Fifteen syringes that contained various proportions of swine liver obtained from an abattoir, lard in food products, and iron (saccharated ferric oxide) were prepared. Approval of this study by the animal care and use committee was not required. Solid cylindrical phantoms that consisted of a polyurethane epoxy resin 20 and 30 cm in diameter that held the syringes were scanned with dual- and single-energy 64-section multidetector CT. CT attenuation on single-energy CT images (in Hounsfield units) and MMD-derived fat volume fraction (FVF; dual-energy CT FVF) were obtained for each syringe, as were magnetic resonance (MR) spectroscopy measurements by using a 1.5-T imager (fat fraction [FF] of MR spectroscopy). Reference values of FVF (FVFref) were determined by using the Soxhlet method. Iron concentrations were determined by inductively coupled plasma optical emission spectroscopy and divided into three ranges (0 mg per 100 g, 48.1-55.9 mg per 100 g, and 92.6-103.0 mg per 100 g). Statistical analysis included Spearman rank correlation and analysis of covariance. Results Both dual-energy CT FVF (ρ = 0.97; P < .001) and CT attenuation on single-energy CT images (ρ = -0.97; P < .001) correlated significantly with FVFref for phantoms without iron. Phantom size had a significant effect on dual-energy CT FVF after controlling for FVFref (P < .001). The regression slopes for CT attenuation on single-energy CT images in 20- and 30-cm-diameter phantoms differed significantly (P = .015). In sections with higher iron concentrations, the linear coefficients of dual-energy CT FVF decreased and those of MR spectroscopy FF increased (P < .001). Conclusion Dual-energy CT FVF allows for direct quantification of fat content in units of volume percent. Dual-energy CT FVF was larger in 30

  3. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  4. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and application of micro/nanosatellites, there is a need for a small, high-accuracy satellite attitude determination system, because the star trackers widely used on large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor combined with a magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. Then the sun angles can be derived by analyzing the aperture image location on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels.

  5. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and application of micro/nanosatellites, there is a need for a small, high-accuracy satellite attitude determination system, because the star trackers widely used on large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor combined with a magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. Then the sun angles can be derived by analyzing the aperture image location on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
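
    To give a concrete picture of mean-shift style centroiding on a spot image (a simplified stand-in for FMMS, whose multi-point and reliability logic is not reproduced), the sketch below iteratively moves a window to the intensity-weighted centroid of the pixels it covers; the synthetic spot image, window size, and starting guess are assumptions.

      import numpy as np

      def meanshift_centroid(img, start, win=7, iters=20):
          """Move a square window to the intensity-weighted centroid of its pixels
          until it stops shifting; returns a sub-pixel spot centre (row, col)."""
          cy, cx = float(start[0]), float(start[1])
          h = win // 2
          for _ in range(iters):
              y0 = max(int(round(cy)) - h, 0)
              y1 = min(int(round(cy)) + h + 1, img.shape[0])
              x0 = max(int(round(cx)) - h, 0)
              x1 = min(int(round(cx)) + h + 1, img.shape[1])
              patch = img[y0:y1, x0:x1]
              ys, xs = np.mgrid[y0:y1, x0:x1]
              w = patch.sum()
              if w == 0:
                  break
              ny, nx = (ys * patch).sum() / w, (xs * patch).sum() / w
              if abs(ny - cy) < 1e-4 and abs(nx - cx) < 1e-4:
                  return ny, nx
              cy, cx = ny, nx
          return cy, cx

      # Hypothetical sun spot near (20.3, 31.7) on a noisy 64x64 detector.
      yy, xx = np.mgrid[0:64, 0:64]
      img = np.exp(-((yy - 20.3) ** 2 + (xx - 31.7) ** 2) / 4.0) + 0.01 * np.random.rand(64, 64)
      print(meanshift_centroid(img, start=(18, 30)))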

  6. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquake are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
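
    For readers unfamiliar with the approach, the sketch below shows a bare-bones genetic algorithm fitting a slip vector to point displacements through an assumed linear forward model d = G s; the Green's-function matrix, data, and GA settings are synthetic placeholders, not the Genetic Algorithm Slip Estimator itself.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical linear forward model: 8 surface observation points, 12 slip patches.
      G = rng.random((8, 12))
      true_slip = np.abs(rng.normal(1.0, 0.5, 12))
      d_obs = G @ true_slip + rng.normal(0.0, 0.01, 8)

      def misfit(s):
          return np.linalg.norm(G @ s - d_obs)

      def genetic_search(pop_size=60, generations=300, mut=0.05):
          pop = np.abs(rng.normal(1.0, 0.5, (pop_size, 12)))      # non-negative slip candidates
          for _ in range(generations):
              order = np.argsort([misfit(s) for s in pop])
              parents = pop[order[:pop_size // 2]]                 # truncation selection
              children = []
              for _ in range(pop_size - len(parents)):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  mask = rng.random(12) < 0.5                      # uniform crossover
                  child = np.where(mask, a, b) + rng.normal(0.0, mut, 12)
                  children.append(np.clip(child, 0.0, None))       # keep slip non-negative
              pop = np.vstack([parents, np.array(children)])
          return pop[np.argmin([misfit(s) for s in pop])]

      best = genetic_search()
      print("data misfit of best model:", misfit(best))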

  7. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  8. DHARMA - Discriminant hyperplane abstracting residuals minimization algorithm for separating clusters with fuzzy boundaries. [data points pattern recognition technique

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    Learning of discriminant hyperplanes in imperfectly supervised or unsupervised training sample sets with unreliably labeled samples along the fuzzy joint boundaries between sample clusters is discussed, with the discriminant hyperplane designed to be a least-squares fit to the unreliably labeled data points. (Samples along the fuzzy boundary jump back and forth from one cluster to the other in recursive cluster stabilization and are considered unreliably labeled.) Minimization of the distances of these unreliably labeled samples from the hyperplanes does not sacrifice the ability to discriminate between classes represented by reliably labeled subsets of samples. An equivalent unconstrained linear inequality problem is formulated and algorithms for its solution are indicated. Landsat earth sensing data were used in confirming the validity and computational feasibility of the approach, which should be useful in deriving discriminant hyperplanes separating clusters with fuzzy boundaries, given supervised training sample sets with unreliably labeled boundary samples.

  9. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  10. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or differing morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. To enable tree crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic, non-contact measurement based on a terrestrial three-dimensional laser scanner, the FARO Photon120, using a plane scattered data point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments were carried out on 22 common tree species of Beijing, China. The results show that the correlation coefficient of the crown projection area between A(V), calculated by the new method, and A4, obtained by the conventional method, reaches 0.964 (p<0.01); and the correlation coefficient of tree crown volume between V(VC), derived from the new method, and V(C), obtained by the formula of a regular body, is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) at the rate of 8.03%, and the average of A4 is larger than that of A(V) at the rate of 25.5%. Assuming A(V) and V(VC) as true values, the deviations of the new method could be attributed to the irregularity of the crowns' silhouettes. Different morphological characteristics of tree crowns lead to measurement error in forest sample plot surveys. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
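
    As a minimal illustration of the projection-area idea (not the VC++/Matlab implementation of the cited work), the sketch below projects a 3D crown point cloud onto the ground plane and takes the area of its planar convex hull, and reduces the slice-and-accumulate volume step to a sum of per-slice hull areas; the random point cloud and slice thickness are assumptions.

      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(3)
      # Stand-in for a TLS crown point cloud: an ellipsoidal blob centred 6 m above ground.
      crown = rng.normal(0.0, 1.5, (5000, 3)) * [1.0, 1.0, 0.6] + [0.0, 0.0, 6.0]

      # Crown projection area: area of the convex hull of the XY projection
      # (for 2D input, ConvexHull.volume is the enclosed area).
      projection_area = ConvexHull(crown[:, :2]).volume
      print("crown projection area:", projection_area)

      # Crude slice segmentation and accumulation: hull area of each height slice times its thickness.
      z, dz, volume = crown[:, 2], 0.25, 0.0
      for z0 in np.arange(z.min(), z.max(), dz):
          layer = crown[(z >= z0) & (z < z0 + dz), :2]
          if len(layer) >= 10:                     # skip sparse slices near the crown top/bottom
              volume += ConvexHull(layer).volume * dz
      print("crown volume:", volume)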

  11. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    SciTech Connect

    2012-05-31

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.

  12. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-05

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the times elapsed by the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other used for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We found that a GPU designed for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used.
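
    A much-simplified, CPU-only sketch of the grid-based idea follows: evaluate the density on a regular grid, estimate its gradient with finite differences, and flag grid nodes where the gradient norm is small as candidate critical points for later refinement. The Gaussian "density", grid spacing, and thresholds are assumptions, and none of the GPU kernels of the cited work are reproduced.

      import numpy as np

      # Hypothetical electron density: two Gaussian "atoms" on a coarse grid.
      x = np.linspace(-3.0, 3.0, 61)
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
      rho = np.exp(-((X - 1.0) ** 2 + Y ** 2 + Z ** 2)) + np.exp(-((X + 1.0) ** 2 + Y ** 2 + Z ** 2))

      # Finite-difference gradient of the density and its norm at every grid node.
      gx, gy, gz = np.gradient(rho, x, x, x)
      gnorm = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

      # Candidate critical points: small gradient norm, ignoring the low-density far field.
      mask = (gnorm < 2e-2 * gnorm.max()) & (rho > 0.1 * rho.max())
      candidates = np.argwhere(mask)
      print(len(candidates), "candidate grid nodes, first few at coordinates:", x[candidates[:3]])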

  13. Simultaneous determination of free amino acid content in tea infusions by using high-performance liquid chromatography with fluorescence detection coupled with alternating penalty trilinear decomposition algorithm.

    PubMed

    Tan, Fuyuan; Tan, Chao; Zhao, Aiping; Li, Menglong

    2011-10-26

    In this paper, a novel application of alternating penalty trilinear decomposition (APTLD) for high-performance liquid chromatography with fluorescence detection (HPLC-FLD) has been developed to simultaneously determine the contents of free amino acids in tea. Although the spectra of amino acid derivatives were similar and a large number of water-soluble compounds are coextracted, APTLD could predict the accurate concentrations together with reasonable resolution of chromatographic and spectral profiles for the amino acids of interest owing to its "second-order advantage". An additional advantage of the proposed method is lower cost than traditional methods. The results indicate that it is an attractive alternative strategy for the routine resolution and quantification of amino acids in the presence of unknown interferences or when complete separation is not easily achieved.

  14. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION

    EPA Science Inventory

    The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...

  15. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P.; Tits, Andre

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  16. New detection algorithm for dim point moving target in IR-image sequence based on an image frames transformation

    NASA Astrophysics Data System (ADS)

    Mohamed, M. A.; Li, Hongzuo

    2013-09-01

    In this paper we follow the track-before-detect (TBD) concept in order to perform simple, fast and adaptive detection and tracking of dim pixel-size moving targets in an IR image sequence. We present two new algorithms based on an image frame transformation: the first is a recursive algorithm to measure the image background baseline, which helps in assigning an adaptive threshold, while the second is an adaptive recursive statistical spatio-temporal algorithm for detecting and tracking the target. The results of applying the proposed algorithms to a set of frames containing a simple single-pixel target performing a linear motion show high efficiency and validity in detecting the motion and measuring the background baseline.
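
    The sketch below conveys the general recursive idea (not the authors' specific frame transformation): a running estimate of the background baseline and its spread is updated frame by frame, and pixels deviating from the baseline by more than an adaptive threshold are flagged as candidate moving targets. The frame generator, smoothing factor, and threshold multiplier are assumptions.

      import numpy as np

      def detect_dim_targets(frames, alpha=0.05, k=6.0):
          """Recursive background baseline estimate with an adaptive detection threshold."""
          baseline, var, detections = None, None, []
          for frame in frames:
              frame = frame.astype(float)
              if baseline is None:
                  baseline = frame.copy()
                  var = np.full_like(frame, 0.25)              # initial noise-variance guess
              resid = frame - baseline
              detections.append(np.argwhere(np.abs(resid) > k * np.sqrt(var)))
              baseline += alpha * resid                        # recursive baseline update
              var = (1.0 - alpha) * var + alpha * resid ** 2   # recursive spread update
          return detections

      # Hypothetical IR sequence: noisy background plus a single dim pixel moving linearly.
      rng = np.random.default_rng(4)
      frames = []
      for t in range(30):
          f = 100.0 + rng.normal(0.0, 0.5, (64, 64))
          f[10 + t // 2, 5 + t] += 4.0                         # dim, pixel-sized moving target
          frames.append(f)
      print("pixels flagged in the last frame:", detect_dim_targets(frames)[-1].tolist())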

  17. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, Two-step Auto-labeling Conditional Iterative Closed Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: rigid initialization step and non-rigid refinement step. Conditional Iterative Closest Points (CICP) algorithm is given in rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and CICP algorithm with non-rigid thin-plate-spline (TPS) transformation model is introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that proposed TACICP algorithm has achieved an average registration error of less than 0.2mm with no failure case, which is superior to the state-of-the-art feature-based methods. PMID:26881433

  18. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    PubMed

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, Two-step Auto-labeling Conditional Iterative Closed Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: rigid initialization step and non-rigid refinement step. Conditional Iterative Closest Points (CICP) algorithm is given in rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and CICP algorithm with non-rigid thin-plate-spline (TPS) transformation model is introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that proposed TACICP algorithm has achieved an average registration error of less than 0.2mm with no failure case, which is superior to the state-of-the-art feature-based methods.

  19. Revisiting the layout decomposition problem for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu; Yao, Hailong

    2008-10-01

    In double patterning lithography (DPL) layout decomposition for 45nm and below process nodes, two features must be assigned opposite colors (corresponding to different exposures) if their spacing is less than the minimum coloring spacing [5, 11, 14]. However, there exist pattern configurations for which pattern features separated by less than the minimum coloring spacing cannot be assigned different colors. In such cases, DPL requires that a layout feature be split into two parts. We address this problem using a layout decomposition algorithm that incorporates integer linear programming (ILP), phase conflict detection (PCD), and node-deletion bipartization (NDB) methods. We evaluate our approach on both real-world and artificially generated testcases in 45nm technology. Experimental results show that our proposed layout decomposition method effectively decomposes given layouts to satisfy the key goals of minimized line-ends and maximized overlap margin. There are no design rule violations in the final decomposed layout. While we have previously reported other facets of our research on DPL pattern decomposition [6], the present paper differs from that work in the following key respects: (1) instead of detecting conflict cycles and splitting nodes in conflict cycles to achieve graph bipartization [6], we split all nodes of the conflict graph at all feasible dividing points and then formulate a problem of bipartization by ILP, PCD [8] and NDB [9] methods; and (2) instead of reporting unresolvable conflict cycles, we report the number of deleted conflict edges to more accurately capture the needed design changes in the experimental results.
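
    To make the coloring aspect concrete, the toy sketch below two-colors a conflict graph by breadth-first search and reports the conflict edges that cannot be satisfied (edges inside odd cycles); it is only a simplified stand-in for the ILP, PCD, and NDB formulations and does not perform feature splitting. The example graph is hypothetical.

      from collections import deque

      def two_color(n, edges):
          """BFS 2-coloring of a conflict graph; returns node colors and unsatisfied edges."""
          adj = [[] for _ in range(n)]
          for u, v in edges:
              adj[u].append(v)
              adj[v].append(u)
          color = [-1] * n
          for s in range(n):
              if color[s] != -1:
                  continue
              color[s] = 0
              queue = deque([s])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if color[v] == -1:
                          color[v] = 1 - color[u]   # opposite exposure to its neighbor
                          queue.append(v)
          conflicts = [(u, v) for u, v in edges if color[u] == color[v]]
          return color, conflicts

      # Hypothetical conflict graph: a 4-cycle (2-colorable) sharing a node with a triangle (not).
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4), (4, 5), (5, 3)]
      colors, conflicts = two_color(6, edges)
      print("colors:", colors, "remaining conflict edges:", conflicts)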

  20. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources to capture actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can heavily enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances of corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. To this end, an agriculturally utilized field was surveyed twice, simultaneously with terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that merging the data would provide a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to achieve the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration and therefore
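
    For reference, a bare-bones point-to-point ICP iteration is sketched below (nearest-neighbour matching followed by an SVD-based rigid transform, repeated until the error stalls); it omits the point selection, weighting, and rejection variants discussed above, and the synthetic clouds are placeholders for real TLS/UAV data.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_point_to_point(src, dst, iters=50, tol=1e-8):
          """Rigidly align src to dst; returns (R, t, transformed src)."""
          src = src.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          tree = cKDTree(dst)
          prev_err = np.inf
          for _ in range(iters):
              _, idx = tree.query(src)                  # nearest-neighbour correspondences
              matched = dst[idx]
              mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
              H = (src - mu_s).T @ (matched - mu_d)     # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:                  # guard against reflections
                  Vt[-1] *= -1.0
                  R = Vt.T @ U.T
              t = mu_d - R @ mu_s
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
              err = np.mean(np.linalg.norm(src - matched, axis=1))
              if abs(prev_err - err) < tol:
                  break
              prev_err = err
          return R_total, t_total, src

      # Hypothetical clouds: dst is a rotated and shifted copy of src with slight noise.
      rng = np.random.default_rng(5)
      src = rng.random((500, 3))
      a = np.deg2rad(10.0)
      R_true = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
      dst = src @ R_true.T + np.array([0.2, -0.1, 0.05]) + rng.normal(0.0, 1e-3, (500, 3))
      R_est, t_est, aligned = icp_point_to_point(src, dst)
      print(np.round(R_est, 3), np.round(t_est, 3))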

  1. Fixed-point single-precision estimation. [Kalman filtering for NASA Standard Spacecraft Computer orbit determination algorithm

    NASA Technical Reports Server (NTRS)

    Thompson, E. H.; Farrell, J. L.

    1976-01-01

    Monte Carlo simulation of autonomous orbit determination has validated the use of an 18-bit NASA Standard Spacecraft Computer (NSSC) for the extended Kalman filter. Dimensionally consistent scales are chosen for all variables in the algorithm, such that nearly all of the onboard computation can be performed in single precision without matrix square root formulations. Allowable simplifications in algorithm implementation and practical means of ensuring convergence are verified for accuracies of a few km provided by star/vertical observations

  2. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because environment stiffness affects the system formed when the robot end-effector contacts the environment, an adaptive generalized predictive control algorithm based on quantitative feedback theory is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to control the uncertainty of the system. An adaptive GPC algorithm is used to design the external loop controller to improve the performance and the robustness of the system. The two closed loops used in the design approach realize the system's performance and improve the robustness. The simulation results show that the algorithm for the robot end-effector contact force control system is effective.

  3. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

    A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first order kinetics. Recommended values for the Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases and for decomposition of RDX in solution in TNT are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
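
    For orientation, the Arrhenius expression behind these parameters and a small numerical extrapolation are shown below; the activation energy and frequency factor are round illustrative numbers only, not the recommended HMX/RDX values of the review.

      import numpy as np

      def arrhenius(T, A, Ea):
          """First-order rate constant k(T) = A * exp(-Ea / (R * T))."""
          R = 8.314  # J mol^-1 K^-1
          return A * np.exp(-Ea / (R * T))

      A, Ea = 1.0e16, 2.0e5             # s^-1 and J/mol, order-of-magnitude placeholders
      for T in (450.0, 550.0, 800.0):   # below melting, near melting, combustion-like
          print(T, "K ->", arrhenius(T, A, Ea), "s^-1")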

  4. The Complexity of Standing Postural Control in Older Adults: A Modified Detrended Fluctuation Analysis Based upon the Empirical Mode Decomposition Algorithm

    PubMed Central

    Liu, Dongdong; Hu, Kun; Zhang, Jue; Fang, Jing

    2013-01-01

    Human aging into senescence diminishes the capacity of the postural control system to adapt to the stressors of everyday life. Diminished adaptive capacity may be reflected by a loss of the fractal-like, multiscale complexity within the dynamics of standing postural sway (i.e., center-of-pressure, COP). We therefore studied the relationship between COP complexity and adaptive capacity in 22 older and 22 younger healthy adults. COP magnitude dynamics were assessed from raw data during quiet standing with eyes open and closed, and complexity was quantified with a new technique termed empirical mode decomposition embedded detrended fluctuation analysis (EMD-DFA). Adaptive capacity of the postural control system was assessed with the sharpened Romberg test. As compared to traditional DFA, EMD-DFA more accurately identified trends in COP data with intrinsic scales and produced short and long-term scaling exponents (i.e., αShort, αLong) with greater reliability. The fractal-like properties of COP fluctuations were time-scale dependent and highly complex (i.e., αShort values were close to one) over relatively short time scales. As compared to younger adults, older adults demonstrated lower short-term COP complexity (i.e., greater αShort values) in both visual conditions (p>0.001). Closing the eyes decreased short-term COP complexity, yet this decrease was greater in older compared to younger adults (p<0.001). In older adults, those with higher short-term COP complexity exhibited better adaptive capacity as quantified by Romberg test performance (r2 = 0.38, p<0.001). These results indicate that an age-related loss of COP complexity of magnitude series may reflect a clinically important reduction in postural control system functionality as a new biomarker. PMID:23650518
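
    A compact detrended fluctuation analysis routine is sketched below to show how a scaling exponent alpha is extracted from a fluctuation series; it performs the plain DFA step only (the empirical mode decomposition embedding of the cited method is not included), and the test signals are synthetic.

      import numpy as np

      def dfa_alpha(x, scales):
          """DFA scaling exponent from a log-log fit of F(n) against window size n."""
          y = np.cumsum(x - np.mean(x))                # integrated profile
          F = []
          for n in scales:
              m = len(y) // n
              segments = y[: m * n].reshape(m, n)
              t = np.arange(n)
              sq = []
              for seg in segments:
                  coef = np.polyfit(t, seg, 1)         # linear detrending in each window
                  sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(sq)))
          slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
          return slope

      # White noise should give alpha near 0.5; its cumulative sum (a Brownian path) near 1.5.
      rng = np.random.default_rng(6)
      noise = rng.standard_normal(5000)
      scales = np.unique(np.logspace(1.0, 2.5, 12).astype(int))
      print(dfa_alpha(noise, scales), dfa_alpha(np.cumsum(noise), scales))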

  5. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
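
    The core iteration is the quadratically convergent Newton step X_{k+1} = (X_k + X_k^{-H})/2, whose fixed point is the unitary polar factor. The sketch below covers only square, nonsingular A; the paper's algorithm additionally uses a preliminary complete orthogonal decomposition for general A and an adaptive switch to the multiplication-rich Kovarik and Bjorck-Bowie iteration.

        import numpy as np

        def polar_newton(A, tol=1e-12, max_iter=100):
            """Newton iteration for the polar decomposition A = U H of a square,
            nonsingular matrix A (simplified sketch without scaling)."""
            X = A.astype(complex)
            for _ in range(max_iter):
                X_new = 0.5 * (X + np.linalg.inv(X).conj().T)
                done = np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro')
                X = X_new
                if done:
                    break
            U = X                              # unitary polar factor
            H = U.conj().T @ A                 # Hermitian positive semi-definite factor
            return U, 0.5 * (H + H.conj().T)   # symmetrize H against round-off

        A = np.random.rand(4, 4)
        U, H = polar_newton(A)
        print(np.linalg.norm(A - U @ H))       # ~ machine precision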

  6. A new damping factor algorithm based on line search of the local minimum point for inverse approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping

    2013-05-01

    The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and computational efficiency, is proposed; the computer program is then implemented and tested on Siemens PLM NX | One-Step. The results are compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.
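
    For reference, the classical Armijo (backtracking) rule used as the baseline in the comparison can be sketched as follows. This is a generic textbook version under simple assumptions, not the proposed damping-factor selection algorithm based on line search of the local minimum point.

        import numpy as np

        def armijo_step(f, grad_f, x, d, alpha0=1.0, c=1e-4, rho=0.5, max_backtracks=30):
            """Shrink the step length until the sufficient-decrease condition
            f(x + a d) <= f(x) + c * a * g.d holds along search direction d."""
            fx, g = f(x), grad_f(x)
            a = alpha0
            for _ in range(max_backtracks):
                if f(x + a * d) <= fx + c * a * np.dot(g, d):
                    return a
                a *= rho
            return a

        f = lambda x: np.sum(x ** 2)
        grad_f = lambda x: 2 * x
        x = np.array([3.0, -2.0])
        print(armijo_step(f, grad_f, x, d=-grad_f(x)))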

  7. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  8. Domain Decomposition for the SPN Solver MINOS

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques

    2012-12-01

    In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nédélec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3® code.

  9. Tiling Models for Spatial Decomposition in AMTRAN

    SciTech Connect

    Compton, J C; Clouse, C J

    2005-05-27

    Effective spatial domain decomposition for discrete ordinate (S{sub n}) neutron transport calculations has been critical for exploiting massively parallel architectures typified by the ASCI White computer at Lawrence Livermore National Laboratory. A combination of geometrical and computational constraints has posed a unique challenge as problems have been scaled up to several thousand processors. Carefully scripted decomposition and corresponding execution algorithms have been developed to handle a range of geometrical and hardware configurations.

  10. Domain decomposition for the SPN solver MINOS

    SciTech Connect

    Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques

    2012-07-01

    In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nedelec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3 (R) code. (authors)

  11. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, namely the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
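
    Both algorithms start from a low-rank compression of the exponential kernel. A minimal truncated-SVD sketch is shown below; the adaptive truncation and the RRQR and Bunch-Kaufman-Parlett alternatives discussed in the paper are not reproduced here.

        import numpy as np

        def truncated_svd(K, rank):
            """Truncated SVD K ~= U_r diag(s_r) V_r^T used to compress an
            ill-conditioned kernel matrix before inversion."""
            U, s, Vt = np.linalg.svd(K, full_matrices=False)
            return U[:, :rank], s[:rank], Vt[:rank, :]

        # Exponential kernel K[i, j] = exp(-t_i / T2_j) decays rapidly in singular
        # values, so a small rank already captures it to high accuracy.
        t = np.linspace(1e-3, 1.0, 200)[:, None]
        T2 = np.logspace(-3, 0, 100)[None, :]
        K = np.exp(-t / T2)
        Ur, sr, Vtr = truncated_svd(K, rank=10)
        print(np.linalg.norm(K - (Ur * sr) @ Vtr) / np.linalg.norm(K))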

  12. An algorithm for approximating the L * invariant coordinate from the real-time tracing of one magnetic field line between mirror points

    NASA Astrophysics Data System (ADS)

    Lejosne, Solène

    2014-08-01

    The L * invariant coordinate depends on the global electromagnetic field topology at a given instant, and the standard method for its determination requires a computationally expensive drift contour tracing. This fact makes L * a cumbersome parameter to handle. In this paper, we provide new insights on the L * parameter, and we introduce an algorithm for an L * approximation that only requires the real-time tracing of one magnetic field line between mirror points. This approximation is based on the description of the variation of the magnetic field mirror intensity after an adiabatic dipolarization, i.e., after the nondipolar components of a magnetic field have been turned off with a characteristic time very long in comparison with the particles' drift periods. The corresponding magnetic field topological variations are deduced, assuming that the field line foot points remain rooted in the Earth's surface, and the drift average operator is replaced with a computationally cheaper circular average operator. The algorithm results in a maximum relative difference of 12% between the approximate L * and the output obtained using the International Radiation Belt Environment Modeling library, in the case of the Tsyganenko 89 model for the external magnetic field (T89). This margin of error is similar to the margin of error due to small deviations between different magnetic field models at geostationary orbit. This approximate L * algorithm therefore represents a reasonable compromise between computational speed and accuracy, of particular interest for real-time space weather forecast purposes.

  13. [Prenatal risk calculation: comparison between Fast Screen pre I plus software and ViewPoint software. Evaluation of the risk calculation algorithms].

    PubMed

    Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk

    2013-01-01

    The Fetal medicine foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) to evaluate Down syndrome screening based on free hCGβ, PAPP-A and nuchal translucency. The peculiarity of this algorithm is to use the degree of extremeness (DoE) instead of the multiple of the median (MoM). Biologists measuring maternal serum markers on Kryptor™ machines (Thermo Fisher Scientific) use the Fast Screen pre I plus software for the prenatal risk calculation. This software integrates the PRC algorithm. Our study evaluates the data of 2,092 patient files, of which 19 show a fetal abnormality. These files were first evaluated with the ViewPoint software, which is based on MoM. The link between DoE and MoM has been analyzed and the different calculated risks compared. The study shows that the Fast Screen pre I plus software gives the same risk results as the ViewPoint software, but yields significantly fewer false positive results.

  14. Quantitative analysis of triazine herbicides in environmental samples by using high performance liquid chromatography and diode array detection combined with second-order calibration based on an alternating penalty trilinear decomposition algorithm.

    PubMed

    Li, Yuan-Na; Wu, Hai-Long; Qing, Xiang-Dong; Li, Quan; Li, Shu-Fang; Fu, Hai-Yan; Yu, Yong-Jie; Yu, Ru-Qin

    2010-09-23

    A novel application of a second-order calibration method based on an alternating penalty trilinear decomposition (APTLD) algorithm is presented to treat data from high performance liquid chromatography-diode array detection (HPLC-DAD). The method makes it possible to accurately and reliably analyze atrazine (ATR), ametryn (AME) and prometryne (PRO) contents in soil, river sediment and wastewater samples. Satisfactory results are obtained although the elution and spectral profiles of the analytes are heavily overlapped with the background in environmental samples. The obtained average recoveries for ATR, AME and PRO are 99.7±1.5, 98.4±4.7 and 97.0±4.4% in soil samples, 100.1±3.2, 100.7±3.4 and 96.4±3.8% in river sediment samples, and 100.1±3.5, 101.8±4.2 and 101.4±3.6% in wastewater samples, respectively. Furthermore, the accuracy and precision of the proposed method are evaluated with the elliptical joint confidence region (EJCR) test. The work opens a new avenue for the quantitative determination of herbicides in environmental samples with a simple pretreatment procedure and provides a scientific basis for improved environmental management through a better understanding of the wastewater-soil-river sediment system as a whole.

  15. Adaptive neuro-fuzzy inference system multi-objective optimization using the genetic algorithm/singular value decomposition method for modelling the discharge coefficient in rectangular sharp-crested side weirs

    NASA Astrophysics Data System (ADS)

    Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed

    2016-06-01

    In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS results section (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined in five different models to conduct sensitivity analysis by applying the above-mentioned dimensionless parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs.

  16. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  17. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    PubMed

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and a reliable probability estimate, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this method is invalid when the information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters including source strength and location were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrixes showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimation results for the different source parameters are close to each other with different regularization matrixes. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulation and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method. The confidence intervals from linear Tikhonov-PSO are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the PSO algorithm alone, and a reasonable confidence interval with some probability levels can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission source identification.
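
    With the source location fixed, the linear source-strength step reduces to zero-order Tikhonov regularization, sketched below with a hypothetical toy dispersion matrix. The paper wraps this step inside a PSO search over the unknown location and selects the regularization parameter with the L-curve method.

        import numpy as np

        def tikhonov_solve(A, b, lam, L=None):
            """Minimize ||A q - b||^2 + lam^2 ||L q||^2 via the normal equations;
            L = identity gives the zero-order regularization matrix."""
            n = A.shape[1]
            L = np.eye(n) if L is None else L
            return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

        # Hypothetical toy matrix mapping source strengths to sensor readings
        rng = np.random.default_rng(0)
        A = rng.random((50, 5))
        q_true = np.array([2.0, 0.0, 1.0, 0.0, 0.5])
        b = A @ q_true + 0.01 * rng.standard_normal(50)
        print(tikhonov_solve(A, b, lam=0.1))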

  18. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    NASA Astrophysics Data System (ADS)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP), which depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) have been developed and implemented in solar power electronic controllers to increase the efficiency of the electricity production originating from renewables. In this paper we use the Matlab tool Simulink to compare two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.

  19. Algorithms for Collision Detection Between a Point and a Moving Polygon, with Applications to Aircraft Weather Avoidance

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Hagen, George

    2016-01-01

    This paper proposes mathematical definitions of functions that can be used to detect future collisions between a point and a moving polygon. The intended application is weather avoidance, where the given point represents an aircraft and bounding polygons are chosen to model regions with bad weather. Other applications could possibly include avoiding other moving obstacles. The motivation for the functions presented here is safety, and therefore they have been proved to be mathematically correct. The functions are being developed for inclusion in NASA's Stratway software tool, which allows low-fidelity air traffic management concepts to be easily prototyped and quickly tested.
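
    As a rough illustration of the underlying geometric test (not the formally verified functions developed for Stratway), a conflict between a point moving at constant velocity and a polygon translating at constant velocity can be detected by sampling the look-ahead horizon with a standard ray-crossing point-in-polygon test; the function names below are hypothetical.

        def point_in_polygon(p, verts):
            """Ray-crossing (even-odd) test; verts is a list of (x, y) vertices."""
            x, y = p
            inside = False
            n = len(verts)
            for i in range(n):
                x1, y1 = verts[i]
                x2, y2 = verts[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x_cross > x:
                        inside = not inside
            return inside

        def detect_conflict(p, p_vel, verts, verts_vel, horizon, dt=1.0):
            """Sample the look-ahead horizon and report the first time the moving
            point lies inside the polygon translating with velocity verts_vel."""
            t = 0.0
            while t <= horizon:
                pt = (p[0] + p_vel[0] * t, p[1] + p_vel[1] * t)
                moved = [(vx + verts_vel[0] * t, vy + verts_vel[1] * t) for vx, vy in verts]
                if point_in_polygon(pt, moved):
                    return True, t
                t += dt
            return False, None

        square = [(10.0, -1.0), (12.0, -1.0), (12.0, 1.0), (10.0, 1.0)]
        print(detect_conflict((0.0, 0.0), (1.0, 0.0), square, (0.0, 0.0), horizon=20.0))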

  20. Frequency-domain endoscopic diffuse optical tomography reconstruction algorithm based on dual-modulation-frequency and dual-points source diffuse equation

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Hou, Qiang; Zhao, Huijuan; Yang, Yanshuang; Zhou, Xiaoqing; Gao, Feng

    2013-03-01

    In this paper, a frequency-domain endoscopic diffuse optical tomography image reconstruction algorithm based on the dual-modulation-frequency and dual-points source diffuse equation is investigated for the reconstruction of the optical parameters, including the absorption and reduced scattering coefficients. The forward problem is solved by the finite element method based on the frequency-domain diffuse equation (FD-DE) with a dual-points source approximation and multiple modulation frequencies. In the image reconstruction, a multi-modulation-frequency Newton-Raphson algorithm is applied to obtain the solution. To further improve the image accuracy and quality, a method based on the region of interest (ROI) is applied to the above procedures. A simulation is performed in a tubular model to verify the validity of the algorithm. Results show that the FD-DE with the dual-points source approximation is more accurate at shorter source-detector separations. The reconstruction with dual modulation frequencies improves the image accuracy and quality compared to the results with the single-modulation-frequency and triple-modulation-frequency methods. The peak optical coefficients in the ROI (ROI_max) are almost equivalent to the true optical coefficients, with a relative error of less than 6.67%. The full width at half maximum (FWHM) achieves 82% of the true radius. The contrast-to-noise ratio (CNR) and image coefficient (IC) are 5.678 and 26.962, respectively. Additionally, the results with the ROI-based method show that the ROI_max is equivalent to the true value, the FWHM improves to 88% of the true radius, and the CNR and IC improve to over 7.782 and 45.335, respectively.

  1. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    SciTech Connect

    Inoue, Minoru; Yoshimura, Michio; Sato, Sayaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Hirata, Kimiko; Ogura, Masakazu; Hiraoka, Masahiro; Sasaki, Makoto; Fujimoto, Takahiro

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, the manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.
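
    Point-based rigid-body registration of matched fiducial sets has a closed-form least-squares solution via the SVD. The sketch below is a generic illustration of such an algorithm, not the clinical software evaluated in the study; the printed residual plays the role of a fiducial registration error.

        import numpy as np

        def rigid_register(P, Q):
            """Least-squares rotation + translation mapping matched fiducials P -> Q
            (rows are points), via the SVD (Kabsch) solution."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                  # cross-covariance of centered sets
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            fre = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))  # residual RMS
            return R, t, fre

        P = np.random.rand(5, 3)                       # planned clip positions
        theta = np.deg2rad(3.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])  # daily clip positions
        print(rigid_register(P, Q)[2])                 # ~0 for noise-free data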

  2. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  3. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  4. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and also on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better in denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  5. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement was improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real-time estimates of tumor rotation.
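
    A minimal sketch of rigid ICP, iterating nearest-neighbour matching and the closed-form SVD update, is given below for illustration; the clinical implementation operates on the three implanted markers per kV image and reports rotations about the RL, AP and SI axes.

        import numpy as np

        def icp_rigid(P, Q, n_iter=30):
            """Match each point of the moving set P to its nearest neighbour in Q,
            then refine the rigid transform with the closed-form SVD solution."""
            dim = P.shape[1]
            R, t = np.eye(dim), np.zeros(dim)
            for _ in range(n_iter):
                moved = P @ R.T + t
                # brute-force nearest-neighbour correspondences
                nn = np.argmin(((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
                cP, cQ = moved.mean(0), Q[nn].mean(0)
                H = (moved - cP).T @ (Q[nn] - cQ)
                U, _, Vt = np.linalg.svd(H)
                D = np.eye(dim)
                D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
                dR = Vt.T @ D @ U.T
                R, t = dR @ R, dR @ t + (cQ - dR @ cP)
            return R, t

        # Small perturbation so the nearest-neighbour matching starts close to correct;
        # like any ICP, the sketch needs a reasonable initial alignment.
        rng = np.random.default_rng(0)
        Q = rng.random((40, 3))
        angle = np.deg2rad(3.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0, 0.0, 1.0]])
        P = (Q - np.array([0.01, -0.02, 0.005])) @ R_true   # so that R_true P + t ~ Q
        R_est, t_est = icp_rigid(P, Q)
        print(np.abs(P @ R_est.T + t_est - Q).max())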

  6. Chemometrics-enhanced high performance liquid chromatography-diode array detection strategy for simultaneous determination of eight co-eluted compounds in ten kinds of Chinese teas using second-order calibration method based on alternating trilinear decomposition algorithm.

    PubMed

    Yin, Xiao-Li; Wu, Hai-Long; Gu, Hui-Wen; Zhang, Xiao-Hua; Sun, Yan-Mei; Hu, Yong; Liu, Lu; Rong, Qi-Ming; Yu, Ru-Qin

    2014-10-17

    In this work, an attractive chemometrics-enhanced high performance liquid chromatography-diode array detection (HPLC-DAD) strategy was proposed for the simultaneous and fast determination of eight co-eluted compounds, including gallic acid, caffeine and six catechins, in ten kinds of Chinese teas by using a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm. This new strategy proved to be a useful tool for handling the co-eluted peaks, uncalibrated interferences and baseline drifts existing in the process of chromatographic separation, which benefited from the "second-order advantages", making the determination of gallic acid, caffeine and six catechins in tea infusions possible within 8 min under a simple mobile phase condition. The average recoveries of the analytes in two selected tea samples ranged from 91.7 to 103.1%, with standard deviations (SD) ranging from 1.9 to 11.9%. Figures of merit including sensitivity (SEN), selectivity (SEL), root-mean-square error of prediction (RMSEP) and limit of detection (LOD) have been calculated to validate the accuracy of the proposed method. To further confirm the reliability of the method, a multiple reaction monitoring (MRM) method based on LC-MS/MS was employed for comparison and the obtained results of both methods were consistent with each other. Furthermore, as a universal strategy, this new analytical method was applied to the determination of gallic acid, caffeine and catechins in several other kinds of Chinese teas, including different levels and varieties. Finally, based on the quantitative results, principal component analysis (PCA) was used to conduct a cluster analysis of these Chinese teas. The green tea, Oolong tea and Pu-erh raw tea samples were classified successfully. All results demonstrated that the proposed method is accurate, sensitive, fast, universal and well suited for the rapid, routine analysis and discrimination of gallic acid, caffeine and catechins in Chinese teas.

  7. A new eddy-covariance method using empirical mode decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We introduce a new eddy-covariance method that uses a spectral decomposition algorithm called empirical mode decomposition. The technique is able to calculate contributions to near-surface fluxes from different periodic components. Unlike traditional Fourier methods, this method allows for non-ortho...

  8. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  9. AUTONOMOUS GAUSSIAN DECOMPOSITION

    SciTech Connect

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Dickey, John

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  10. Autonomous Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-04-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  11. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.

  12. Retrieval of Knowledge through Algorithmic Decomposition

    DTIC Science & Technology

    1990-06-01

    ... Effective querying of the system in the latter case requires a careful structuring of the user's information requirements, the absence of which can lead ... intuitively divine an estimate that seems reasonable in light of whatever knowledge comes to mind. This wholistic approach to estimation relies ...

  13. Application of the nonlinear time series prediction method of genetic algorithm for forecasting surface wind of point station in the South China Sea with scatterometer observations

    NASA Astrophysics Data System (ADS)

    Zhong, Jian; Dong, Gang; Sun, Yimei; Zhang, Zhaoyang; Wu, Yuqin

    2016-11-01

    The present work reports the development of a nonlinear time series prediction method combining a genetic algorithm (GA) with singular spectrum analysis (SSA) for forecasting the surface wind at a point station in the South China Sea (SCS) with scatterometer observations. Before the nonlinear GA technique is used for forecasting the time series of surface wind, SSA is applied to reduce the noise. The surface wind speed and surface wind components from scatterometer observations at three locations in the SCS have been used to develop and test the technique. The predictions have been compared with persistence forecasts in terms of root mean square error. The surface wind predictions with GA and SSA made up to four days (longer for some point stations) in advance have been found to be significantly superior to those made by the persistence model. This method can serve as a cost-effective alternative prediction technique for forecasting the surface wind at a point station in the SCS basin. Project supported by the National Natural Science Foundation of China (Grant Nos. 41230421 and 41605075) and the National Basic Research Program of China (Grant No. 2013CB430101).

  14. Interpolatory fixed-point algorithm for an efficient computation of TE and TM modes in arbitrary 1D structures at oblique incidence

    NASA Astrophysics Data System (ADS)

    Pérez Molina, Manuel; Francés Monllor, Jorge; Álvarez López, Mariela; Neipp López, Cristian; Carretero López, Luis

    2010-05-01

    We develop the Interpolatory Fixed-Point Algorithm (IFPA) to compute efficiently the TE and TM reflectance and transmittance coefficients for arbitrary 1D structures at oblique incidence. For this purpose, we demonstrate that the semi-analytical solutions of the Helmholtz equation provided by the fixed-point method have a polynomial dependence on variables related to the essential electromagnetic parameters (incidence angle and wavelength), which allows a drastic simplification of the required calculations by taking advantage of interpolation at a few parameter values. The first step in developing the IFPA consists of stating the Helmholtz equation and boundary conditions for TE and TM plane waves incident on a 1D finite slab with an arbitrary permittivity profile surrounded by two homogeneous media. The Helmholtz equation and boundary conditions are then transformed into a second-order initial value problem which is written in terms of transfer matrices. By applying the fixed-point method, the coefficients of such transfer matrices are obtained as polynomials in several variables that can be characterized by a reduced set of interpolating parameters. We apply the IFPA to specific examples of 1D diffraction gratings, optical rugate filters and quasi-periodic structures, for which precise solutions for the TE and TM modes are efficiently obtained by computing fewer than 20 interpolating parameters.
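
    For context, the quantity the IFPA targets, TE (and analogously TM) reflectance of a 1D structure at oblique incidence, is conventionally obtained with the characteristic (transfer) matrix method sketched below. This is the standard reference computation, not the IFPA itself, which replaces repeated evaluations by polynomial interpolation over incidence angle and wavelength.

        import numpy as np

        def te_reflectance(n0, layers, ns, wavelength, theta0):
            """TE reflectance of a stack of homogeneous layers between incidence
            medium n0 and substrate ns; layers is a list of (refractive index,
            thickness) pairs, theta0 the incidence angle in radians."""
            k0 = 2.0 * np.pi / wavelength
            s = n0 * np.sin(theta0)                   # Snell invariant n sin(theta)
            eta0 = n0 * np.cos(theta0)                # TE admittances (free-space units)
            etas = np.sqrt(ns ** 2 - s ** 2 + 0j)
            M = np.eye(2, dtype=complex)
            for n, d in layers:
                ncos = np.sqrt(n ** 2 - s ** 2 + 0j)  # n cos(theta) inside the layer
                delta = k0 * d * ncos                 # phase thickness
                M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / ncos],
                                  [1j * ncos * np.sin(delta), np.cos(delta)]])
            B, C = M @ np.array([1.0, etas])
            r = (eta0 * B - C) / (eta0 * B + C)
            return abs(r) ** 2

        # Quarter-wave MgF2-like coating on glass at 550 nm, 30 degrees (illustration)
        print(te_reflectance(1.0, [(1.38, 550e-9 / (4 * 1.38))], 1.52, 550e-9, np.deg2rad(30)))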

  15. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the features of applying an algorithm that uses decomposition methods to the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. The use of decomposition reduces the volume of calculations, in particular because it opens up the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a well-known data set for the binary classification problem.

  16. Image encryption using P-Fibonacci transform and decomposition

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Agaian, Sos; Chen, C. L. Philip

    2012-03-01

    Image encryption is an effective method to protect images or videos by transferring them into unrecognizable formats for different security purposes. To improve the security level of bit-plane decomposition based encryption approaches, this paper introduces a new image encryption algorithm by using a combination of parametric bit-plane decomposition along with bit-plane shuffling and resizing, pixel scrambling and data mapping. The algorithm utilizes the Fibonacci P-code for image bit-plane decomposition and the 2D P-Fibonacci transform for image encryption because they are parameter dependent. Any new or existing method can be used for shuffling the order of the bit-planes. Simulation analysis and comparisons are provided to demonstrate the algorithm's performance for image encryption. Security analysis shows the algorithm's ability against several common attacks. The algorithm can be used to encrypt images, biometrics and videos.
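
    As a simplified illustration of the bit-plane decomposition step (using the ordinary binary decomposition rather than the parameter-dependent Fibonacci P-code of the paper), an 8-bit image can be split into planes and rebuilt after shuffling as follows.

        import numpy as np

        def bit_planes(img):
            """Decompose an 8-bit grayscale image into its 8 binary bit-planes."""
            img = np.asarray(img, dtype=np.uint8)
            return np.stack([(img >> k) & 1 for k in range(8)], axis=0)  # plane 0 = LSB

        def reconstruct(planes, order=None):
            """Rebuild the image; a permuted `order` mimics the bit-plane shuffling step."""
            order = list(range(8)) if order is None else order
            total = sum(planes[p].astype(np.uint16) << k for k, p in enumerate(order))
            return total.astype(np.uint8)

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
        planes = bit_planes(img)
        print(np.array_equal(reconstruct(planes), img))   # True with the identity order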

  17. Decomposition of Sodium Tetraphenylborate

    SciTech Connect

    Barnes, M.J.

    1998-11-20

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.

  18. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  19. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  20. Anisotropic finite strain viscoelasticity based on the Sidoroff multiplicative decomposition and logarithmic strains

    NASA Astrophysics Data System (ADS)

    Latorre, Marcos; Montáns, Francisco Javier

    2015-09-01

    In this paper a purely phenomenological formulation and finite element numerical implementation for quasi-incompressible transversely isotropic and orthotropic materials is presented. The stored energy is composed of distinct anisotropic equilibrated and non-equilibrated parts. The nonequilibrated strains are obtained from the multiplicative decomposition of the deformation gradient. The procedure can be considered as an extension of the Reese and Govindjee framework to anisotropic materials and reduces to such formulation for isotropic materials. The stress-point algorithmic implementation is based on an elastic-predictor viscous-corrector algorithm similar to that employed in plasticity. The consistent tangent moduli for the general anisotropic case are also derived. Numerical examples explain the procedure to obtain the material parameters, show the quadratic convergence of the algorithm and usefulness in multiaxial loading. One example also highlights the importance of prescribing a complete set of stress-strain curves in orthotropic materials.

  1. Analysis and Application of LIDAR Waveform Data Using a Progressive Waveform Decomposition Method

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Zhang, Z.; Hu, X.; Li, Z.

    2011-09-01

    Because of the rich information contained in the full waveform of airborne LiDAR (light detection and ranging) data, full-waveform analysis has been an active area in LiDAR applications. It is possible to digitally sample and store the entire reflected waveform of small-footprint systems instead of only discrete point clouds. Decomposition of waveform data, a key step in waveform data analysis, can be categorized into two typical approaches: 1) Gaussian modelling methods such as the non-linear least-squares (NLS) algorithm and maximum likelihood estimation using the Expectation Maximization (EM) algorithm; 2) pulse detection methods such as the Average Square Difference Function (ASDF). However, the Gaussian modelling methods rely strongly on initial parameters, whereas the ASDF omits the important parameter information of the waveform. In this paper, we propose a fast algorithm, the Progressive Waveform Decomposition (PWD) method, to extract local maxima, fit the echoes with Gaussian functions, and calculate other parameters from the raw waveform data. On the one hand, experiments are conducted to evaluate the PWD method and the results demonstrate its robustness and efficiency. On the other hand, with the PWD parametric analysis of the full waveform instead of a 3D point cloud, some special applications are investigated afterward.
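
    A generic sketch in the spirit of the PWD idea, taking local maxima as initial echo positions and then least-squares fitting a sum of Gaussians, is shown below; it is an illustration, not the paper's progressive algorithm.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.signal import find_peaks

        def gaussians(t, *params):
            """Sum of Gaussian echoes; params = (A1, mu1, s1, A2, mu2, s2, ...)."""
            y = np.zeros_like(t)
            for A, mu, s in zip(params[0::3], params[1::3], params[2::3]):
                y = y + A * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
            return y

        def decompose_waveform(t, w, prominence=0.05):
            """Local maxima give initial echo positions, then a least-squares fit
            of a sum of Gaussians refines amplitude, position and width."""
            peaks, _ = find_peaks(w, prominence=prominence * w.max())
            p0 = []
            for p in peaks:
                p0 += [w[p], t[p], (t[1] - t[0]) * 3.0]   # crude initial A, mu, sigma
            popt, _ = curve_fit(gaussians, t, w, p0=p0)
            return np.array(popt).reshape(-1, 3)          # rows: (A, mu, sigma) per echo

        t = np.linspace(0, 100, 400)
        w = gaussians(t, 1.0, 30.0, 3.0, 0.6, 55.0, 4.0) + 0.01 * np.random.randn(t.size)
        print(decompose_waveform(t, w))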

  2. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart.

  3. Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts.

    PubMed

    Pontifex, Matthew B; Gwizdala, Kathryn L; Parks, Andrew C; Billinger, Martin; Brunner, Clemens

    2017-03-01

    Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies.
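
    A minimal sketch of the decompose/zero/back-project workflow examined in the paper is shown below, using scikit-learn's FastICA on synthetic data; the random_state argument is the kind of setting whose variation gives rise to the ICA uncertainty studied here. This is an illustration, not the authors' processing pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        def remove_component(X, comp_idx, n_components=None, random_state=0):
            """Decompose multichannel data (samples x channels), zero the component
            judged to be the artifact, and back-project the remaining sources."""
            ica = FastICA(n_components=n_components, random_state=random_state, max_iter=1000)
            S = ica.fit_transform(X)          # estimated sources
            S[:, comp_idx] = 0.0              # drop the artifact component
            return ica.inverse_transform(S)   # reconstructed "cleaned" channel data

        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 2000)
        sources = np.c_[np.sin(8 * t), np.sign(np.sin(3 * t)), rng.standard_normal(t.size)]
        X = sources @ rng.random((3, 4))      # mix 3 sources into 4 "channels"
        X_clean = remove_component(X, comp_idx=0, n_components=3)
        print(X_clean.shape)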

  4. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
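
    A rough sketch of the two-stage idea described in the abstract, assuming the simple formulation given there (QR decomposition of the class-centroid matrix, then classical LDA in the reduced space), is shown below; it is an illustration rather than the authors' implementation.

        import numpy as np

        def lda_qr(X, y, reg=1e-8):
            """Stage 1: project the data onto the Q factor of the centroid matrix.
            Stage 2: solve the classical LDA eigenproblem in that small space."""
            classes = np.unique(y)
            C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # d x k centroids
            Q, _ = np.linalg.qr(C)                 # d x k orthonormal basis
            Z = X @ Q                              # projected samples, n x k
            mu = Z.mean(axis=0)
            k = len(classes)
            Sw, Sb = np.zeros((k, k)), np.zeros((k, k))
            for c in classes:
                Zc = Z[y == c]
                mc = Zc.mean(axis=0)
                Sw += (Zc - mc).T @ (Zc - mc)
                Sb += len(Zc) * np.outer(mc - mu, mc - mu)
            evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(k), Sb))
            order = np.argsort(-evals.real)[:k - 1]        # top k-1 discriminant directions
            return Q @ evecs[:, order].real                # d x (k-1) projection matrix

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(m, 1.0, (50, 100)) for m in (0.0, 1.0, 2.0)])
        y = np.repeat([0, 1, 2], 50)
        print(lda_qr(X, y).shape)                          # (100, 2)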

  5. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  6. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.

  7. Comments on the "Meshless Helmholtz-Hodge decomposition".

    PubMed

    Bhatia, Harsh; Norgard, Gregory; Pascucci, Valerio; Bremer, Peer-Timo

    2013-03-01

    The Helmholtz-Hodge decomposition (HHD) is one of the fundamental theorems of fluids describing the decomposition of a flow field into its divergence-free, curl-free, and harmonic components. Solving for the HHD is intimately connected to the choice of boundary conditions which determine the uniqueness and orthogonality of the decomposition. This article points out that one of the boundary conditions used in a recent paper "Meshless Helmholtz-Hodge Decomposition" is, in general, invalid and provides an analytical example demonstrating the problem. We hope that this clarification on the theory will foster further research in this area and prevent undue problems in applying and extending the original approach.

  8. Domain decomposition for implicit solvation models.

    PubMed

    Cancès, Eric; Maday, Yvon; Stamm, Benjamin

    2013-08-07

    This article is the first of a series of papers dealing with domain decomposition algorithms for implicit solvent models. We show that, in the framework of the COSMO model, with van der Waals molecular cavities and classical charge distributions, the electrostatic energy contribution to the solvation energy, usually computed by solving an integral equation on the whole surface of the molecular cavity, can be computed more efficiently by using an integral equation formulation of Schwarz's domain decomposition method for boundary value problems. In addition, the so-obtained potential energy surface is smooth, which is a critical property to perform geometry optimization and molecular dynamics simulations. The purpose of this first article is to detail the methodology, set up the theoretical foundations of the approach, and study the accuracies and convergence rates of the resulting algorithms. The full efficiency of the method and its applicability to large molecular systems of biological interest is demonstrated elsewhere.

  9. Point set registration: coherent point drift.

    PubMed

    Myronenko, Andriy; Song, Xubo

    2010-12-01

    Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
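
    A compact NumPy sketch of the rigid-case EM updates summarized above (GMM centroids fit by maximizing the likelihood, with the rotation recovered in closed form via an SVD). The fast linear-time variant and careful handling of the outlier weight `w` are omitted; parameter choices are illustrative.

```python
# Minimal rigid Coherent Point Drift sketch (EM with a GMM), following the
# update equations summarized in the abstract.  The fast (linear-time) variant
# and robust parameter choices are omitted; w is the outlier weight.
import numpy as np

def cpd_rigid(X, Y, w=0.1, iters=50):
    N, D = X.shape
    M, _ = Y.shape
    R, t, s = np.eye(D), np.zeros(D), 1.0
    sigma2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum() / (D * M * N)
    for _ in range(iters):
        # E-step: posterior probabilities P[m, n]
        T = s * Y @ R.T + t
        d2 = ((X[None, :, :] - T[:, None, :]) ** 2).sum(axis=2)   # (M, N)
        num = np.exp(-d2 / (2 * sigma2))
        c = (2 * np.pi * sigma2) ** (D / 2) * w / (1 - w) * M / N
        P = num / (num.sum(axis=0, keepdims=True) + c)
        # M-step: closed-form rigid update via SVD
        Np = P.sum()
        mu_x = X.T @ P.sum(axis=0) / Np
        mu_y = Y.T @ P.sum(axis=1) / Np
        Xh, Yh = X - mu_x, Y - mu_y
        A = Xh.T @ P.T @ Yh
        U, S, Vt = np.linalg.svd(A)
        C = np.eye(D); C[-1, -1] = np.linalg.det(U @ Vt)
        R = U @ C @ Vt
        s = np.trace(np.diag(S) @ C) / np.trace(Yh.T @ np.diag(P.sum(axis=1)) @ Yh)
        t = mu_x - s * R @ mu_y
        sigma2 = max((np.trace(Xh.T @ np.diag(P.sum(axis=0)) @ Xh)
                      - s * np.trace(np.diag(S) @ C)) / (Np * D), 1e-10)
    return R, t, s
```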

  10. Application of modified Martinez-Silva algorithm in determination of net cover

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Łukasz; Grobelna, Iwona

    2016-12-01

    In this article we present modifications of the Martinez-Silva algorithm, which allows for the determination of the place invariants (p-invariants) of a Petri net. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for the separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and are focused on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.
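
    For context, a textbook Farkas-style computation of non-negative p-invariants from the incidence matrix, i.e. the baseline procedure that the Martinez-Silva modifications described above improve on; the proposed cyclic reduction and cover checking are not shown, and the small net is an illustrative example.

```python
# Textbook Farkas / Martinez-Silva style computation of the non-negative place
# invariants (p-invariants) of a Petri net from its incidence matrix C
# (places x transitions).  This is the baseline procedure the article
# modifies; the proposed cyclic reduction / cover checking is not shown.
import numpy as np

def p_invariants(C):
    C = np.asarray(C, dtype=int)
    n_places, n_transitions = C.shape
    # working matrix D = [C | I]: the left part is annihilated column by
    # column, the surviving right parts are the invariants.
    D = np.hstack([C, np.eye(n_places, dtype=int)])
    for t in range(n_transitions):
        keep = [row for row in D if row[t] == 0]
        pos = [row for row in D if row[t] > 0]
        neg = [row for row in D if row[t] < 0]
        for rp in pos:
            for rn in neg:
                new = (-rn[t]) * rp + rp[t] * rn        # cancels column t
                if np.any(new):
                    new //= np.gcd.reduce(np.abs(new[new != 0]))
                    keep.append(new)
        D = (np.array(keep, dtype=int)
             if keep else np.zeros((0, n_transitions + n_places), dtype=int))
    return D[:, n_transitions:]      # each row y satisfies y @ C = 0, y >= 0

# two-place cycle: t0 moves a token p0 -> p1, t1 moves it back
C = np.array([[-1,  1],
              [ 1, -1]])
print(p_invariants(C))               # expect the invariant [1, 1]
```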

  11. Splitting algorithms for the wavelet transform of first-degree splines on nonuniform grids

    NASA Astrophysics Data System (ADS)

    Shumilov, B. M.

    2016-07-01

    For splines of first degree with nonuniform knots, a new type of wavelet with a biased support is proposed. Using splitting with respect to the even and odd knots, a new wavelet decomposition algorithm is proposed in the form of the solution of a tridiagonal system of linear algebraic equations for the wavelet coefficients. The application of the proposed implicit scheme to the point prediction of time series is investigated for the first time. Results of numerical experiments on the prediction accuracy and the compression of spline wavelet decompositions are presented.

  12. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  13. Decomposition of small-footprint full waveform LiDAR data based on generalized Gaussian model and grouping LM optimization

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Zhou, Weiwei; Zhang, Liang; Wang, Suyuan

    2017-04-01

    Full-waveform airborne Light Detection And Ranging (LiDAR) data contains abundant information which may overcome some deficiencies of the discrete LiDAR point cloud data provided by conventional LiDAR systems. Processing full-waveform data to extract more information than coordinate values alone is of great significance for potential applications. The Levenberg–Marquardt (LM) algorithm is a traditional method used to estimate the parameters of a Gaussian model when Gaussian decomposition of full-waveform LiDAR data is performed. This paper employs the generalized Gaussian mixture function to fit a waveform, and proposes using the grouping LM algorithm to optimize the parameters of the function. It is shown that the grouping LM algorithm overcomes the common drawbacks of the conventional LM for parameter optimization, such as the final results being influenced by the initial parameters and possible algorithm interruption caused by non-numerical elements occurring in the Jacobian matrix. The precision of the point cloud generated by the grouping LM is evaluated by comparing it with those provided by the LiDAR system and those generated by the conventional LM. Results from both simulation and real data show that the proposed algorithm can generate a higher-quality point cloud, in terms of point density and precision, and can extract other information, such as echo location and pulse width, more precisely as well.
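
    As a point of reference, a sketch of the conventional baseline the paper improves upon: fitting a sum of plain Gaussians to a waveform with SciPy's Levenberg-Marquardt solver. The generalized-Gaussian shape parameter and the grouping strategy are not reproduced, and the synthetic two-return waveform is illustrative.

```python
# Baseline full-waveform decomposition: fit a sum of Gaussians to a recorded
# echo with the standard Levenberg-Marquardt solver.  This is the conventional
# approach the paper improves upon; the generalized-Gaussian shape parameter
# and the grouping strategy are not reproduced here.
import numpy as np
from scipy.optimize import least_squares

def gaussian_sum(t, params):
    # params = [A1, mu1, sigma1, A2, mu2, sigma2, ...]
    y = np.zeros_like(t, dtype=float)
    for A, mu, sigma in np.reshape(params, (-1, 3)):
        y += A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

def fit_waveform(t, waveform, init_params):
    residual = lambda p: gaussian_sum(t, p) - waveform
    result = least_squares(residual, init_params, method="lm")
    return np.reshape(result.x, (-1, 3))   # one (A, mu, sigma) row per echo

# usage with a synthetic two-return waveform
t = np.linspace(0, 100, 500)
truth = gaussian_sum(t, [1.0, 30.0, 3.0, 0.6, 55.0, 5.0])
noisy = truth + 0.02 * np.random.default_rng(1).standard_normal(t.size)
print(fit_waveform(t, noisy, init_params=[0.8, 28.0, 4.0, 0.5, 58.0, 4.0]))
```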

  14. Analyzing algorithms for nonlinear and spatially nonuniform phase shifts in the liquid crystal point diffraction interferometer. 1998 summer research program for high school juniors at the University of Rochester`s Laboratory for Laser Energetics: Student research reports

    SciTech Connect

    Jain, N.

    1999-03-01

    Phase-shifting interferometry has many advantages, and the phase-shifting nature of the Liquid Crystal Point Diffraction Interferometer (LCPDI) promises to provide significant improvement over other current OMEGA wavefront sensors. However, while phase-shifting capabilities improve its accuracy as an interferometer, phase shifting itself introduces errors. Phase-shifting algorithms are designed to eliminate certain types of phase-shift errors, and it is important to choose an algorithm that is best suited for use with the LCPDI. Using polarization microscopy, the authors have observed a correlation between LC alignment around the microsphere and fringe behavior. After designing a procedure to compare phase-shifting algorithms, they were able to predict the accuracy of two particular algorithms through computer modeling of device-specific phase-shift errors.

  15. Feature based volume decomposition for automatic hexahedral mesh generation

    SciTech Connect

    LU,YONG; GADH,RAJIT; TAUTGES,TIMOTHY J.

    2000-02-21

    Much progress has been made in recent years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry are not available yet, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped with appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on several complicated manufactured parts.

  16. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. With feature sizes continuing to shrink, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, the algorithm1 is based on simplified assumptions about the optical simulation model and therefore its usage on real layouts is limited. Recently AMSL2 also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. The approach2 also potentially generates too many stitches. In this…
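
    A small sketch of the rule-based graph-coloring formulation described above (the formulation the paper argues is insufficient), with polygon centroids standing in for layout features and a simple backtracking k-coloring; the distance threshold and coordinates are illustrative.

```python
# Rule-based layout decomposition as graph k-coloring: features closer than
# d_min conflict and must go on different masks.  This sketches the classical
# formulation described in the abstract, not the model-based refinement.
from itertools import combinations
import math

def build_conflict_graph(centers, d_min):
    """centers: list of (x, y) feature centroids (a crude stand-in for polygons)."""
    n = len(centers)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if math.dist(centers[i], centers[j]) < d_min:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def k_color(adj, k):
    """Backtracking k-coloring; returns {vertex: mask} or None if infeasible."""
    colors = {}
    order = sorted(adj, key=lambda v: -len(adj[v]))   # hardest vertices first
    def assign(idx):
        if idx == len(order):
            return True
        v = order[idx]
        for c in range(k):
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if assign(idx + 1):
                    return True
                del colors[v]
        return False
    return colors if assign(0) else None

features = [(0, 0), (1, 0), (2, 0), (0.5, 0.9)]
graph = build_conflict_graph(features, d_min=1.2)
# the conflict triangle 0-1-3 makes DPL (k=2) infeasible here; k=3 succeeds
print(k_color(graph, k=2))
print(k_color(graph, k=3))
```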

  17. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
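
    A matrix-level sketch of the randomized interpolative decomposition idea: sketch the matrix with a Gaussian projection, choose skeleton columns by pivoted QR on the sketch, and express the remaining columns through the skeleton. The tensor-specific machinery (CTD terms, interaction with ALS) is not shown; function and variable names are illustrative.

```python
# Randomized interpolative decomposition (ID) of a matrix: sketch the matrix
# with a Gaussian random projection, pick skeleton columns by pivoted QR on
# the sketch, and express the remaining columns in terms of the skeleton.
import numpy as np
from scipy.linalg import qr, solve_triangular

def randomized_id(A, rank, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    sketch = rng.standard_normal((rank + oversample, A.shape[0])) @ A
    _, R, piv = qr(sketch, mode="economic", pivoting=True)
    skel = piv[:rank]                                   # skeleton column indices
    # coefficients expressing the remaining columns via the skeleton columns
    T = solve_triangular(R[:rank, :rank], R[:rank, rank:])
    P = np.zeros((rank, A.shape[1]))
    P[:, skel] = np.eye(rank)
    P[:, piv[rank:]] = T
    return skel, P                                      # A ~= A[:, skel] @ P

# quick check on a low-rank matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 120))
skel, P = randomized_id(A, rank=30)
print(np.linalg.norm(A - A[:, skel] @ P) / np.linalg.norm(A))
```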

  18. Hydrazine decomposition and other reactions

    NASA Technical Reports Server (NTRS)

    Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)

    1978-01-01

    This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.

  19. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    SciTech Connect

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces matched with appropriate meshing algorithms, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is part of the feature-based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination to extract decomposition features, (2) Cutting Surfaces Generation to form the "tailored" cutting surfaces, (3) Body Decomposition to get the imprinted volumes; and (4) Meshing Algorithm Assignment to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended to be more general. Results are demonstrated on several parts with complicated topology and geometry.

  20. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  1. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  2. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
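
    A toy illustration of the Schwarz iteration structure on a 1D Poisson problem, using two overlapping subdomains with Dirichlet interface data; the paper's method is a non-overlapping variant with Robin interface conditions and a mixed finite-element discretization, so this sketch only conveys the alternating-solve loop.

```python
# Toy illustration of the Schwarz iteration structure on -u'' = f, u(0)=u(1)=0,
# using two *overlapping* subdomains with Dirichlet interface data.  The paper
# uses a non-overlapping variant with Robin interface conditions and a mixed
# finite-element discretization; this sketch only shows the alternating loop.
import numpy as np

n = 101                       # global grid points on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)
u = np.zeros(n)               # global iterate (zero boundary values)

def solve_dirichlet(i0, i1, left, right):
    """Solve -u'' = f on grid points i0..i1 with Dirichlet data left/right."""
    m = i1 - i0 - 1                       # interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[i0 + 1:i1].copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

i_left, i_right = 60, 40                  # subdomain ends; overlap covers 40..60
for _ in range(30):                       # alternating Schwarz sweeps
    u[1:i_left] = solve_dirichlet(0, i_left, u[0], u[i_left])
    u[i_right + 1:n - 1] = solve_dirichlet(i_right, n - 1, u[i_right], u[n - 1])

exact = 0.5 * x * (1.0 - x)               # exact solution for f = 1
print("max error:", np.abs(u - exact).max())
```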

  3. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. This is the first time Schur decomposition has been used to embed a color image watermark in a color host image, as opposed to using a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find that there is a strong correlation between the second-row, first-column element and the third-row, first-column element. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step in SVD decomposition, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.

  4. Yield-aware decomposition for LELE double patterning

    NASA Astrophysics Data System (ADS)

    Kohira, Yukihide; Yokoyama, Yoko; Kodama, Chikaaki; Takahashi, Atsushi; Nojima, Shigeki; Tanaka, Satoshi

    2014-03-01

    In this paper, we propose a fast layout decomposition algorithm for litho-etch-litho-etch (LELE) type double patterning that takes yield into account. Our proposed algorithm extracts stitch candidates properly from complex layouts including various patterns, line widths and pitches. The planarity of the conflict graph and the independence of stitch candidates are utilized to obtain a minimum-cost layout decomposition efficiently for higher yield. The validity of our proposed algorithm is confirmed using benchmark layout patterns from the literature as well as layout patterns generated to fit the target manufacturing technologies as closely as possible. In our experiments, our proposed algorithm is 7.7 times faster than an existing method on average.

  5. Minimax eigenvector decomposition for data hiding

    NASA Astrophysics Data System (ADS)

    Davidson, Jennifer

    2005-09-01

    Steganography is the study of hiding information within a covert channel in order to transmit a secret message. Any public media, such as image data, audio data, or even file packets, can be used as a covert channel. This paper presents an embedding algorithm that hides a message in an image using a technique based on a nonlinear matrix transform called the minimax eigenvector decomposition (MED). The MED is a minimax-algebra version of the well-known singular value decomposition (SVD). Minimax algebra is a matrix algebra based on the algebraic operations of maximum and addition, developed initially for use in operations research and extended later to represent a class of nonlinear image processing operations. The discrete mathematical morphology operations of dilation and erosion, for example, are contained within minimax algebra. The MED is much quicker to compute than the SVD and avoids the numerical issues of the SVD because the operations involved are only integer addition, subtraction, and comparison. We present the algorithm to embed data using the MED, show examples applied to image data, and discuss limitations and advantages as compared with another similar algorithm.

  6. Composite structured mesh generation with automatic domain decomposition in complex geometries

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents a novel automatic domain decomposition method to generate quality composite structured meshes in complex domains with arbitrary shapes, in which quality structured mesh generation still remains a challenge. The proposed decomposition algorithm is based on the analysis of an initi...

  7. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  8. Problem decomposition and domain-based parallelism via group theoretic principles

    SciTech Connect

    Makai, M.; Orechwa, Y.

    1997-10-01

    A systematic approach, based on group theoretic principles, is presented for the decomposition of the solution algorithm of boundary value problems specified over symmetric domains, which is amenable to implementation for parallel computation. The principles are applied to the linear transport equation in general, and the decomposition is demonstrated for a square node in particular.

  9. Detailed Chemical Kinetic Modeling of Hydrazine Decomposition

    NASA Technical Reports Server (NTRS)

    Meagher, Nancy E.; Bates, Kami R.

    2000-01-01

    The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996; however, many of the elementary steps included had outdated rate expressions, and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.

  10. Fast algorithm of byte-to-byte wavelet transform for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.

    2002-11-01

    A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the image transformation with the second-order Cohen-Daubechies-Feauveau wavelet. It uses the lifting scheme for the calculations. The proposed algorithm is based on the "checkerboard" computation scheme for a non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, only one detail image is produced at each level of decomposition, which simplifies further analysis for data compression. The calculations are simple and involve no floating-point operations, allowing the implementation of the designed algorithm on fixed-point DSP processors for fast, near real-time processing. The proposed algorithm does not achieve perfect restoration of the processed data because of the rounding introduced at each level of decomposition/restoration to perform operations on byte-represented data. The designed algorithm was tested on different images. The criterion used to estimate the quality of the restored images quantitatively was the well-known PSNR. For visual quality estimation, error maps between the original and restored images were calculated. The simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases but remains sufficiently high even after 6 levels. The introduced distortions are concentrated in the vicinity of high-spatial-activity details and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
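
    A 1D sketch of the kind of integer lifting step this algorithm builds on, written for the CDF 5/3 (second-order Cohen-Daubechies-Feauveau) wavelet with symmetric border extension. This plain integer version is exactly invertible; the paper's 2D checkerboard scheme, Haar border handling, and byte-range rounding at each level (the source of its imperfect restoration) are not reproduced.

```python
# One level of the integer CDF 5/3 lifting transform in 1D (even-length input).
# Predict step produces detail coefficients d, update step produces
# approximation coefficients s.  All operations are integer-only, so the
# round trip is exact; constraining results to 8 bits per level (as in the
# paper's byte-oriented 2D scheme) is what would break perfect reconstruction.
import numpy as np

def cdf53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    even_next = np.append(even[1:], even[-1])          # symmetric extension
    d = odd - ((even + even_next) >> 1)                # predict step
    d_prev = np.insert(d[:-1], 0, d[0])                # symmetric extension
    s = even + ((d_prev + d + 2) >> 2)                 # update step
    return s, d

def cdf53_inverse(s, d):
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - ((d_prev + d + 2) >> 2)                 # undo update
    even_next = np.append(even[1:], even[-1])
    odd = d + ((even + even_next) >> 1)                # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.random.default_rng(2).integers(0, 256, size=64)
s, d = cdf53_forward(sig)
assert np.array_equal(cdf53_inverse(s, d), sig)        # lossless round trip
```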

  11. Full-waveform LiDAR echo decomposition based on wavelet decomposition and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Duan; Xu, Lijun; Li, Xiaolu

    2017-04-01

    To measure the distances and properties of the objects within a laser footprint, a decomposition method for full-waveform light detection and ranging (LiDAR) echoes is proposed. In this method, wavelet decomposition is first used to filter the noise and estimate the noise level in a full-waveform echo. Second, peak and inflection points of the filtered full-waveform echo are used to detect the echo components. Lastly, particle swarm optimization (PSO) is used to remove the noise-caused echo components and optimize the parameters of the most probable echo components. Simulation results show that the wavelet-decomposition-based filter provides better SNR improvement and higher decomposition success rates than the Wiener and Gaussian smoothing filters. In addition, the noise level estimated using the wavelet-decomposition-based filter is more accurate than those estimated using the other two commonly used methods. Experiments were carried out to evaluate the proposed method, which was compared with our previous method (called GS-LM for short). In the experiments, a lab-built full-waveform LiDAR system was utilized to provide eight types of full-waveform echoes scattered from three objects at different distances. Experimental results show that the proposed method has higher success rates for the decomposition of full-waveform echoes and more accurate parameter estimation for echo components than GS-LM. The proposed method based on wavelet decomposition and PSO is effective for decomposing complicated full-waveform echoes, estimating the multi-level distances of the objects, and measuring the properties of the objects within a laser footprint.

  12. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the…

  13. Single-channel and multi-channel orthogonal matching pursuit for seismic trace decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Xuan; Zhang, Xuebing; Liu, Cai; Lu, Qi

    2017-02-01

    The conventional matching pursuit (MP) algorithm can decompose a 1D signal into a set of wavelet atoms adaptively. For reflection seismic data, several applicable algorithms based on MP decomposition have been developed, such as single-channel matching pursuit (SCMP) and multi-channel matching pursuit (MCMP). However, these algorithms cannot always select the optimal atoms, which results in less meaningful decompositions. To overcome this limitation, we introduce the idea of orthogonal matching pursuit into a multi-channel decomposition scheme, which we refer to as multi-channel orthogonal matching pursuit (MCOMP). Each iteration of the proposed MCOMP might extract a more reasonable atom from a redundant Morlet wavelet dictionary, as the MCMP decomposition does, and estimate the corresponding amplitude more accurately by solving a least-squares problem. To correspond to SCMP, we also simplified the MCOMP decomposition to single-channel orthogonal matching pursuit (SCOMP) for decompositions of an individual seismic trace. We tested the proposed SCOMP algorithm on a synthetic signal and a field seismic trace. A field marine dataset example then showed the relatively high resolution of the proposed MCOMP method with applications to the detection of low-frequency anomalies. These application examples all demonstrate more meaningful decomposition results and the relatively high convergence speed of the proposed algorithms.
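
    A generic single-trace orthogonal matching pursuit sketch, showing the step that distinguishes OMP from plain MP: after each atom is selected, the amplitudes of all selected atoms are re-fit by least squares. The Morlet dictionary construction and the multi-channel coupling of MCOMP are not included; the random dictionary is illustrative.

```python
# Generic single-channel orthogonal matching pursuit: at each iteration pick
# the atom most correlated with the residual, then re-fit ALL selected atoms
# by least squares (the step that distinguishes OMP from plain MP).
import numpy as np

def omp(signal, dictionary, n_atoms):
    """dictionary: (n_samples, n_atoms_total) with unit-norm columns."""
    residual = signal.astype(float).copy()
    selected, coeffs = [], None
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        if k not in selected:
            selected.append(k)
        sub = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ coeffs
    return selected, coeffs, residual

# usage: build a random unit-norm dictionary and recover a 3-atom mixture
rng = np.random.default_rng(3)
D = rng.standard_normal((256, 80))
D /= np.linalg.norm(D, axis=0)
true_idx = [5, 17, 60]
trace = D[:, true_idx] @ np.array([1.0, -0.7, 0.4])
idx, amps, res = omp(trace, D, n_atoms=3)
print(sorted(idx), np.linalg.norm(res))
```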

  14. A high resolution spectrum reconstruction algorithm using compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing

    2015-07-01

    This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing in a composite structure. The strain field of a corrugated structure is simulated by finite element analysis. Then the reflection spectrum is calculated using an improved transfer matrix algorithm. A K-SVD (K-means singular value decomposition) sparse dictionary is trained. In the test, a spectrum with a limited number of sample points is obtained, and the high-resolution spectrum is reconstructed by solving the sparse representation equation. Compared with other conventional bases, this method performs better. The match rate of the recovered spectrum and the original spectrum is over 95%.

  15. MAUD (Multiattribute Utility Decomposition): An Interactive Computer Program for the Structuring, Decomposition, and Recomposition of Preferences between Multiattributed Alternatives

    DTIC Science & Technology

    1981-08-01

    Multiattribute Utility Decomposition (MAUD) is presented within the context of Multiattribute Utility Theory (MAUT). The report provides a decision-theoretic rationale for the MAUD algorithms, with special reference to multiattribute utility theory, together with the programming logic, an investigation of preference structure, and notes on MAUD operation.

  16. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  17. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Partial differential equation (PDE)-based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE-based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE-based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis have been limited to PDE-based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE-based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE-based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE-based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE-based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be…

  18. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box, and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: the first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations; the second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  19. 3D building reconstruction from ALS data using unambiguous decomposition into elementary structures

    NASA Astrophysics Data System (ADS)

    Jarząbek-Rychard, M.; Borkowski, A.

    2016-08-01

    The objective of the paper is to develop an automated method that enables the recognition and semantic interpretation of topological building structures. The novelty of the proposed modeling approach is an unambiguous decomposition of complex objects into predefined simple parametric structures, resulting in the reconstruction of one topological unit without independent overlapping elements. The aim of the data processing chain is to generate complete polyhedral models at LOD2 with an explicit topological structure and semantic information. The algorithms are performed on 3D point clouds acquired by airborne laser scanning. The presented methodology combines data-based information reflected in an attributed roof topology graph with common knowledge about buildings stored in a library of elementary structures. In order to achieve an appropriate balance between reconstruction precision and visualization aspects, the implemented library contains a set of structure-dependent soft modeling rules instead of strictly defined geometric primitives. The proposed modeling algorithm starts with roof plane extraction performed by the segmentation of building point clouds, followed by topology identification and recognition of predefined structures. We evaluate the performance of the novel procedure by analyzing the modeling accuracy and the degree of modeling detail. The assessment according to the validation methods standardized by the International Society for Photogrammetry and Remote Sensing shows that the completeness of the algorithm is above 80%, whereas the correctness exceeds 98%.

  20. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H2, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I2. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I2 phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I2 phase is finally distilled to separate the HI and I2. The HI is recycled to the reactor; the I2 is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  1. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV-based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations.
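
    The standard two-arm sample-size formula behind this kind of enrichment modeling: the required patients per arm scale with the outcome variance and inversely with the square of the detectable treatment effect, so enriching the cohort (larger expected 2-year decline) shrinks the trial. The effect sizes below are illustrative, not the paper's values.

```python
# Standard two-arm sample-size formula of the kind used when modeling how an
# enrichment biomarker changes trial size: n per arm grows with the outcome
# variance and shrinks with the treatment effect detectable in the enriched
# population.  The numbers below are illustrative, not the paper's values.
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

# enriching with a hippocampal-volume cut point raises the expected 2-year
# decline (delta) in the enrolled population, cutting the required sample size
print(round(n_per_arm(delta=1.0, sd=4.0)))   # unselected (illustrative)
print(round(n_per_arm(delta=1.6, sd=4.0)))   # HCV-enriched (illustrative)
```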

  2. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  3. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information that reflects the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from mode mixing, low decomposition accuracy, and other problems. To address these problems, the extreme average envelope decomposition (EAED) method is presented based on EMD. EAED has three advantages. Firstly, it is completed through a midpoint envelope method rather than using the maximum and minimum envelopes separately, as in EMD; therefore, the average variability of the signal can be described accurately. Secondly, in order to reduce the envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Thirdly, the similar-triangle principle is utilized to calculate the time of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single-frequency components from a complex signal. EAED not only isolates three typical bearing-fault vibration frequency components but also requires fewer decomposition layers. By replacing the two envelopes with a single envelope, EAED isolates the fault characteristic frequency with fewer decomposition layers, thereby improving the precision of the signal decomposition.

  4. Process characteristics and layout decomposition of self-aligned sextuple patterning

    NASA Astrophysics Data System (ADS)

    Kang, Weiling; Chen, Yijian

    2013-03-01

    Self-aligned sextuple patterning (SASP) is a promising technique to scale the half pitch of IC features down to the sub-10 nm region. In this paper, the process characteristics and decomposition methods of both positive-tone (pSASP) and negative-tone SASP (nSASP) techniques are discussed, and a variety of decomposition rules are studied. By using a node-grouping method, the nSASP layout conflict graph can be significantly simplified. A graph searching and coloring algorithm is developed for feature/color assignment. We demonstrate that, by generating assisting mandrels, nSASP layout decomposition can be reduced to an nSADP decomposition problem. The proposed decomposition algorithm is successfully verified with several commonly used 2-D layout examples.

  5. [The algorithm for the determination of the sufficient number of dynamic electroneurostimulation procedures based on the magnitude of individual testing voltage at the reference point].

    PubMed

    Chernysh, I M; Zilov, V G; Vasilenko, A M; Frolkov, V K

    2016-01-01

    This article presents evidence of the advantages of a personalized approach to the treatment of patients presenting with arterial hypertension (AH), lumbar spinal dorsopathy (LSD), chronic obstructive pulmonary disease (COPD), and duodenal ulcer (DU) at the stage of exacerbation, based on measurements of the testing voltage at the reference point (Utest).

  6. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data-intensive, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of the data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate works around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  7. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  8. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    SciTech Connect

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
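
    For intuition, the treewidth-1 special case of the bag-based dynamic programming that such software runs over a tree decomposition: maximum weighted independent set on a plain tree, tracking for each vertex the best value with the vertex included versus excluded. The bookkeeping over bags of a general tree decomposition is omitted; names and the example graph are illustrative.

```python
# Dynamic programming for maximum weighted independent set on a tree: the
# treewidth-1 special case of the bag-based DP run over a tree decomposition.
# For each vertex we track the best weight of its subtree with the vertex
# included vs. excluded.
from collections import defaultdict

def mwis_tree(n, edges, weight, root=0):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    include = [0.0] * n      # best value of subtree with the vertex in the set
    exclude = [0.0] * n      # best value with the vertex out of the set
    visited = [False] * n
    order, stack = [], [root]            # iterative DFS to get a processing order
    while stack:
        v = stack.pop()
        if visited[v]:
            continue
        visited[v] = True
        order.append(v)
        stack.extend(u for u in adj[v] if not visited[u])
    parent = {root: None}
    for v in order:                      # preorder: parents appear before children
        for u in adj[v]:
            if u not in parent:
                parent[u] = v
    for v in reversed(order):            # children before parents
        include[v] = weight[v]
        for u in adj[v]:
            if parent.get(u) == v:
                include[v] += exclude[u]
                exclude[v] += max(include[u], exclude[u])
    return max(include[root], exclude[root])

# path graph 0-1-2-3 with weights: optimum picks vertices 1 and 3 (total 9)
print(mwis_tree(4, [(0, 1), (1, 2), (2, 3)], weight=[1, 5, 2, 4]))
```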

  9. Art of spin decomposition

    SciTech Connect

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-04-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  10. The Vector Decomposition Problem

    NASA Astrophysics Data System (ADS)

    Yoshida, Maki; Mitsunari, Shigeo; Fujiwara, Toru

    This paper introduces a new computational problem on a two-dimensional vector space, called the vector decomposition problem (VDP), which is mainly defined for designing cryptosystems using pairings on elliptic curves. We first show a relation between the VDP and the computational Diffie-Hellman problem (CDH). Specifically, we present a sufficient condition for the VDP on a two-dimensional vector space to be at least as hard as the CDH on a one-dimensional subspace. We also present a sufficient condition for the VDP with a fixed basis to have a trapdoor. We then give an example of vector spaces which satisfy both sufficient conditions and on which the CDH is assumed to be hard in previous work. In this sense, the intractability of the VDP is a reasonable assumption as that of the CDH.

  11. Arrhythmia ECG Noise Reduction by Ensemble Empirical Mode Decomposition

    PubMed Central

    Chang, Kang-Ming

    2010-01-01

    A novel noise filtering algorithm based on ensemble empirical mode decomposition (EEMD) is proposed to remove artifacts in electrocardiogram (ECG) traces. Three noise patterns with different power (50 Hz, EMG, and baseline wander) were embedded into simulated and real ECG signals. A traditional IIR filter, a Wiener filter, empirical mode decomposition (EMD), and EEMD were used to compare filtering performance. The mean square error between clean and filtered ECGs was used as the filtering performance index. Results showed that high noise reduction is the major advantage of the EEMD-based filter, especially on arrhythmia ECGs. PMID:22219702
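    The core EEMD step is to decompose many noise-perturbed copies of the signal and average the resulting intrinsic mode functions (IMFs) so that the added white noise cancels. A minimal sketch follows; `emd` is a placeholder for any EMD routine returning a list of IMFs (assumed, not shown), and the parameters are illustrative, not the authors'.

    import numpy as np

    def eemd(signal, emd, n_ensembles=100, noise_std=0.2, seed=0):
        """Ensemble-average the IMFs of noise-perturbed copies of `signal`."""
        rng = np.random.default_rng(seed)
        scale = noise_std * np.std(signal)
        all_imfs = []
        for _ in range(n_ensembles):
            perturbed = signal + rng.normal(0.0, scale, size=signal.shape)
            all_imfs.append(emd(perturbed))           # list of 1-D arrays
        n_imfs = min(len(imfs) for imfs in all_imfs)  # align ensemble members
        stacked = np.array([imfs[:n_imfs] for imfs in all_imfs])
        return stacked.mean(axis=0)                   # ensemble-averaged IMFs

    # A denoised ECG could then be rebuilt from the averaged IMFs judged to be
    # signal-dominated, e.g. reconstructed = averaged_imfs[k:].sum(axis=0).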

  12. Resolving the sign ambiguity in the singular value decomposition.

    SciTech Connect

    Bro, Rasmus; Acar, Evrim; Kolda, Tamara Gibson

    2007-10-01

    Many modern data analysis methods involve computing a matrix singular value decomposition (SVD) or eigenvalue decomposition (EVD). Principal components analysis is the time-honored example, but more recent applications include latent semantic indexing, hypertext induced topic selection (HITS), clustering, classification, etc. Though the SVD and EVD are well-established and can be computed via state-of-the-art algorithms, it is not commonly mentioned that there is an intrinsic sign indeterminacy that can significantly impact the conclusions and interpretations drawn from their results. Here we provide a solution to the sign ambiguity problem and show how it leads to more sensible solutions.
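    The indeterminacy is that any singular pair (u_k, v_k) can be replaced by (-u_k, -v_k) without changing the reconstruction. The sketch below applies a simple deterministic convention (make the largest-magnitude entry of each left singular vector positive); this is not the data-driven rule of Bro, Acar and Kolda, but it shows that the two vectors of a pair must be flipped together.

    import numpy as np

    def svd_with_sign_convention(A):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        for k in range(len(s)):
            j = np.argmax(np.abs(U[:, k]))
            if U[j, k] < 0:
                U[:, k] *= -1.0
                Vt[k, :] *= -1.0   # flip the pair so U @ diag(s) @ Vt is unchanged
        return U, s, Vt

    A = np.random.default_rng(1).normal(size=(6, 4))
    U, s, Vt = svd_with_sign_convention(A)
    assert np.allclose(U @ np.diag(s) @ Vt, A)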

  13. Direct Sum Decomposition of Groups

    ERIC Educational Resources Information Center

    Thaheem, A. B.

    2005-01-01

    Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…

  14. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  15. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    PubMed

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.

  16. Adaptive wavelet transform algorithm for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo

    2003-11-01

    A new algorithm for a locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet coefficients in the case of the (N~,N) lifting. Hard switching between the (2,4) and (4,4) lifting filter outputs is performed according to an estimate of the local data activity. When the data activity is high, i.e., in the vicinity of edges, the (4,4) lifting is performed. Otherwise, in the plain areas, the (2,4) decomposition coefficients are calculated. The calculations are simple enough to permit implementation of the designed algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect restoration of the processed data and good energy compaction. The designed algorithm was tested on different images and can be used for lossless image/signal compression.
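    The prediction-stage switching can be sketched as below, using the standard 2-tap and 4-tap interpolating (Deslauriers-Dubuc) predictors. This is a hedged sketch only: the paper's update step, integer-to-integer rounding and exact activity measure are not reproduced, and the activity estimate here is an assumption.

    import numpy as np

    def adaptive_predict_details(x, threshold):
        """Detail (high-pass) coefficients from hard switching between predictors."""
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2], x[1::2]
        n = len(even)
        details = np.empty_like(odd)
        for i in range(len(odd)):
            i0, i1 = i, min(i + 1, n - 1)        # even neighbours of odd sample i
            activity = abs(even[i1] - even[i0])  # crude local activity estimate (assumed)
            if activity > threshold:             # edge region: 4-tap predictor
                im1, ip2 = max(i - 1, 0), min(i + 2, n - 1)
                pred = (-even[im1] + 9 * even[i0] + 9 * even[i1] - even[ip2]) / 16.0
            else:                                # smooth region: 2-tap predictor
                pred = (even[i0] + even[i1]) / 2.0
            details[i] = odd[i] - pred
        return details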

  17. Decomposition in northern Minnesota peatlands

    SciTech Connect

    Farrish, K.W.

    1985-01-01

    Decomposition in peatlands was investigated in northern Minnesota. Four sites, an ombrotrophic raised bog, an ombrotrophic perched bog and two groundwater minerotrophic fens, were studied. Decomposition rates of peat and paper were estimated using mass-loss techniques. Environmental and substrate factors that were most likely to be responsible for limiting decomposition were monitored. Laboratory incubation experiments complemented the field work. Mass-loss over one year in one of the bogs ranged from 11 percent in the upper 10 cm of hummocks to 1 percent at 60 to 100 cm depth in hollows. Regression analysis of the data for that bog predicted no mass-loss below 87 cm. Decomposition estimates on an area basis were 2720 and 6460 kg/ha yr for the two bogs; 17,000 and 5900 kg/ha yr for the two fens. Environmental factors found to limit decomposition in these peatlands were reducing/anaerobic conditions below the water table and cool peat temperatures. Substrate factors found to limit decomposition were low pH, high content of resistant organics such as lignin, and shortages of available N and K. Greater groundwater influence was found to favor decomposition through raising the pH and perhaps by introducing limited amounts of dissolved oxygen.

  18. Terrestrial laser scanning and a degenerated cylinder model to determine gross morphological change of cadavers under conditions of natural decomposition.

    PubMed

    Zhang, Xiao; Glennie, Craig L; Bucheli, Sibyl R; Lindgren, Natalie K; Lynne, Aaron M

    2014-08-01

    Decomposition can be a highly variable process with stages that are difficult to quantify. Using high-accuracy terrestrial laser scanning, repeated three-dimensional (3D) documentation of volumetric changes of a human body during early decomposition is recorded. To determine temporal volumetric variations as well as the 3D distribution of the changed locations in the body over time, this paper introduces the use of multiple degenerated cylinder models to provide a reasonable approximation of body parts against which 3D change can be measured and visualized. An iterative closest point algorithm is used for 3D registration, and a method for determining volumetric change is presented. Comparison of the laser scanning estimates of volumetric change shows good agreement with repeated in-situ measurements of abdomen and limb circumference that were taken diurnally. The 3D visualizations of volumetric changes demonstrate that bloat is a process with a beginning, middle, and end rather than a state of presence or absence. Additionally, the 3D visualizations show conclusively that cadaver bloat is not isolated to the abdominal cavity, but also occurs in the limbs. Detailed quantification of the bloat stage of decay has the potential to alter how the beginning and end of bloat are determined by researchers and can provide further insight into the effects of the ecosystem on decomposition.
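    The registration step relies on iterative closest point (ICP) alignment of repeated scans. A minimal rigid-body ICP loop is sketched below; it is not the authors' implementation, and convergence checks and outlier rejection are omitted.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(P, Q):
        """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # avoid reflections
            Vt[-1, :] *= -1.0
            R = Vt.T @ U.T
        return R, cQ - R @ cP

    def icp(source, target, n_iter=30):
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(n_iter):
            _, idx = tree.query(src)              # closest target point per source point
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
        return src                                # source scan aligned to the target scan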

  19. COMPUTER SIMULATIONS WITH EXPLICIT SOLVENT: Recent Progress in the Thermodynamic Decomposition of Free Energies and in Modeling Electrostatic Effects

    NASA Astrophysics Data System (ADS)

    Levy, Ronald M.; Gallicchio, Emilio

    1998-10-01

    This review focuses on recent progress in two areas in which computer simulations with explicit solvent are being applied: the thermodynamic decomposition of free energies, and modeling electrostatic effects. The computationally intensive nature of these simulations has been an obstacle to the systematic study of many problems in solvation thermodynamics, such as the decomposition of solvation and ligand binding free energies into component enthalpies and entropies. With the revolution in computer power continuing, these problems are ripe for study but require the judicious choice of algorithms and approximations. We provide a critical evaluation of several numerical approaches to the thermodynamic decomposition of free energies and summarize applications in the current literature. Progress in computer simulations with explicit solvent of charge perturbations in biomolecules was slow in the early 1990s because of the widespread use of truncated Coulomb potentials in these simulations, among other factors. Development of the sophisticated technology described in this review to handle the long-range electrostatic interactions has increased the predictive power of these simulations to the point where comparisons between explicit and continuum solvent models can reveal differences that have their true physical origin in the inherent molecularity of the surrounding medium.

  20. Neural network based decomposition in optimal structural synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Berke, L.

    1992-01-01

    The present paper describes potential applications of neural networks in the multilevel decomposition based optimal design of structural systems. The generic structural optimization problem of interest, if handled as a single problem, results in a large dimensionality problem. Decomposition strategies allow for this problem to be represented by a set of smaller, decoupled problems, for which solutions may either be obtained with greater ease or may be obtained in parallel. Neural network models derived through supervised training, are used in two distinct modes in this work. The first uses neural networks to make available efficient analysis models for use in repetitive function evaluations as required by the optimization algorithm. In the second mode, neural networks are used to represent the coupling that exists between the decomposed subproblems. The approach is illustrated by application to the multilevel decomposition-based synthesis of representative truss and frame structures.

  1. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
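    The same pipeline (triangulate irregular points, then contour by interpolating within triangles) can be sketched in a few lines of Python; here the triangulation is delegated to matplotlib's Delaunay routine rather than the FORTRAN program's own point-connection scheme, so this is illustrative only.

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.tri as mtri

    rng = np.random.default_rng(0)
    x, y = rng.uniform(-2, 2, 200), rng.uniform(-2, 2, 200)   # irregular sample points
    z = np.exp(-(x**2 + y**2))                                # values at those points

    triang = mtri.Triangulation(x, y)        # connect points into triangles
    plt.tricontour(triang, z, levels=10)     # piecewise-linear contour lines
    plt.savefig("contours.png")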

  2. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

  3. Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.

    2001-01-01

    An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low values of source frequencies, but at higher values of source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than that required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring (domain decomposition) formulation to achieve parallel computation, where different substructures are handled by different parallel processors.
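    The generation/assembly/solve split can be illustrated with SciPy's sparse machinery (not the paper's FORTRAN solver): element contributions are accumulated in COO (triplet) form, converted to CSR, and factored once so many right-hand sides can be solved cheaply. The tridiagonal test system below is an assumption for illustration.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    rows, cols, vals = [], [], []
    for i in range(n):                       # assemble a symmetric tridiagonal test system
        rows += [i]; cols += [i]; vals += [2.0]
        if i + 1 < n:
            rows += [i, i + 1]; cols += [i + 1, i]; vals += [-1.0, -1.0]
    A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

    lu = spla.splu(A.tocsc())                # numerical factorization (reordered internally)
    b = np.ones(n)
    x = lu.solve(b)                          # forward/backward solution phase
    print(np.linalg.norm(A @ x - b))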

  4. Termites promote resistance of decomposition to spatiotemporal variability in rainfall.

    PubMed

    Veldhuis, Michiel P; Laso, Francisco J; Olff, Han; Berg, Matty P

    2017-02-01

    The ecological impact of rapid environmental change will depend on the resistance of key ecosystem processes, which may be promoted by species that exert strong control over local environmental conditions. Recent theoretical work suggests that macrodetritivores increase the resistance of African savanna ecosystems to changing climatic conditions, but experimental evidence is lacking. We examined the effect of large fungus-growing termites and other non-fungus-growing macrodetritivores on decomposition rates empirically, under strong spatiotemporal variability in rainfall and temperature. Non-fungus-growing larger macrodetritivores (earthworms, woodlice, millipedes) promoted decomposition rates relative to microbes and small soil fauna (+34%), but both groups reduced their activities with decreasing rainfall. However, fungus-growing termites increased decomposition rates most strongly (+123%) under the most water-limited conditions, making overall decomposition rates largely independent of rainfall. We conclude that fungus-growing termites are of special importance in decoupling decomposition rates from spatiotemporal variability in rainfall due to the buffered environment they create within their extended phenotype (mounds), which allows decomposition to continue when abiotic conditions outside are less favorable. This points to a wider class of possibly important ecological processes in which soil-plant-animal interactions decouple ecosystem processes from large-scale climatic gradients. This may strongly alter predictions from current climate change models.

  5. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer, improved algorithm is suggested as well. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
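    A hedged sketch of the weighted least-squares attitude fit: build the weighted profile matrix from the vector pairs and take the closest orthogonal matrix via the polar decomposition (equivalently via the SVD with a determinant correction). This is an illustration of the general technique, not the paper's specific algorithm or its comparison cases.

    import numpy as np
    from scipy.linalg import polar

    def attitude_from_vectors(body_vecs, ref_vecs, weights):
        """Rotation mapping reference-frame unit vectors onto body-frame measurements."""
        B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
        U, _ = polar(B)                     # closest orthogonal matrix to B
        if np.linalg.det(U) < 0:            # guard against an improper (reflection) solution
            W, s, Vt = np.linalg.svd(B)
            U = W @ np.diag([1.0, 1.0, np.linalg.det(W @ Vt)]) @ Vt
        return U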

  6. Decomposition time effectiveness for various synthesis strategies dedicated to FPGA structures

    NASA Astrophysics Data System (ADS)

    Kubica, Marcin; Kania, Dariusz; Opara, Adam

    2016-12-01

    The main goal of the paper is to compare the analyzed synthesis methods, taking the time effectiveness of the decomposition process into account. The basic difference between the compared methods is the function representation. Two of the three analyzed synthesis algorithms (DekBDD and MultiDec) use a function description in the form of a BDD. In the Decomp algorithm, which was the basis for developing the DekBDD and MultiDec systems, the function is described in table form. The paper includes the results of experiments conducted for a set of benchmarks, which indicate a considerable advantage for decomposition algorithms in which the functions are represented as BDDs.

  7. Catalyst for sodium chlorate decomposition

    NASA Technical Reports Server (NTRS)

    Wydeven, T.

    1972-01-01

    Production of oxygen by rapid decomposition of cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as catalyst to accelerate reaction. Temperature conditions and chemical processes involved are described.

  8. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.

  9. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
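    The buffering idea can be sketched simply: each subdomain keeps, in addition to its own points, all neighbouring points within one kernel bandwidth of its boundary, so densities near subdomain edges need no inter-process communication. The sketch below uses regular 1-D time slices instead of the paper's adaptive octree, so it is a simplification under that assumption.

    import numpy as np

    def decompose_with_buffers(t, edges, bandwidth):
        """Split event times t into slices [edges[i], edges[i+1]) plus halo points."""
        pieces = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            own = (t >= lo) & (t < hi)
            halo = (t >= lo - bandwidth) & (t < hi + bandwidth)
            pieces.append({"own": np.where(own)[0], "with_buffer": np.where(halo)[0]})
        return pieces

    t = np.sort(np.random.default_rng(0).uniform(0, 100, 5000))
    parts = decompose_with_buffers(t, edges=np.linspace(0, 100, 9), bandwidth=2.5)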

  10. Hierarchical decomposition of metabolic networks using k-modules.

    PubMed

    Reimers, Arne C

    2015-12-01

    The optimal solutions obtained by flux balance analysis (FBA) are typically not unique. Flux modules have recently been shown to be a very useful tool to simplify and decompose the space of FBA-optimal solutions. Since yield-maximization is sometimes not the primary objective encountered in vivo, we are also interested in understanding the space of sub-optimal solutions. Unfortunately, the flux modules are too restrictive and not suited for this task. We present a generalization, called k-module, which compensates for the limited applicability of flux modules to the space of sub-optimal solutions. Intuitively, a k-module is a sub-network with low connectivity to the rest of the network. Recursive application of k-modules yields a hierarchical decomposition of the metabolic network, which is also known as branch decomposition in matroid theory. In particular, decompositions computed by existing methods, like the null-space-based approach introduced by Poolman et al. [(2007) J. Theor. Biol. 249:691-705], can be interpreted as branch decompositions. With k-modules we can now compare alternative decompositions of metabolic networks to the classical sub-systems of glycolysis, tricarboxylic acid (TCA) cycle, etc. They can be used to speed up algorithmic problems [theoretically shown for elementary flux modes (EFM) enumeration] and have the potential to present computational solutions in a more intuitive way independently from the classical sub-systems.

  11. Estimation of distribution algorithms with Kikuchi approximations.

    PubMed

    Santana, Roberto

    2005-01-01

    The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.

  12. MAMAP - a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: retrieval algorithm and first inversions for point source emission rates

    NASA Astrophysics Data System (ADS)

    Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Burrows, J. P.; Bovensmann, H.

    2011-04-01

    MAMAP is an airborne passive remote sensing instrument designed for measuring columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument consists of two optical grating spectrometers: one in the short wave infrared (SWIR) band at 1590-1690 nm to measure CO2 and CH4 absorptions, and another in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an airplane, MAMAP can effectively survey areas on regional to local scales with a ground pixel resolution of about 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP can be used to close the gap between satellite data exhibiting global coverage but with a rather coarse resolution on the one hand and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007 test flights were performed over two coal-fired power plants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr-1) and Schwarze Pumpe (11.9 Mt CO2 yr-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions as stated by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of delivering reliable estimates for strong point source emission rates, given appropriate flight patterns and detailed knowledge of wind conditions.
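    The "Gaussian integral" style estimate can be sketched as integrating the retrieved column enhancement along a flight transect crossing the plume and multiplying by the wind speed. The function names, units and inputs below are illustrative assumptions; the column retrieval itself and the optimal-estimation plume fit are not shown.

    import numpy as np

    def emission_rate(delta_column, track_position, wind_speed):
        """delta_column: column enhancement above background (kg m^-2) sampled along a
        transect roughly perpendicular to the wind; track_position: along-track
        coordinate (m); wind_speed: m s^-1. Returns an emission rate in kg s^-1."""
        cross_plume_integral = np.trapz(delta_column, track_position)   # kg m^-1
        return wind_speed * cross_plume_integral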

  13. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library for various computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  14. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
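    A hedged sketch of this fitting recipe: smooth the return, take inflection points to seed Gaussian centres and half-widths, then refine all parameters with a Levenberg-Marquardt least-squares fit. The ranking of components, the non-negative least-squares amplitude seeding, and iterative re-fitting are omitted, and the pairing of inflection points below is a simplifying assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.optimize import curve_fit

    def sum_of_gaussians(t, *params):            # params = (a1, m1, s1, a2, m2, s2, ...)
        y = np.zeros_like(t, dtype=float)
        for a, m, s in zip(params[0::3], params[1::3], params[2::3]):
            y += a * np.exp(-0.5 * ((t - m) / s) ** 2)
        return y

    def decompose_waveform(t, w, smooth_sigma=3.0):
        ws = gaussian_filter1d(w, smooth_sigma)            # smoothed copy of the waveform
        curv = np.gradient(np.gradient(ws, t), t)
        inflect = np.where(np.diff(np.sign(curv)) != 0)[0]  # curvature sign changes
        p0 = []
        for i0, i1 in zip(inflect[0::2], inflect[1::2]):    # pair up consecutive inflections
            m = 0.5 * (t[i0] + t[i1])                       # centre estimate
            s = max(0.5 * (t[i1] - t[i0]), 1e-3)            # half-width estimate
            a = np.interp(m, t, ws)                         # crude amplitude seed
            p0 += [a, m, s]
        popt, _ = curve_fit(sum_of_gaussians, t, w, p0=p0, maxfev=20000)  # LM refinement
        return popt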

  15. Refining signal decomposition for GRETINA detectors

    NASA Astrophysics Data System (ADS)

    Prasher, V. S.; Campbell, C. M.; Cromaz, M.; Crawford, H. L.; Wiens, A.; Lee, I. Y.; Macchiavelli, A. O.; Lister; Merchan, E.; Chowdhury, P.; Radford, D. C.

    2013-04-01

    The reconstruction of the original direction and energy of gamma rays through locating their interaction points in solid state detectors is a crucial evolving technology for nuclear physics, space science and homeland security. New arrays AGATA and GRETINA have been built for nuclear science based on highly segmented germanium crystals. The signal decomposition process fits the observed waveform from each crystal segment with a linear combination of pre-calculated basis signals. This process occurs on an event-by-event basis in real time to extract the position and energy of γ-ray interactions. The methodology for generating a basis of pulse shapes, varying according to the position of the charge generating interactions, is in place. Improvements in signal decomposition can be realized by better modeling the crystals. Specifically, a better understanding of the true impurity distributions, internal electric fields, and charge mobilities will lead to more reliable bases, more precise definition of the interaction points, and hence more reliable tracking. In this presentation we will cover the current state-of-the-art for basis generation and then discuss the sensitivity of the predicted pulse shapes when varying some key parameters.

  16. MAMAP - a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: retrieval algorithm and first inversions for point source emission rates

    NASA Astrophysics Data System (ADS)

    Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Pflüger, U.; Burrows, J. P.; Bovensmann, H.

    2011-09-01

    MAMAP is an airborne passive remote sensing instrument designed to measure the dry columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument comprises two optical grating spectrometers: the first observing in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions, and the second in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference/normalisation purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an aeroplane, MAMAP surveys areas on regional to local scales with a ground pixel resolution of approximately 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP measurements are valuable to close the gap between satellite data, having global coverage but with a rather coarse resolution, on the one hand, and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007, test flights were performed over two coal-fired power plants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr-1) and Schwarze Pumpe (11.9 Mt CO2 yr-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions reported by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of deriving estimates for strong point source emission rates that are within ±10% of the reported values, given appropriate flight patterns and detailed knowledge of wind conditions.

  17. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  18. A domain decomposition scheme for Eulerian shock physics codes

    SciTech Connect

    Bell, R.L.; Hertel, E.S. Jr.

    1994-08-01

    A new algorithm which allows for complex domain decomposition in Eulerian codes was developed at Sandia National Laboratories. This new feature allows a user to customize the zoning for each portion of a calculation and to refine volumes of the computational space of particular interest. This option is available in one, two, and three dimensions. The new technique will be described in detail, and several examples of the effectiveness of this technique will also be discussed.

  19. LUPOD: Collocation in POD via LU decomposition

    NASA Astrophysics Data System (ADS)

    Rapún, M.-L.; Terragni, F.; Vega, J. M.

    2017-04-01

    A collocation method is developed for the (truncated) POD of a set of snapshots. In other words, POD computations are performed using only a set of collocation points, whose number is comparable to the number of retained modes, in a similar fashion as in collocation spectral methods. Intending to rely on simple ideas which, moreover, are consistent with the essence of POD, collocation points are computed via the LU decomposition with pivoting of the snapshot matrix. The new method is illustrated in simple applications in which POD is used as a data-processing method. The performance of the method is tested in the computationally efficient construction of reduced order models based on POD plus Galerkin projection for the complex Ginzburg-Landau equation in one and two space dimensions.
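    One possible reading of the LUPOD idea is sketched below: use partial-pivoting row selection (the row order that LU elimination with pivoting would choose on the snapshot matrix) to pick a small set of collocation points, then build POD modes from the data restricted to those rows. The elimination is written out explicitly to expose the pivot choice; it is a simplified sketch, not the authors' code.

    import numpy as np

    def lu_pivot_rows(S, k):
        """Indices of the first k pivot rows of snapshot matrix S (points x snapshots)."""
        A = S.astype(float).copy()           # assumes k <= min(S.shape)
        rows = np.arange(A.shape[0])
        for j in range(k):
            p = j + np.argmax(np.abs(A[j:, j]))            # partial pivoting on column j
            A[[j, p], :] = A[[p, j], :]
            rows[[j, p]] = rows[[p, j]]
            A[j+1:, j:] -= np.outer(A[j+1:, j] / A[j, j], A[j, j:])   # eliminate below pivot
        return rows[:k]

    def lupod_modes(S, k):
        pts = lu_pivot_rows(S, k)
        U, s, Vt = np.linalg.svd(S[pts, :], full_matrices=False)      # POD of collocated data
        return pts, Vt[:k, :]        # collocation points and temporal POD coefficients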

  20. Domain Decomposition Methods for Problems in H(curl)

    NASA Astrophysics Data System (ADS)

    Calvo, Juan Gabriel

    Two domain decomposition methods for solving vector field problems posed in H(curl) and discretized with Nedelec finite elements are considered. These finite elements are conforming in H(curl). A two-level overlapping Schwarz algorithm in two dimensions is analyzed, where the subdomains are only assumed to be uniform in the sense of Peter Jones. The coarse space is based on energy minimization and its dimension equals the number of interior subdomain edges. Local direct solvers are based on the overlapping subdomains. The bound for the condition number depends only on a few geometric parameters of the decomposition. This bound is independent of jumps in the coefficients across the interface between the subdomains for most of the different cases considered. A bound is also obtained for the condition number of a balancing domain decomposition by constraints (BDDC) algorithm in two dimensions, with Jones subdomains. For the primal variable space, a continuity constraint for the tangential average over each interior subdomain edge is imposed. For the averaging operator, a new technique named deluxe scaling is used. The optimal bound is independent of jumps in the coefficients across the interface between the subdomains. Furthermore, a new coarse function for problems in three dimensions is introduced, with only one degree of freedom per subdomain edge. In all the cases, it is established that the algorithms are scalable. Numerical results that verify the analysis are provided, including some with subdomains with fractal edges and others obtained by a mesh partitioner.

  1. Variational mode decomposition denoising combined with the Hausdorff distance

    NASA Astrophysics Data System (ADS)

    Ma, Wenping; Yin, Shuxin; Jiang, Chunlei; Zhang, Yansheng

    2017-03-01

    Variational mode decomposition (VMD) is a recently introduced adaptive signal decomposition algorithm with a solid theoretical foundation and good noise robustness compared with empirical mode decomposition (EMD). However, there is still a problem with this algorithm associated with the selection of relevant modes. To solve this problem, this paper proposes a novel signal-filtering method, termed VMD-HD, that combines VMD with the Hausdorff distance (HD). A noisy signal is first decomposed into a given number K of band-limited intrinsic mode functions by VMD. The probability density function is then estimated for each mode. The aim of this method is to reconstruct the signal using the relevant modes, which are selected on the basis of noticeable similarities between the probability density function of the input signal and that of each mode. Various similarity measures are investigated and compared, and the HD is shown to offer the best performance. The results of filtering simulated signals illustrate the validity of the proposed method when compared with EMD-based methods under comprehensive quantitative evaluation criteria. As a specific example, the proposed method is successfully used for filtering the pipeline leakage signal as evaluated by the de-trended fluctuation analysis algorithm.
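    The mode-selection step can be sketched as follows. `vmd` stands for any VMD implementation returning K band-limited modes (assumed here, not provided); each mode's histogram-based PDF is compared with the PDF of the noisy input via the Hausdorff distance, and only the closest modes are summed. Parameter choices are illustrative.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def pdf_curve(x, bins, rng):
        hist, edges = np.histogram(x, bins=bins, range=rng, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return np.column_stack([centers, hist])      # PDF represented as a 2-D point set

    def vmd_hd_filter(signal, vmd, K=8, keep=3, bins=64):
        modes = vmd(signal, K)                       # assumed: array of shape (K, N)
        rng = (signal.min(), signal.max())
        ref = pdf_curve(signal, bins, rng)
        dists = []
        for m in modes:
            cur = pdf_curve(m, bins, rng)
            d = max(directed_hausdorff(ref, cur)[0], directed_hausdorff(cur, ref)[0])
            dists.append(d)
        relevant = np.argsort(dists)[:keep]          # modes most similar to the input
        return np.sum(np.asarray(modes)[relevant], axis=0)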

  2. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    SciTech Connect

    Seshadhri, Comandur; Pinar, Ali; Sariyuce, Ahmet Erdem; Catalyurek, Umit

    2014-11-01

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization to name a few. Yet most standard formulations of this problem (like clique, quasiclique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
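    Since the nucleus decomposition generalizes k-cores and k-trusses, the familiar k-core peeling (available in networkx) gives a minimal, hedged illustration of "density increasing toward the leaves" without implementing the full clique-based nucleus hierarchy.

    import networkx as nx

    G = nx.karate_club_graph()
    core = nx.core_number(G)            # largest k such that the node lies in a k-core
    for k in sorted(set(core.values())):
        sub = nx.k_core(G, k)
        print(f"{k}-core: {sub.number_of_nodes()} nodes, {sub.number_of_edges()} edges")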

  3. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
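    The colouring step implied here is that double patterning assigns polygons to two masks, i.e. a 2-colouring (bipartiteness check) of the conflict graph. The plain breadth-first-search sketch below is illustrative only; the paper's concurrent BFS, the conflict-graph construction from layout geometry, and conflict resolution are not shown.

    from collections import deque

    def two_color(conflict_graph):
        """conflict_graph: dict node -> iterable of conflicting nodes (all nodes are keys).
        Returns node -> mask (0/1), or None if an odd cycle makes decomposition fail."""
        color = {}
        for start in conflict_graph:
            if start in color:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in conflict_graph[u]:
                    if v not in color:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return None              # odd conflict cycle: not decomposable
        return color

    print(two_color({"A": ["B"], "B": ["A", "C"], "C": ["B"]}))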

  4. An overview of statistical decomposition techniques applied to complex systems

    PubMed Central

    Tuncer, Yalcin; Tanik, Murat M.; Allison, David B.

    2009-01-01

    The current state of the art in applied decomposition techniques is summarized within a comparative uniform framework. These techniques are classified by the parametric or information theoretic approaches they adopt. An underlying structural model common to all parametric approaches is outlined. The nature and premises of a typical information theoretic approach are stressed. Some possible application patterns for an information theoretic approach are illustrated. Composition is distinguished from decomposition by pointing out that the former is not a simple reversal of the latter. From the standpoint of application to complex systems, a general evaluation is provided. PMID:19724659

  5. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In a collaborative filtering computation, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next phase of computation is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute this SVD as part of a solution to the open competition called the "Netflix Prize". The algorithm uses an iterative method so that the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA, and experiments show it to be efficient.
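    The Funk-style iterative SVD can be sketched as approximating the rating matrix by P @ Q.T, with the factors updated by gradient descent on the observed entries only. The plain sequential update rule is shown below; the paper's GPU/CUDA parallelization is not reproduced, and the hyperparameters are illustrative.

    import numpy as np

    def funk_svd(ratings, rank=10, lr=0.005, reg=0.02, epochs=50, seed=0):
        """ratings: list of (user, item, value) triples with 0-based integer ids."""
        n_users = 1 + max(u for u, _, _ in ratings)
        n_items = 1 + max(i for _, i, _ in ratings)
        rng = np.random.default_rng(seed)
        P = 0.1 * rng.standard_normal((n_users, rank))
        Q = 0.1 * rng.standard_normal((n_items, rank))
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]
                pu = P[u].copy()                       # use the pre-update user factors
                P[u] += lr * (err * Q[i] - reg * pu)
                Q[i] += lr * (err * pu - reg * Q[i])
        return P, Q          # predicted rating for (u, i) is P[u] @ Q[i]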

  6. TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT

    SciTech Connect

    Niu, T; Dong, X; Petrongolo, M; Zhu, L

    2014-06-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative

  7. Domain Decomposition and Load Balancing in the Amtran Neutron Transport Code

    SciTech Connect

    Compton, J; Clouse, C

    2003-07-07

    Effective spatial domain decomposition for discrete ordinate (Sn) neutron transport calculations has been critical for exploiting massively parallel architectures typified by the ASCI White computer at Lawrence Livermore National Laboratory. A combination of geometrical and computational constraints has posed a unique challenge as problems have been scaled up to several thousand processors. Carefully scripted decomposition and corresponding execution algorithms have been developed to handle a range of geometrical and hardware configurations.

  8. Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira

    2008-03-01

    Dental implant is one of the most popular methods of tooth root replacement used in prosthetic dentistry. A computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of patients. Dental tool-tip calibration is an important intraoperative procedure that determines the relation between the hand-piece tool tip and the hand-piece markers. When coordinates are transferred from preoperative CT data to reality, this relation is one of the components of the typical registration problem. It is part of a navigation system that will be developed for further integration. High accuracy is required, and the relation is obtained by point-cloud-to-point-cloud rigid transformations and singular value decomposition (SVD) to minimize rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialize, have had limited flexibility in tool-tip calibration: their systems either require a special tool-tip calibration device or are unable to accommodate a change of tool. The proposed procedure uses the pointing device or hand-piece to touch a fixed pivot point; the transformation matrix is calculated each time the device moves to a new position while the tool tip stays at the same point. The experiments relied on tracking-device information, image acquisition, and image-processing algorithms. The key result is that the point-cloud-to-point-cloud approach requires only three pose images of the tool to converge to a minimum error of 0.77%, and the obtained result is accurate enough for the tool holder to track the path simulation line displayed in the graphic animation.
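    A hedged sketch of pivot-style tool-tip calibration: with the tip held on a fixed pivot point, each tracked pose (R_i, p_i) of the hand-piece markers satisfies R_i @ t_tip + p_i = p_pivot, and stacking the poses gives a linear least-squares problem for the tip offset and the pivot location. This is a standard formulation offered for illustration, not necessarily the authors' exact procedure.

    import numpy as np

    def pivot_calibration(rotations, translations):
        """rotations: list of 3x3 marker-to-tracker rotations; translations: list of 3-vectors.
        Returns (t_tip in marker coordinates, p_pivot in tracker coordinates)."""
        A_rows, b_rows = [], []
        for R, p in zip(rotations, translations):
            A_rows.append(np.hstack([R, -np.eye(3)]))   # [R_i  -I] [t_tip; p_pivot] = -p_i
            b_rows.append(-np.asarray(p, dtype=float))
        A = np.vstack(A_rows)
        b = np.concatenate(b_rows)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)       # solved via SVD internally
        return x[:3], x[3:]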

  9. Thermal decomposition products of butyraldehyde

    NASA Astrophysics Data System (ADS)

    Hatten, Courtney D.; Kaskey, Kevin R.; Warner, Brian J.; Wright, Emily M.; McCunn, Laura R.

    2013-12-01

    The thermal decomposition of gas-phase butyraldehyde, CH3CH2CH2CHO, was studied in the 1300-1600 K range with a hyperthermal nozzle. Products were identified via matrix-isolation Fourier transform infrared spectroscopy and photoionization mass spectrometry in separate experiments. There are at least six major initial reactions contributing to the decomposition of butyraldehyde: a radical decomposition channel leading to propyl radical + CO + H; molecular elimination to form H2 + ethylketene; a keto-enol tautomerism followed by elimination of H2O producing 1-butyne; an intramolecular hydrogen shift and elimination producing vinyl alcohol and ethylene; a β-C-C bond scission yielding ethyl and vinoxy radicals; and a γ-C-C bond scission yielding methyl and CH2CH2CHO radicals. The first three reactions are analogous to those observed in the thermal decomposition of acetaldehyde, but the latter three reactions are made possible by the longer alkyl chain structure of butyraldehyde. The products identified following thermal decomposition of butyraldehyde are CO, HCO, CH3CH2CH2, CH3CH2CH=C=O, H2O, CH3CH2C≡CH, CH2CH2, CH2=CHOH, CH2CHO, CH3, HC≡CH, CH2CCH, CH3C≡CH, CH3CH=CH2, H2C=C=O, CH3CH2CH3, CH2=CHCHO, C4H2, C4H4, and C4H8. The first ten products listed are direct products of the six reactions listed above. The remaining products can be attributed to further decomposition reactions or bimolecular reactions in the nozzle.

  10. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…

  11. A decomposition method based on a model of continuous change.

    PubMed

    Horiuchi, Shiro; Wilmoth, John R; Pletcher, Scott D

    2008-11-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period.
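    The continuous-change idea can be sketched numerically: covariates are assumed to move linearly from population 1 to population 2, and the contribution of each covariate is the line integral of its partial effect, approximated by summing small one-at-a-time changes over many subintervals. The function f and step count below are illustrative; the published method also addresses path choices and data at multiple time points.

    import numpy as np

    def decompose(f, x1, x2, n_steps=1000):
        """Split f(x2) - f(x1) into per-covariate contributions."""
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        contrib = np.zeros_like(x1)
        for s in range(n_steps):
            lo = x1 + (x2 - x1) * s / n_steps            # covariate values at step start
            hi = x1 + (x2 - x1) * (s + 1) / n_steps      # ... and at step end
            mid = 0.5 * (lo + hi)
            for i in range(len(x1)):
                a, b = mid.copy(), mid.copy()
                a[i], b[i] = lo[i], hi[i]
                contrib[i] += f(b) - f(a)                # effect of moving covariate i alone
        return contrib                                   # sums (approximately) to f(x2) - f(x1)

    # Example: a non-additive measure f = x0 * x1 still decomposes without a residual term.
    f = lambda x: x[0] * x[1]
    print(decompose(f, [1.0, 2.0], [2.0, 3.0]), f([2.0, 3.0]) - f([1.0, 2.0]))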

  12. THE DECOMPOSITION OF HYDROGEN PEROXIDE BY LIVER CATALASE

    PubMed Central

    Williams, John

    1928-01-01

    1. The velocity of decomposition of hydrogen peroxide by catalase as a function of (a) concentration of catalase, (b) concentration of hydrogen peroxide, (c) hydrogen ion concentration, (d) temperature has been studied in an attempt to correlate these variables as far as possible. It is concluded that the reaction involves primarily adsorption of hydrogen peroxide at the catalase surface. 2. The decomposition of hydrogen peroxide by catalase is regarded as involving two reactions, namely, the catalytic decomposition of hydrogen peroxide, which is a maximum at the optimum pH 6.8 to 7.0, and the "induced inactivation" of catalase by the "nascent" oxygen produced by the hydrogen peroxide and still adhering to the catalase surface. This differs from the more generally accepted view, namely that the induced inactivation is due to the H2O2 itself. On the basis of the above view, a new interpretation is given to the equation of Yamasaki and the connection between the equations of Yamasaki and of Northrop is pointed out. It is shown that the velocity of induced inactivation is a minimum at the pH which is optimal for the decomposition of hydrogen peroxide. 3. The critical increment of the catalytic decomposition of hydrogen peroxide by catalase is of the order 3000 calories. The critical increment of induced inactivation is low in dilute hydrogen peroxide solutions but increases to a value of 30,000 calories in concentrated solutions of peroxide. PMID:19872400

  13. Challenges of Diagnosing Acute HIV-1 Subtype C Infection in African Women: Performance of a Clinical Algorithm and the Need for Point-of-Care Nucleic-Acid Based Testing

    PubMed Central

    Mlisana, Koleka; Sobieszczyk, Magdalena; Werner, Lise; Feinstein, Addi; van Loggerenberg, Francois; Naicker, Nivashnee; Williamson, Carolyn; Garrett, Nigel

    2013-01-01

    Background Prompt diagnosis of acute HIV infection (AHI) benefits the individual and provides opportunities for public health intervention. The aim of this study was to describe the most common signs and symptoms of AHI, correlate these with early disease progression, and develop a clinical algorithm to identify acute HIV cases in resource-limited settings. Methods 245 South African women at high risk of HIV-1 were assessed for AHI and received monthly HIV-1 antibody and RNA testing. Signs and symptoms at the first HIV-positive visit were compared to HIV-negative visits. Logistic regression identified clinical predictors of AHI. A model-based score was assigned to each predictor to create a risk score for every woman. Results Twenty-eight women seroconverted after a total of 390 person-years of follow-up with an HIV incidence of 7.2/100 person-years (95%CI 4.5–9.8). Fifty-seven percent reported ≥1 sign or symptom at the AHI visit. Factors predictive of AHI included age <25 years (OR = 3.2; 1.4–7.1), rash (OR = 6.1; 2.4–15.4), sore throat (OR = 2.7; 1.0–7.6), weight loss (OR = 4.4; 1.5–13.4), genital ulcers (OR = 8.0; 1.6–39.5) and vaginal discharge (OR = 5.4; 1.6–18.4). A risk score of 2 correctly predicted AHI in 50.0% of cases. The number of signs and symptoms correlated with higher HIV-1 RNA at diagnosis (r = 0.63; p<0.001). Conclusions Accurate recognition of signs and symptoms of AHI is critical for early diagnosis of HIV infection. Our algorithm may assist in risk-stratifying individuals for AHI, especially in resource-limited settings where there is no routine testing for AHI. Independent validation of the algorithm on another cohort is needed to assess its utility further. Point-of-care antigen or viral load technology is required, however, to detect asymptomatic, antibody-negative cases, enabling early interventions and prevention of transmission. PMID:23646162

  14. The ecology of carrion decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Carrion, or the remains of dead animals, is something that most people would like to avoid. It is visually unpleasant, emits foul odors, and may be the source of numerous pathogens. Decomposition of carrion, however, provides a unique opportunity for scientists to investigate how nutrients cycle t...

  15. Microbial interactions during carrion decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This addresses the microbial ecology of carrion decomposition in the age of metagenomics. It describes what is known about the microbial communities on carrion, including a brief synopsis about the communities on other organic matter sources. It provides a description of studies using state-of-the...

  16. Cadaver decomposition in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Carter, David O.; Yellowlees, David; Tibbett, Mark

    2007-01-01

    A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.

  17. An analysis of scatter decomposition

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1990-01-01

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to explain formally why and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
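
    A minimal NumPy sketch of the modular (cyclic) assignment that scatter decomposition refers to, applied to a synthetic correlated 1-D workload; the piece counts, processor count, and workload model below are illustrative, not the paper's.

        import numpy as np

        def scatter_decomposition(num_pieces, num_procs):
            """Assign equal-sized domain pieces to processors modularly (cyclically)."""
            return np.arange(num_pieces) % num_procs

        def processor_loads(workload_per_piece, assignment, num_procs):
            """Aggregate per-piece workloads into per-processor loads."""
            loads = np.zeros(num_procs)
            np.add.at(loads, assignment, workload_per_piece)
            return loads

        # Illustration: a correlated 1-D workload; finer scattering lowers load variance.
        rng = np.random.default_rng(0)
        workload = np.cumsum(rng.normal(size=1024))   # random walk -> correlated workload
        workload -= workload.min() - 1.0              # keep the workload positive
        for pieces in (16, 64, 256):
            per_piece = workload.reshape(pieces, -1).sum(axis=1)
            loads = processor_loads(per_piece, scatter_decomposition(pieces, 16), 16)
            print(pieces, "pieces -> load variance", loads.var())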

  18. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression of IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it is only used for the case of extrema spaced uniformly. In this paper, the EMD algorithm is analyzed from digital signal processing perspective for the case of extrema spaced nonuniformly. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed through modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for knots spaced nonuniformly is shown as two basic operations, time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression of envelopes of signals in both time and frequency domain is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model IMFs is derived by using a matrix formulation in time domain for the general case of extrema spaced nonuniformly.
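
    For orientation, a much-simplified sketch of one EMD sifting pass using cubic-spline envelopes is given below; it omits the boundary handling, stopping criteria, and the time-varying filter analysis developed in the paper, and the test signal is invented.

        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def sift_once(t, x):
            """One sifting pass: subtract the mean of the upper and lower spline envelopes."""
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:
                return x                          # too few extrema to build envelopes
            upper = CubicSpline(t[maxima], x[maxima])(t)
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - 0.5 * (upper + lower)

        t = np.linspace(0, 1, 2000)
        signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        proto_imf = signal.copy()
        for _ in range(10):                       # fixed number of sifts instead of a stopping test
            proto_imf = sift_once(t, proto_imf)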

  19. Dynamics of photospheric bright points in G-band derived from two fully automated algorithms. (Slovak Title: Dynamika fotosférických jasných bodov v G-páse odvodená použitím dvoch plne automatických algoritmov)

    NASA Astrophysics Data System (ADS)

    Bodnárová, M.; Rybák, J.; Hanslmeier, A.; Utz, D.

    2010-12-01

    Concentrations of small-scale magnetic field in the solar photosphere can be identified in the G-band of the solar spectrum as bright points. Studying the dynamics of the bright points in the G-band (BPGBs) can also help in addressing many issues related to the problem of solar corona heating. In this work, we have used a set of 142 speckle-reconstructed G-band images taken by the Dutch Open Telescope (DOT) on 19 October 2005 to compare two fully automated algorithms identifying BPGBs: an algorithm developed by Utz et al. (2009, 2010), and an algorithm developed following the work of Berger et al. (1995, 1998). We then followed the motion of the BPGBs identified by both algorithms in time and space and constructed the distributions of their lifetimes, sizes and speeds. The results show that both algorithms give very similar results for the BPGB lifetimes and speeds, but their results differ significantly for the sizes of the identified BPGBs. This difference is due to the fact that in the case of the Berger et al. identification algorithm no additional criteria were applied to constrain the allowed BPGB sizes. As a result, in further studies of BPGB dynamics we will prefer to use the Utz algorithm to identify and track BPGBs.

  20. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition case. The parallel performance of the proposed method is strongly dependent on load balancing separately the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.
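
    As context for the parallelization, a serial reference sketch of the Fast Marching Method on a uniform 2-D grid (first-order upwind update, unit speed) is shown below; a domain decomposition version would partition this grid and exchange narrow-band values across subdomain boundaries. The grid size and seed point are illustrative.

        import heapq
        import numpy as np

        def fast_marching(shape, seeds, h=1.0, speed=1.0):
            """Serial Fast Marching Method: first-arrival times from seed points on a 2-D grid."""
            T = np.full(shape, np.inf)
            accepted = np.zeros(shape, dtype=bool)
            heap = []
            for s in seeds:
                T[s] = 0.0
                heapq.heappush(heap, (0.0, s))

            def update(i, j):
                # Smallest accepted/tentative neighbour values in each grid direction.
                ta = min(T[i - 1, j] if i > 0 else np.inf,
                         T[i + 1, j] if i < shape[0] - 1 else np.inf)
                tb = min(T[i, j - 1] if j > 0 else np.inf,
                         T[i, j + 1] if j < shape[1] - 1 else np.inf)
                a, b = min(ta, tb), max(ta, tb)
                if b - a >= h / speed:            # one-sided (single-neighbour) update
                    return a + h / speed
                return 0.5 * (a + b + np.sqrt(2 * (h / speed) ** 2 - (a - b) ** 2))

            while heap:
                t, (i, j) = heapq.heappop(heap)
                if accepted[i, j]:
                    continue
                accepted[i, j] = True
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < shape[0] and 0 <= nj < shape[1] and not accepted[ni, nj]:
                        t_new = update(ni, nj)
                        if t_new < T[ni, nj]:
                            T[ni, nj] = t_new
                            heapq.heappush(heap, (t_new, (ni, nj)))
            return T

        arrival = fast_marching((64, 64), seeds=[(32, 32)])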

  1. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and arrive at a convex problem involving trace norm minimization of much smaller matrices. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
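
    The core proximal step behind matrix trace (nuclear) norm minimization is singular value thresholding; the sketch below shows that generic operator only, not the authors' full ADMM-based TNCP solver, and the test matrix is synthetic.

        import numpy as np

        def singular_value_thresholding(M, tau):
            """Proximal operator of the trace (nuclear) norm: shrink singular values by tau."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s_shrunk = np.maximum(s - tau, 0.0)
            return U @ np.diag(s_shrunk) @ Vt

        # Example: denoise a noisy low-rank factor matrix.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))   # rank-3 matrix
        X = singular_value_thresholding(A + 0.1 * rng.normal(size=A.shape), tau=1.0)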

  2. Decomposition patterns in terrestrial and intertidal habitats on Oahu Island and Coconut Island, Hawaii.

    PubMed

    Davis, J B; Goff, M L

    2000-07-01

    Decomposition studies were conducted at two sites on the Island of Oahu, Hawaii, to compare patterns of decomposition and arthropod invasion in intertidal and adjacent terrestrial habitats. The animal model used was the domestic pig. One site was on Coconut Island in Kaneohe Bay on the northeast side of Oahu, and the second was conducted in an anchialine pool located at Barber's Point Naval Air Station on the southwest shore of Oahu. At both sites, the terrestrial animal decomposed in a manner similar to what has been observed in previous studies in terrestrial habitats on the island of Oahu. Rate of biomass depletion was slower in both intertidal studies, and decomposition was primarily due to tide and wave activity and bacterial decomposition. No permanent colonization of carcasses by insects was seen for the intertidal carcass at Coconut Island. At the anchialine pool at Barber's Point Naval Air Station, Diptera larvae were responsible for biomass removal until the carcass was reduced below the water line and, from that point on, bacterial action was the means of decomposition. Marine and terrestrial scavengers were present at both sites although their impact on decomposition was negligible. Five stages of decomposition were recognized for the intertidal sites: fresh, buoyant/floating, deterioration/disintegration, buoyant remains, and scattered skeletal.

  3. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
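
    The half-interval (bisection) search the article verifies can be sketched as follows; the tolerance and the example equation are illustrative.

        def half_interval_search(f, a, b, tol=1e-10):
            """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
            fa, fb = f(a), f(b)
            if fa * fb > 0:
                raise ValueError("f(a) and f(b) must have opposite signs")
            while b - a > tol:
                m = 0.5 * (a + b)
                fm = f(m)
                if fa * fm <= 0:        # root lies in the left half-interval
                    b, fb = m, fm
                else:                   # root lies in the right half-interval
                    a, fa = m, fm
            return 0.5 * (a + b)

        root = half_interval_search(lambda x: x**3 - 2*x - 5, 1.0, 3.0)   # ~2.0946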

  4. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE PAGES

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  5. Investigating hydrogel dosimeter decomposition by chemical methods

    NASA Astrophysics Data System (ADS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products.

  6. A parallel householder tridiagonalization stratagem using scattered row decomposition

    NASA Technical Reports Server (NTRS)

    Chang, H. Y.; Utku, S.; Salama, M.; Rapp, D.

    1988-01-01

    Householder's method for tridiagonalizing a real symmetric matrix, a major step in evaluating eigenvalues of the matrix, is modified into a parallel algorithm for a concurrent machine of message passing type. Each processor of the concurrent machine has its own CPU, communications control and local memory. Messages are passed through connections between processors. Although the basic algorithm is inherently serial, the computations can be spread over all processors by scattering different rows of the matrix into processors, hence the term 'Scattered Row Decomposition'. The steps in the serial and the parallel algorithms are identified. Expressions for efficiency and speedup are given in terms of problem and machine parameters. For a concurrent machine of ring type interconnection, a selected representative problem of large order exhibits efficiency approaching 66 per cent.
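
    A serial NumPy sketch of Householder tridiagonalization is given below for reference; the scattered-row scheme distributes the rows of the matrix across processors, but the per-step arithmetic is the same. The test matrix is random.

        import numpy as np

        def householder_tridiagonalize(A):
            """Reduce a real symmetric matrix to tridiagonal form via Householder reflections."""
            T = A.astype(float).copy()
            n = T.shape[0]
            for k in range(n - 2):
                x = T[k + 1:, k]
                v = x.copy()
                v[0] += np.copysign(np.linalg.norm(x), x[0])
                norm_v = np.linalg.norm(v)
                if norm_v == 0:
                    continue
                v /= norm_v
                H = np.eye(n - k - 1) - 2.0 * np.outer(v, v)   # reflector on the trailing block
                T[k + 1:, k:] = H @ T[k + 1:, k:]              # apply from the left
                T[:, k + 1:] = T[:, k + 1:] @ H                # apply from the right
            return T

        rng = np.random.default_rng(0)
        S = rng.normal(size=(6, 6))
        S = S + S.T                                            # make it symmetric
        Tri = householder_tridiagonalize(S)                    # eigenvalues of Tri match those of S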

  7. Thermal decomposition and non-isothermal decomposition kinetics of carbamazepine

    NASA Astrophysics Data System (ADS)

    Qi, Zhen-li; Zhang, Duan-feng; Chen, Fei-xiong; Miao, Jun-yan; Ren, Bao-zeng

    2014-12-01

    The thermal stability and decomposition kinetics of carbamazepine were studied under non-isothermal conditions by thermogravimetry (TGA) and differential scanning calorimetry (DSC) at three heating rates. Notably, a transformation of crystal forms occurs at 153.75°C. The activation energy of the thermal decomposition process was calculated from the analysis of TG curves by the Flynn-Wall-Ozawa, Doyle, distributed activation energy model, Šatava-Šesták and Kissinger methods. There were two distinct stages in the thermal decomposition process. For the first stage, E and log A [s⁻¹] were determined to be 42.51 kJ mol⁻¹ and 3.45, respectively. In the second stage, E and log A [s⁻¹] were 47.75 kJ mol⁻¹ and 3.80. The mechanism of thermal decomposition was Avrami-Erofeev (reaction order n = 1/3), with integral form G(α) = [-ln(1 − α)]^(1/3) (α ≈ 0.1–0.8), in the first stage and Avrami-Erofeev (reaction order n = 1), with integral form G(α) = −ln(1 − α) (α ≈ 0.9–0.99), in the second stage. Moreover, the ΔH‡, ΔS‡ and ΔG‡ values were 37.84 kJ mol⁻¹, −192.41 J mol⁻¹ K⁻¹ and 146.32 kJ mol⁻¹ for the first stage, and 42.68 kJ mol⁻¹, −186.41 J mol⁻¹ K⁻¹ and 156.26 kJ mol⁻¹ for the second stage, respectively.
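
    A minimal sketch of the Kissinger method named in the abstract: a linear fit of ln(β/Tp²) against 1/Tp across heating rates gives the activation energy from the slope. The heating rates and peak temperatures below are placeholders, not the paper's data.

        import numpy as np

        R = 8.314  # gas constant, J mol^-1 K^-1

        def kissinger(beta, Tp):
            """Return activation energy E (J/mol) and the intercept ln(A*R/E) from a Kissinger plot."""
            beta = np.asarray(beta, float)   # heating rates
            Tp = np.asarray(Tp, float)       # DSC/TG peak temperatures (K) at each heating rate
            y = np.log(beta / Tp**2)
            x = 1.0 / Tp
            slope, intercept = np.polyfit(x, y, 1)
            return -slope * R, intercept     # slope = -E/R

        # Placeholder data: three heating rates and their peak temperatures.
        E, _ = kissinger([5.0, 10.0, 20.0], [470.0, 480.0, 491.0])
        print(E / 1000.0, "kJ/mol")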

  8. The Effect of Clothing on the Rate of Decomposition and Diptera Colonization on Sus scrofa Carcasses.

    PubMed

    Card, Allison; Cross, Peter; Moffatt, Colin; Simmons, Tal

    2015-07-01

    Twenty Sus scrofa carcasses were used to study the effect the presence of clothing had on decomposition rate and colonization locations of Diptera species; 10 unclothed control carcasses were compared to 10 clothed experimental carcasses over 58 days. Data collection occurred at regular accumulated degree day intervals; the level of decomposition as Total Body Score (TBSsurf), the pattern of decomposition, and the Diptera present were documented. Results indicated a statistically significant difference in the rate of decomposition (t(427) = 2.59, p = 0.010), with unclothed carcasses decomposing faster than clothed carcasses. However, the overall decomposition rates from each carcass group are too similar to separate when applying a 95% CI, which means that, although statistically significant, from a practical forensic point of view they are not sufficiently dissimilar as to warrant the application of different formulae to estimate the postmortem interval. Further results demonstrated that clothing provided blow flies with additional colonization locations.

  9. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
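
    A digital sketch of threshold decomposition for median filtering, mirroring the optical pipeline described above: threshold the image at every grey level, apply a linear space-invariant (box) filter to each binary slice, compare point-wise against half the window, and stack the slices back. The window size and test image are illustrative; up to boundary handling this should agree with a conventional median filter.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def median_by_threshold_decomposition(image, size=3, levels=256):
            """Median filter of an integer-valued image via threshold decomposition."""
            out = np.zeros(image.shape, dtype=int)
            for t in range(1, levels):
                slice_t = (image >= t).astype(float)            # binary threshold component
                filtered = uniform_filter(slice_t, size=size)   # linear, space-invariant filtering
                out += (filtered > 0.5).astype(int)             # point-to-point threshold comparison
            return out

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64))
        med = median_by_threshold_decomposition(img)            # 3x3 median-filtered image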

  10. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.

  11. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic-gradient Bussgang-type algorithm, is given for deriving two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.
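
    For illustration, a sketch of a Godard (constant-modulus) blind equalizer of the kind the thesis starts from, with a simple sample-wise normalization; the channel, constellation, tap count and step size are invented, and the block-normalized variants are not reproduced.

        import numpy as np

        def cma_equalize(received, num_taps=11, mu=1e-3, R2=1.0):
            """Blind Godard/CMA equalizer: adapt w so that |y|^2 approaches the constant R2."""
            w = np.zeros(num_taps, dtype=complex)
            w[num_taps // 2] = 1.0                       # centre-spike initialization
            out = np.zeros(len(received), dtype=complex)
            for n in range(num_taps, len(received)):
                x = received[n - num_taps:n][::-1]       # regressor (most recent sample first)
                y = np.vdot(w, x)                        # equalizer output  w^H x
                e = y * (np.abs(y) ** 2 - R2)            # Godard (p = 2) error
                w -= mu * e.conjugate() * x / (np.vdot(x, x).real + 1e-12)   # normalized update
                out[n] = y
            return out, w

        rng = np.random.default_rng(0)
        symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=5000) / np.sqrt(2)  # unit-modulus QPSK
        channel = np.array([1.0, 0.4, 0.2j])                                      # toy dispersive channel
        rx = np.convolve(symbols, channel)[:len(symbols)]
        equalized, w = cma_equalize(rx, R2=1.0)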

  12. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    … and A. Ekert and C. Macchiavello and M. Mosca; "Phase map decompositions for unitaries", Niel de Beaudrap, Vincent Danos, Elham …, quant-ph/0609160v1; "Quantum Algorithms and Complexity", M. Mosca, Proceedings of NATO ASI Quantum Computation and Information 2005, Chania, Crete, Greece, IOS Press (2006), in press; "Quantum Cellular Automata and Single Spin Measurement", C. Perez, D. Cheung, M. Mosca, P. Cappellaro, D. Cory, Proceedings of Asian Conference on …

  13. Anisotropic decomposition of energetic materials

    SciTech Connect

    Pravica, Michael; Quine, Zachary; Romano, Edward; Bajar, Sean; Yulga, Brian; Yang Wenge; Hooks, Daniel

    2007-12-12

    Using a white x-ray synchrotron beam, we have dynamically studied radiation-induced decomposition in single crystalline PETN and TATB. By monitoring the integrated intensity of selected diffraction spots via a CCD x-ray camera as a function of time, we have found that the decomposition rate varies dramatically depending upon the orientation of the crystalline axes relative to the polarized x-ray beam and for differing diffracting conditions (spots) within the same crystalline orientation. We suggest that this effect is due to Compton scattering of the polarized x-rays by electron clouds, which depends upon their relative orientation. This novel effect may yield valuable insight regarding anisotropic detonation sensitivity in energetic materials such as PETN.

  14. Singular value decomposition and density estimation for filtering and analysis of gene expression

    SciTech Connect

    Rechtsteiner, A.; Gottardo, R.; Rocha, L. M.; Wall, M. E.

    2003-01-01

    We present three algorithms for gene expression analysis. Algorithm 1, known as the serial correlation test, is used for filtering out noisy gene expression profiles. Algorithms 2 and 3 project the gene expression profiles into 2-dimensional expression subspaces identified by Singular Value Decomposition. Density estimates are used to determine expression profiles that have a high correlation with the subspace and low levels of noise. High-density regions in the projection, clusters of co-expressed genes, are identified. We illustrate the algorithms by application to the yeast cell-cycle data of Cho et al. and comparison of the results.
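
    A small sketch in the spirit of Algorithms 2 and 3: project expression profiles onto the 2-D subspace spanned by the top right singular vectors and score how well each profile is explained by it. The data matrix and threshold are synthetic placeholders, not the yeast data.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 17))                 # genes x time points (synthetic)
        X -= X.mean(axis=1, keepdims=True)             # center each expression profile

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        subspace = Vt[:2]                              # top two right singular vectors ("eigen-profiles")
        coords = X @ subspace.T                        # 2-D projection of every gene profile

        # Fraction of each profile's norm captured by the 2-D subspace.
        norms = np.linalg.norm(X, axis=1)
        corr_with_subspace = np.linalg.norm(coords, axis=1) / np.where(norms == 0, 1, norms)
        well_explained = np.where(corr_with_subspace > 0.8)[0]   # candidate low-noise profiles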

  15. Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation

    DOE PAGES

    Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir

    2016-05-01

    The recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithm on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.

  16. A practical scheme of the sigma-point Kalman filter for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Tang, Youmin; Deng, Ziwang; Manoj, K. K.; Chen, Dake

    2014-03-01

    When applying a sigma-point Kalman filter (SPKF) to a high-dimensional system such as an oceanic general circulation model (OGCM), a major challenge is to reduce its heavy memory storage and computational costs. In this study, we propose a new scheme for the SPKF to address these issues. First, a reduced-rank SPKF was introduced on the high-dimensional model state space using the truncated singular value decomposition (TSVD) method (T-SPKF). Second, the relationship of SVDs between the model state space and a low-dimensional ensemble space is used to construct sigma points on the ensemble space (ET-SPKF). As such, this new scheme greatly reduces the demand for memory storage and computational cost and makes the SPKF method applicable to high-dimensional systems. Two numerical models are used to test and validate the ET-SPKF algorithm. The first model is the 40-variable Lorenz model, which has been a test bed for new assimilation algorithms. The second model is a realistic OGCM used for the assimilation of actual observations, including Argo and in situ observations over the Pacific Ocean. The experiments show that the ET-SPKF is computationally feasible for high-dimensional systems and capable of precise analyses. In particular, for realistic oceanic assimilations, the ET-SPKF algorithm can significantly improve the oceanic analysis and improve ENSO prediction. A comparison between the ET-SPKF algorithm and the EnKF (ensemble Kalman filter) is also conducted using the OGCM and actual observations.
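
    The standard unscented-transform sigma-point construction underlying any SPKF is sketched below in the full state space; the ET-SPKF scheme instead builds the points in a low-dimensional ensemble space via truncated SVD, which is not reproduced here. Dimensions and scaling parameters are illustrative.

        import numpy as np

        def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
            """Return the 2n+1 sigma points and their mean/covariance weights."""
            n = len(mean)
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
            pts = np.vstack([mean, mean + S.T, mean - S.T])  # rows are sigma points
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
            return pts, wm, wc

        pts, wm, wc = sigma_points(np.zeros(3), np.eye(3))
        propagated = pts**2                                  # toy nonlinear propagation
        mean_estimate = wm @ propagated                      # unscented mean estimate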

  17. Aflatoxin decomposition in various soils

    SciTech Connect

    Angle, J.S.

    1986-08-01

    The persistence of aflatoxin in the soil environment could potentially result in a number of adverse environmental consequences. To determine the persistence of aflatoxin in soil, ¹⁴C-labeled aflatoxin B1 was added to silt loam, sandy loam, and silty clay loam soils and the subsequent release of ¹⁴CO₂ was determined. After 120 days of incubation, 8.1% of the original aflatoxin added to the silt loam soil was released as CO₂. Aflatoxin decomposition in the sandy loam soil proceeded more quickly than the other two soils for the first 20 days of incubation. After this time, the decomposition rate declined and by the end of the study, 4.9% of the aflatoxin was released as CO₂. Aflatoxin decomposition proceeded most slowly in the silty clay loam soil. Only 1.4% of aflatoxin added to the soil was released as CO₂ after 120 days incubation. To determine whether aflatoxin was bound to the silty clay loam soil, aflatoxin B1 was added to this soil and incubated for 20 days. The soil was periodically extracted and the aflatoxin species present were determined using thin layer chromatographic (TLC) procedures. After one day of incubation, the degradation products, aflatoxins B2 and G2, were observed. It was also found that much of the aflatoxin extracted from the soil was not mobile with the TLC solvent system used. This indicated that a conjugate may have formed and thus may be responsible for the lack of aflatoxin decomposition.

  18. Phlogopite Decomposition, Water, and Venus

    NASA Technical Reports Server (NTRS)

    Johnson, N. M.; Fegley, B., Jr.

    2005-01-01

    Venus is a hot and dry planet with a surface temperature of 660 to 740 K and 30 parts per million by volume (ppmv) water vapor in its lower atmosphere. In contrast Earth has an average surface temperature of 288 K and 1-4% water vapor in its troposphere. The hot and dry conditions on Venus led many to speculate that hydrous minerals on the surface of Venus would not be there today even though they might have formed in a potentially wetter past. Thermodynamic calculations predict that many hydrous minerals are unstable under current Venusian conditions. Thermodynamics predicts whether a particular mineral is stable or not, but we need experimental data on the decomposition rate of hydrous minerals to determine if they survive on Venus today. Previously, we determined the decomposition rate of the amphibole tremolite, and found that it could exist for billions of years at current surface conditions. Here, we present our initial results on the decomposition of phlogopite mica, another common hydrous mineral on Earth.

  19. Methanethiol decomposition on Ni(100)

    SciTech Connect

    Castro, M.E.; Ahkter, S.; Golchet, A.; White, J.M. ); Sahin, T. )

    1991-01-01

    Static secondary ion mass spectroscopy (SSIMS), temperature programmed desorption (TPD), and Auger electron spectroscopy (AES) were used under ultrahigh vacuum conditions to study the decomposition of CH₃SH on Ni(100). Only methane, hydrogen, and the parent molecule are observed in TPD. Complete decomposition to C(a), S(a) and desorbing H₂ is the preferred reaction pathway for low exposures, while desorption of methane is observed at higher coverages. Preadsorbed hydrogen promoted methane desorption. Upon adsorption, and for low coverages, SSIMS evidence indicates S-H bond cleavage into CH₃S and surface hydrogen. S-H bond cleavage is inhibited for high coverages. The TP-SSIMS data are consistent with an activated C-S bond cleavage in CH₃S, with an activation energy of 8.81 kcal/mol and a preexponential factor of 10^6.5 s⁻¹. The low preexponential factor is taken as indicating a complex decomposition pathway. A mechanism consistent with the observed data is discussed.

  20. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  1. Implementation of parallel matrix decomposition for NIKE3D on the KSR1 system

    SciTech Connect

    Su, Philip S.; Fulton, R.E.; Zacharia, T.

    1995-06-01

    New massively parallel computer architecture has revolutionized the design of computer algorithms and promises to have significant influence on algorithms for engineering computations. Realistic engineering problems using finite element analysis typically imply excessively large computational requirements. Parallel supercomputers that have the potential for significantly increasing calculation speeds can meet these computational requirements. This report explores the potential for the parallel Cholesky (UᵀDU) matrix decomposition algorithm on NIKE3D through actual computations. The examples of two- and three-dimensional nonlinear dynamic finite element problems are presented on the Kendall Square Research (KSR1) multiprocessor system, with 64 processors, at Oak Ridge National Laboratory. The numerical results indicate that the parallel Cholesky (UᵀDU) matrix decomposition algorithm is attractive for NIKE3D under multi-processor system environments.
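
    A compact serial sketch of the UᵀDU (equivalently LDLᵀ) factorization that NIKE3D parallelizes; no pivoting is done and the matrix is a small illustrative example.

        import numpy as np

        def utdu_decomposition(A):
            """Factor a symmetric positive-definite matrix as A = U^T D U (U unit upper triangular)."""
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            U = np.eye(n)
            D = np.zeros(n)
            for i in range(n):
                D[i] = A[i, i] - np.sum(D[:i] * U[:i, i] ** 2)
                for j in range(i + 1, n):
                    U[i, j] = (A[i, j] - np.sum(D[:i] * U[:i, i] * U[:i, j])) / D[i]
            return U, D

        S = np.array([[4.0, 2.0, 1.0], [2.0, 3.0, 0.5], [1.0, 0.5, 2.0]])
        U, D = utdu_decomposition(S)
        assert np.allclose(U.T @ np.diag(D) @ U, S)   # reconstruct the original matrix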

  2. Two decoupling methods for non-isothermal DSC results of AIBN decomposition.

    PubMed

    Zhang, Cai-Xing; Lu, Gui-Bin; Chen, Li-Ping; Chen, Wang-Hua; Peng, Min-Jun; Lv, Jia-Yu

    2015-03-21

    During thermal decomposition of azobisisobutyronitrile (AIBN), the endothermic process of phase transition disturbed the exothermic decomposition, which distorted its thermal curves. Therefore, exact kinetic parameters of the decomposition could not be obtained with the existing kinetic analysis models, and accurate enthalpy data for the decomposition and phase transition were not available. Two methods, i.e., a solvent method and a mathematical method, were introduced in this paper to resolve the coupling phenomenon. In the former method, AIBN was dissolved into aniline to eliminate the endothermic process and obtain curves of the liquid-state decomposition. In the latter method, MATLAB software was employed to get the "pure" exothermic decomposition curve without the influence of phase transition by fitting the coupled curves over the section after the transition point and extrapolating to the initial stage of decomposition. Moreover, the kinetic parameters of the "pure" exothermic decomposition of AIBN obtained by the mathematical fitting agreed with the results from the solvent method, verifying the accuracy of the decoupling. The research is of great significance for comprehending the exact characteristics of the thermal behavior and safety parameters of AIBN. It also provides great help in determining the safe operating temperature and alarm temperature for processes in industry.

  3. Nonlinear color-image decomposition for image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi

    2009-01-01

    This paper extends the BV (Bounded Variation) - G and/or the BV-L1 variational nonlinear image-decomposition approaches, which are considered to be useful for image processing of a digital color camera, to genuine color-image decomposition approaches. For utilizing inter-channel color cross-correlations, this paper first introduces TV (Total Variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition-algorithms with an over-complete wavelet transform, through applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV-component, and they achieve desirable high-quality color-image decomposition, which is very robust against colored random noise.

  4. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  5. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
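
    To make the data-access pattern of one MRA step concrete, a single-level 2-D Haar decomposition in plain NumPy is sketched below; the parallel implementations in the paper apply the same per-level splitting, and the input tile here is synthetic rather than a remotely sensed image.

        import numpy as np

        def haar_decompose_2d(image):
            """One level of 2-D Haar MRA: approximation plus horizontal/vertical/diagonal details."""
            x = np.asarray(image, dtype=float)
            # Filter and downsample along rows.
            lo_r = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
            hi_r = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
            # Filter and downsample along columns.
            LL = (lo_r[0::2, :] + lo_r[1::2, :]) / np.sqrt(2)
            LH = (lo_r[0::2, :] - lo_r[1::2, :]) / np.sqrt(2)
            HL = (hi_r[0::2, :] + hi_r[1::2, :]) / np.sqrt(2)
            HH = (hi_r[0::2, :] - hi_r[1::2, :]) / np.sqrt(2)
            return LL, (LH, HL, HH)

        rng = np.random.default_rng(0)
        tile = rng.normal(size=(256, 256))            # stands in for an image tile (even dimensions)
        LL, details = haar_decompose_2d(tile)         # recurse on LL for further MRA levels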

  6. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    SciTech Connect

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  7. Spectral Decomposition Using the CEEMD Method: A Case Study from the Carpathian Foredeep

    NASA Astrophysics Data System (ADS)

    Kwietniak, Anna; Cichostępski, Kamil; Kasperska, Monika

    2016-10-01

    The purpose of this work is to select the optimal spectral decomposition (SD) method for channel detection in the Miocene strata of the Carpathian Foredeep, SE Poland. For the analysis, two spectral decomposition algorithms were tested on 3D seismic data: the first based on the Fast Fourier Transform (FFT) and the second on Complete Ensemble Empirical Mode Decomposition (CEEMD). Additionally, the results of instantaneous frequency (IF) were compared with the results of peak frequency (PF) computed after the CEEMD. Both SD algorithms enabled us to interpret channels, but the results are marginally different: the FFT shows more coarse, linear structures that are desirable for channel interpretation, whereas the CEEMD does not highlight these structures as clearly and shows more of what the authors believe to be noise.

  8. Fast structural design and analysis via hybrid domain decomposition on massively parallel processors

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

    A hybrid domain decomposition framework for static, transient and eigen finite element analyses of structural mechanics problems is presented. Its basic ingredients include physical substructuring and /or automatic mesh partitioning, mapping algorithms, 'gluing' approximations for fast design modifications and evaluations, and fast direct and preconditioned iterative solvers for local and interface subproblems. The overall methodology is illustrated with the structural design of a solar viewing payload that is scheduled to fly in March 1993. This payload has been entirely designed and validated by a group of undergraduate students at the University of Colorado using the proposed hybrid domain decomposition approach on a massively parallel processor. Performance results are reported on the CRAY Y-MP/8 and the iPSC-860/64 Touchstone systems, which represent both extreme parallel architectures. The hybrid domain decomposition methodology is shown to outperform leading solution algorithms and to exhibit an excellent parallel scalability.

  9. Kriging-Based Parameter Estimation Algorithm for Metabolic Networks Combined with Single-Dimensional Optimization and Dynamic Coordinate Perturbation.

    PubMed

    Wang, Hong; Wang, Xicheng; Li, Zheng; Li, Keqiu

    2016-01-01

    The metabolic network model allows for an in-depth insight into the molecular mechanism of a particular organism. Because most parameters of the metabolic network cannot be directly measured, they must be estimated by using optimization algorithms. However, three characteristics of the metabolic network model, i.e., high nonlinearity, a large number of parameters, and wide variation ranges of the parameters, restrict the application of many traditional optimization algorithms. As a result, there is a growing demand to develop efficient optimization approaches to address this complex problem. In this paper, a Kriging-based algorithm for parameter estimation is presented for constructing metabolic networks. In the algorithm, a new infill sampling criterion, named expected improvement and mutual information (EI&MI), is adopted to improve the modeling accuracy by selecting multiple new sample points at each cycle, and a domain decomposition strategy based on principal component analysis is introduced to save computing time. Meanwhile, the convergence speed is accelerated by combining a single-dimensional optimization method with the dynamic coordinate perturbation strategy when determining the new sample points. Finally, the algorithm is applied to the arachidonic acid metabolic network to estimate its parameters. The obtained results demonstrate the effectiveness of the proposed algorithm in obtaining precise parameter values within a limited number of iterations.
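
    A sketch of the classic expected-improvement acquisition that the EI&MI criterion builds on; the mutual-information term, the PCA-based domain decomposition, and the coordinate-perturbation strategy of the paper are not reproduced. The candidate predictions below are placeholders.

        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_best):
            """EI for minimization, given Kriging predictive mean mu and std sigma at candidate points."""
            mu = np.asarray(mu, float)
            sigma = np.asarray(sigma, float)
            improvement = f_best - mu
            z = np.divide(improvement, sigma, out=np.zeros_like(improvement), where=sigma > 0)
            ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
            return np.where(sigma > 0, ei, np.maximum(improvement, 0.0))

        # Pick the next sample point among candidates by maximizing EI.
        mu = np.array([1.2, 0.9, 1.5])
        sigma = np.array([0.3, 0.05, 0.6])
        next_index = int(np.argmax(expected_improvement(mu, sigma, f_best=1.0)))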

  10. BEST POSSIBLE FLOATING POINT ARITHMETIC.

    DTIC Science & Technology

    The report presents an algorithm for floating point arithmetic, using single-length arithmetic registers, which yields the most accurate approximation that can be expressed in the given floating point format, the greatest lower bound, or the least upper bound for the result of the operation.

  11. Global patterns in litter decomposition: a synthesis.

    NASA Astrophysics Data System (ADS)

    Auch, W. E.; Ross, D. S.

    2007-12-01

    Leaf and coarse woody debris (LCWD) decay catalyzes the biochemical mechanisms of the soil-aboveground interface, and should be an important component of climate change models that address carbon and nitrogen. There is a clear need for the identification of determinant climate or litter chemistry parameters at the global scale. Local and global decay is commonly attributed to litter chemistry and climate, respectively. The objective of this synthesis was to illustrate LCWD decay across a global climate-chemistry continuum and contrast results with a previous assessment via both standard first-order (|k|) decay kinetics and gradient exponent values arranged in order of influence from initial to latter decay stages. Results suggest greater initial LCWD cation concentrations yielded the fastest initial rates of decomposition and most climatic indices appeared relevant at intermediate stages of decay. Elevation and refractory LCWD carbon (i.e. carbon, lignin, and tannins) were inversely correlated with decay, prolonging the process and possibly acting in concert as "end-point" determinants. Furthermore, the initial influence of nitrogen and phosphorus is universal across LCWD-type as well as ecoregion. Climate acts in a transitional role between easily solubilized and late or aromatic substrate decay. Global and continental carbon cycling assumptions and models must acknowledge: i) the influence of LCWD cation and N concentration during initial fragmentation, leaching, and transformation; ii) climate, specifically seasonal temperature averages > evapotranspiration > precipitation, during the interim; and iii) the ever-present influence of seasonality and litter aromatic components. Key Words: Leaf and Coarse Woody Debris (LCWD) decomposition, |k|, first-order kinetics, Carbon Cycle, Global Climate Change (GCC), Actual Evapotranspiration (AET).

  12. SAMPEX special pointing mode

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Flatley, Thomas W.; Leoutsakos, Theodore

    1995-01-01

    A new pointing mode has been developed for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) spacecraft. This pointing mode orients the instrument boresights perpendicular to the field lines of the Earth's magnetic field in regions of low field strength and parallel to the field lines in regions of high field strength, to allow better characterization of heavy ions trapped by the field. The new mode uses magnetometer signals and is algorithmically simpler than the previous control mode, but it requires increased momentum wheel activity. It was conceived, designed, tested, coded, uplinked to the spacecraft, and activated in less than seven months.

  13. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.

  14. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    SciTech Connect

    Weinstein, Marvin; Auerbach, Assa; Chandra, V.Ravi; /Technion

    2011-11-04

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. The lattice of size N is partitioned into two subclusters. At each iteration the Lanczos vector is projected into a set of n_svd smaller subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short-range Hamiltonians), we expect the truncation error to vanish as exp(−n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagome clusters of up to 36 sites, with no symmetries exploited, using less than 15GB of memory. Generalization to multiple partitioning is discussed.

  15. Calculating vibrational spectra of molecules using tensor train decomposition

    NASA Astrophysics Data System (ADS)

    Rakhuba, Maxim; Oseledets, Ivan

    2016-09-01

    We propose a new algorithm for calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that eigenfunctions lie on a low-parametric manifold of low-rank tensors we suggest using well-known iterative methods that utilize matrix inversion (locally optimal block preconditioned conjugate gradient method, inverse iteration) and solve corresponding linear systems inexactly along this manifold. As an application, we accurately compute vibrational spectra (84 states) of acetonitrile molecule CH3CN on a laptop in one hour using only 100 MB of memory to represent all computed eigenfunctions.
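
    A compact TT-SVD sketch that factorizes a full tensor into tensor-train cores by sequential truncated SVDs; the eigensolvers in the paper operate on operators and vectors already kept in this format rather than on full tensors. Shapes and tolerance are illustrative.

        import numpy as np

        def tt_svd(tensor, rel_tol=1e-10):
            """Decompose a full tensor into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
            dims = tensor.shape
            cores, rank = [], 1
            mat = tensor.reshape(rank * dims[0], -1)
            for k in range(len(dims) - 1):
                U, s, Vt = np.linalg.svd(mat, full_matrices=False)
                keep = max(1, int(np.sum(s > rel_tol * s[0])))   # truncate small singular values
                cores.append(U[:, :keep].reshape(rank, dims[k], keep))
                rank = keep
                mat = (np.diag(s[:keep]) @ Vt[:keep]).reshape(rank * dims[k + 1], -1)
            cores.append(mat.reshape(rank, dims[-1], 1))
            return cores

        # Rank-1 test tensor: all TT ranks collapse to 1.
        T = np.einsum('i,j,k->ijk', np.arange(4.0), np.arange(5.0), np.arange(6.0))
        cores = tt_svd(T)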

  16. Denoising of ECG signal during spaceflight using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Wang, Li

    2009-12-01

    The Singular Value Decomposition (SVD) method is introduced to denoise the ECG signal during spaceflight. The theoretical basis of the SVD method is given briefly. The denoising procedure is presented using a segment of real ECG signal. We improve the algorithm for calculating the Singular Value Ratio (SVR) spectrum and propose a constructive approach for analyzing characteristic patterns. The ECG signal is reproduced very well and the noise is suppressed effectively. The SVD method is shown to be suitable for denoising the ECG signal.

  17. Generalized generating function with tucker decomposition and alternating least squares for underdetermined blind identification

    NASA Astrophysics Data System (ADS)

    Gu, Fanglin; Zhang, Hang; Wang, Wenwu; Zhu, Desheng

    2013-12-01

    Generating function (GF) has been used in blind identification for real-valued signals. In this paper, the definition of GF is first generalized for complex-valued random variables in order to exploit the statistical information carried on complex signals in a more effective way. Then an algebraic structure is proposed to identify the mixing matrix from underdetermined mixtures using the generalized generating function (GGF). Two methods, namely GGF-ALS and GGF-TALS, are developed for this purpose. In the GGF-ALS method, the mixing matrix is estimated by the decomposition of the tensor constructed from the Hessian matrices of the GGF of the observations, using an alternating least squares (ALS) algorithm. The GGF-TALS method is an improved version of the GGF-ALS algorithm based on Tucker decomposition. More specifically, the original tensor, as formed in GGF-ALS, is first converted to a lower-rank core tensor using the Tucker decomposition, where the factors are obtained by the left singular-value decomposition of the original tensor's mode-3 matrix. Then the mixing matrix is estimated by decomposing the core tensor with the ALS algorithm. Simulation results show that (a) the proposed GGF-ALS and GGF-TALS approaches have almost the same performance in terms of the relative errors, whereas the GGF-TALS has much lower computational complexity, and (b) the proposed GGF algorithms have superior performance to the latest GF-based baseline approaches.

  18. Internal labelling problem: an algorithmic procedure

    NASA Astrophysics Data System (ADS)

    Campoamor-Stursberg, Rutwig

    2011-01-01

    Combining the decomposition of Casimir operators induced by the embedding of a subalgebra into a semisimple Lie algebra with the properties of commutators of subgroup scalars, an analytical algorithm for the computation of missing label operators with the commutativity requirement is proposed. Two new criteria for subgroup scalars to commute are given. The algorithm is completed with a recursive method to construct orthonormal bases of states. As examples to illustrate the procedure, four labelling problems are explicitly studied.

  19. On a concurrent element-by-element preconditioned conjugate gradient algorithm for multiple load cases

    NASA Technical Reports Server (NTRS)

    Watson, Brian; Kamat, M. P.

    1990-01-01

    Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDLᵀ decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDLᵀ decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDLᵀ decomposition scheme. The proposed algorithm offers almost no advantages over the LDLᵀ algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm in solving nonlinear problems with load incrementation, but that remains to be investigated.
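
    A reference sketch of preconditioned conjugate gradient with a simple diagonal (Jacobi) preconditioner standing in for an element-by-element one; note that the whole iteration must be repeated for every right-hand side, which is the multiple-load-case cost the abstract contrasts with a single LDLᵀ factorization. The matrix and load cases are illustrative.

        import numpy as np

        def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
            """Preconditioned conjugate gradient for SPD A with a diagonal preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 50
        A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1))
        loads = [np.ones(n), np.linspace(0, 1, n)]                    # two load cases
        solutions = [pcg(A, b, 1.0 / np.diag(A)) for b in loads]      # PCG re-runs per load case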

  20. Bio-empirical mode decomposition: visible and infrared fusion using biologically inspired empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Sissinto, Paterne; Ladeji-Osias, Jumoke

    2013-07-01

    Bio-EMD, a biologically inspired fusion of visible and infrared (IR) images based on empirical mode decomposition (EMD) and color opponent processing, is introduced. First, registered visible and IR captures of the same scene are decomposed into intrinsic mode functions (IMFs) through EMD. The fused image is then generated by intuitive opponent processing of the source IMFs. The resulting image is evaluated based on the amount of information transferred from the two input images, the clarity of details, the vividness of depictions, and the range of meaningful differences in lightness and chromaticity. We show that this opponent processing-based technique outperformed other algorithms based on pixel intensity and multiscale techniques. Additionally, Bio-EMD transferred twice as much information to the fused image as other methods, providing a higher level of sharpness, more natural-looking colors, and similar contrast levels. These results were obtained prior to optimization of the color opponent processing filters. The Bio-EMD algorithm has potential applicability in multisensor fusion covering visible bands, forensics, medical imaging, remote sensing, natural resources management, etc.

  1. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Zhihui

    2015-12-01

    With the development of FPGA, DSP Builder is widely applied to the design of system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets for signal decomposition. Thus, a CL multi-wavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware description language VHDL by the Signal Compiler block, which can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it lays a solid foundation in this new field.

  2. Decomposition Rate and Pattern in Hanging Pigs.

    PubMed

    Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal

    2015-09-01

    Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass.

  3. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  4. Cooperative terrain model acquisition by two point-robots in planar polygonal terrains

    SciTech Connect

    Rao, N.S.V.; Protopopescu, V.

    1994-11-29

    We address the model acquisition problem for an unknown terrain by a team of two robots. The terrain may be cluttered by a finite number of polygonal obstacles with unknown shapes and positions. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scanning from their locations. The robots communicate with each other via a wireless connection. The performance is measured by the number of sensor (scan) operations, which are assumed to be the most time-consuming/expensive of all the robot operations. We employ the restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles, the sensing time can be halved compared to a single-robot implementation. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into 2-connected components and trees is considered. Performance for the 2-robot team is expressed in terms of the sizes of 2-connected components, and the sizes and diameters of the trees. The proposed algorithm and analysis can be applied to methods based on the Voronoi diagram and trapezoidal decomposition.
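
    As a rough illustration of the graph-theoretic step described above, the following Python sketch decomposes a small undirected graph into its 2-connected (biconnected) components and articulation points using networkx. The graph is a made-up stand-in for a restricted visibility graph, not data from the paper.

        # Decompose a graph into 2-connected components and articulation
        # points; the graph is an invented stand-in for a restricted
        # visibility graph of a polygonal terrain.
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0),   # a 2-connected cycle
                          (3, 4), (4, 5),                   # a tree-like corridor
                          (5, 6), (6, 7), (7, 5)])          # another 2-connected block

        blocks = [sorted(c) for c in nx.biconnected_components(G)]
        cut_vertices = sorted(nx.articulation_points(G))

        print("2-connected components:", blocks)
        print("articulation points:", cut_vertices)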

  5. Conductimetric determination of decomposition of silicate melts

    NASA Technical Reports Server (NTRS)

    Kroeger, C.; Lieck, K.

    1986-01-01

    A description of a procedure is given to detect decomposition of silicate systems in the liquid state by conductivity measurements. Onset of decomposition can be determined from the temperature curves of resistances measured on two pairs of electrodes, one above the other. Degree of decomposition can be estimated from temperature and concentration dependency of conductivity of phase boundaries. This procedure was tested with systems PbO-B2O3 and PbO-B2O3-SiO2.

  6. Measurement System for Energetic Materials Decomposition

    DTIC Science & Technology

    2015-01-05

    This DURIP grant was used to purchase: 1. Q600 SDT Simultaneous DSC-TGA; 2. Pfeiffer Vacuum Benchtop Thermostar Mass Spectrometer; 3. Vision Research Phantom V12.1-8G-M high-speed camera. These instruments have been used to evaluate and study the decomposition of energetic materials.

  7. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    from an improved cloud screening algorithm that utilizes an iterative rejection of cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly (56%) more and unbiased Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g. 368-nm channel) were selected and the radiative transfer model (MODTRAN) used to calculate direct normal and diffuse horizontal ratios which were used to evaluate the stability of TOD in cloud-free points. The spectral dependence of atmospheric constituents' optical properties and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. The test of this method on the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-) MFRSR channels and potentially improved the accuracy by 2-4% at most channels and over 10% at 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g. soil water, live biomass), allowed maximum photodecay rate to vary with litter's initial lignin fraction in the model, and validated the optimized model with LIDET observation of remaining carbon and nitrogen at three semi-arid sites. I

  8. On Schubert decompositions of quiver Grassmannians

    NASA Astrophysics Data System (ADS)

    Lorscheid, Oliver

    2014-02-01

    In this paper, we introduce Schubert decompositions for quiver Grassmannians and investigate certain classes of quiver Grassmannians with a Schubert decomposition into affine spaces. The main theorem puts the cells of a Schubert decomposition into relation to the cells of a certain simpler quiver Grassmannian. This allows us to extend known examples of Schubert decompositions into affine spaces to a larger class of quiver Grassmannians. This includes exceptional representations of the Kronecker quiver as well as representations of forests with block matrices of the form (0100). Finally, we draw conclusions on the Euler characteristics and the cohomology of quiver Grassmannians.

  9. On symmetric decompositions of positive operators

    NASA Astrophysics Data System (ADS)

    Anastasia Jivulescu, Maria; Nechita, Ion; Găvruţa, Paşc

    2017-04-01

    We present results concerning decompositions of positive operators acting on finite-dimensional Hilbert spaces. Our motivation is the study of a generalized version of the SIC–POVM problem, which has applications to Quantum Information Theory. We relax some of the conditions in the SIC–POVM setting (the elements sum up to the identity, resp. the elements have unit rank), and we focus on equiangular decompositions (the elements of the decomposition should have the same length, and pairs of distinct elements should have constant angles). We characterize all such decompositions, comparing our results with the case of SIC–POVMs. We also generalize some existing Welch-type inequalities.

  10. Energy decomposition analysis in an adiabatic picture.

    PubMed

    Mao, Yuezhi; Horn, Paul R; Head-Gordon, Martin

    2017-02-22

    Energy decomposition analysis (EDA) of electronic structure calculations has facilitated quantitative understanding of diverse intermolecular interactions. Nevertheless, such analyses are usually performed at a single geometry and thus decompose a "single-point" interaction energy. As a result, the influence of the physically meaningful EDA components on the molecular structure and other properties are not directly obtained. To address this gap, the absolutely localized molecular orbital (ALMO)-EDA is reformulated in an adiabatic picture, where the frozen, polarization, and charge transfer energy contributions are defined as energy differences between the stationary points on different potential energy surfaces (PESs), which are accessed by geometry optimizations at the frozen, polarized and fully relaxed levels of density functional theory (DFT). Other molecular properties such as vibrational frequencies can thus be obtained at the stationary points on each PES. We apply the adiabatic ALMO-EDA to different configurations of the water dimer, the water-Cl(-) and water-Mg(2+)/Ca(2+) complexes, metallocenes (Fe(2+), Ni(2+), Cu(2+), Zn(2+)), and the ammonia-borane complex. This method appears to be very useful for unraveling how physical effects such as polarization and charge transfer modulate changes in molecular properties induced by intermolecular interactions. As an example of the insight obtained, we find that a linear hydrogen bond geometry for the water dimer is preferred even without the presence of polarization and charge transfer, while the red shift in the OH stretch frequency is primarily a charge transfer effect; by contrast, a near-linear geometry for the water-chloride hydrogen bond is achieved only when charge transfer is allowed.

  11. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  12. Complex variational mode decomposition for signal processing applications

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; Liu, Fuyun; Jiang, Zhansi; He, Shuilong; Mo, Qiuyun

    2017-03-01

    Complex-valued signals occur in many areas of science and engineering and are thus of fundamental interest. The complex variational mode decomposition (CVMD) is proposed in this work as a natural and generic extension of the original VMD algorithm for the analysis of complex-valued data. Moreover, the equivalent filter bank structure of the CVMD in the presence of white noise and the effects of center-frequency initialization on the filter bank property are both investigated via numerical experiments. Benefiting from the advantages of the CVMD algorithm, a bi-directional Hilbert time-frequency spectrum is developed as well, in which the positive and negative frequency components are formulated on the positive and negative frequency planes separately. Several applications to real-world complex-valued signals support the analysis.

  13. Reducing memory cost of exact diagonalization using singular value decomposition.

    PubMed

    Weinstein, Marvin; Auerbach, Assa; Chandra, V Ravi

    2011-11-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements, without restricting to variational ansatzes. The lattice of size N is partitioned into two subclusters. At each iteration the Lanczos vector is projected into two sets of n(svd) smaller subcluster vectors using singular value decomposition. For low entanglement entropy S(ee) (satisfied by short-range Hamiltonians), the truncation error is expected to vanish as exp(-n(svd)^(1/S(ee))). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30, and 36 sites, with no lattice symmetries exploited, using less than 15 GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given.
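
    The compression step can be illustrated with a short numpy sketch: a state vector over a lattice split into two subclusters is reshaped into a matrix and truncated to the leading n_svd singular triplets. The subcluster dimensions and n_svd below are arbitrary illustration values, not the clusters used in the paper.

        # Minimal sketch of the SVD compression step: a many-body vector on a
        # lattice partitioned into two subclusters A and B is reshaped into a
        # (dim_A x dim_B) matrix and truncated to n_svd singular triplets.
        import numpy as np

        dim_A, dim_B, n_svd = 2**6, 2**6, 16          # illustration sizes only

        psi = np.random.randn(dim_A * dim_B)          # stand-in for a Lanczos vector
        psi /= np.linalg.norm(psi)

        M = psi.reshape(dim_A, dim_B)                 # split over the two subclusters
        U, s, Vt = np.linalg.svd(M, full_matrices=False)

        # Keep only n_svd subcluster vectors on each side.
        U_k, s_k, Vt_k = U[:, :n_svd], s[:n_svd], Vt[:n_svd, :]
        psi_compressed = (U_k * s_k) @ Vt_k           # rank-n_svd approximation

        truncation_error = np.linalg.norm(M - psi_compressed)
        print(f"truncation error: {truncation_error:.3e}")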

  14. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g., sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of nsvd small subcluster vectors using singular value decomposition. For low entanglement entropy See (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-nsvd^(1/See)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15 GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007

  15. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
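
    A minimal numpy sketch of the unconstrained least-squares idea: fit a 3x3 matrix mapping reference vectors to (noisy) measurement vectors with a standard least-squares solve, optionally projecting the result back onto the nearest orthogonal matrix. The noise level, vector count, and the SVD projection step are illustrative assumptions, not the paper's exact procedure.

        # Unconstrained least-squares fit of a 3x3 matrix T mapping reference
        # vectors (rows of R) to measurement vectors (rows of V); the
        # orthogonality constraint of Wahba's problem is ignored.
        import numpy as np

        rng = np.random.default_rng(0)
        R = rng.standard_normal((10, 3))              # reference unit vectors
        R /= np.linalg.norm(R, axis=1, keepdims=True)

        theta = 0.05                                  # small rotation about z (true attitude)
        A_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0,            0.0,           1.0]])
        V = R @ A_true.T + 1e-3 * rng.standard_normal((10, 3))   # noisy measurements

        # Solve min || R @ T.T - V || without the orthogonality constraint.
        T_t, *_ = np.linalg.lstsq(R, V, rcond=None)
        A_ls = T_t.T

        # Optional: project back onto the nearest orthogonal matrix via SVD.
        U, _, Vt = np.linalg.svd(A_ls)
        A_ortho = U @ Vt

        print("orthogonality error of raw estimate:",
              np.linalg.norm(A_ls @ A_ls.T - np.eye(3)))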

  16. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
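
    A toy sketch of the iterative-filtering idea is given below: the local mean is estimated with a low-pass filter and subtracted repeatedly to extract one IMF-like component. A uniform moving-average mask stands in for the LSEK filter, and the mask length and iteration count are arbitrary, so this is an illustration of the concept rather than the authors' method.

        # Toy iterative-filtering step: repeatedly subtract a low-pass
        # (moving-average) estimate of the local mean to isolate the fast
        # oscillation. A uniform mask stands in for the LSEK filter.
        import numpy as np

        def extract_imf(signal, mask_len=21, n_iter=30):
            mask = np.ones(mask_len) / mask_len
            h = signal.astype(float).copy()
            for _ in range(n_iter):
                local_mean = np.convolve(h, mask, mode="same")
                h = h - local_mean
            return h

        t = np.linspace(0, 1, 1000)
        x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

        imf1 = extract_imf(x)          # fast oscillation
        residual = x - imf1            # slow trend, left for further decomposition
        print("std of IMF:", imf1.std(), "std of residual:", residual.std())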

  17. Parquet decomposition calculations of the electronic self-energy

    NASA Astrophysics Data System (ADS)

    Gunnarsson, O.; Schäfer, T.; LeBlanc, J. P. F.; Merino, J.; Sangiovanni, G.; Rohringer, G.; Toschi, A.

    2016-06-01

    The parquet decomposition of the self-energy into classes of diagrams, those associated with specific scattering processes, can be exploited for different scopes. In this work, the parquet decomposition is used to unravel the underlying physics of nonperturbative numerical calculations. We show the specific example of dynamical mean field theory and its cluster extensions [dynamical cluster approximation (DCA)] applied to the Hubbard model at half-filling and with hole doping: These techniques allow for a simultaneous determination of two-particle vertex functions and self-energies and, hence, for an essentially "exact" parquet decomposition at the single-site or at the cluster level. Our calculations show that the self-energies in the underdoped regime are dominated by spin-scattering processes, consistent with the conclusions obtained by means of the fluctuation diagnostics approach [O. Gunnarsson et al., Phys. Rev. Lett. 114, 236402 (2015), 10.1103/PhysRevLett.114.236402]. However, differently from the latter approach, the parquet procedure displays important changes with increasing interaction: Even for relatively moderate couplings, well before the Mott transition, singularities appear in different terms, with the notable exception of the predominant spin channel. We explain precisely how these singularities, which partly limit the utility of the parquet decomposition and, more generally, of parquet-based algorithms, are never found in the fluctuation diagnostics procedure. Finally, by a more refined analysis, we link the occurrence of the parquet singularities in our calculations to a progressive suppression of charge fluctuations and the formation of a resonance valence bond state, which are typical hallmarks of a pseudogap state in DCA.

  18. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
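
    The success-based step-size adaptation that EPSAs share with evolution strategies can be sketched in a few lines; the expand/contract factors and the sphere test function below are illustrative assumptions, not the algorithm analyzed in the paper.

        # Minimal success-based step-size adaptation: expand the mutation step
        # after an improving move, contract it otherwise. The factors and the
        # sphere test function are arbitrary illustration choices.
        import numpy as np

        def f(x):
            return float(np.sum(x**2))

        rng = np.random.default_rng(1)
        x, step = rng.standard_normal(5), 1.0

        for _ in range(200):
            candidate = x + step * rng.standard_normal(5)   # mutation
            if f(candidate) < f(x):                         # success: keep and expand
                x, step = candidate, step * 1.2
            else:                                           # failure: contract step
                step *= 0.9

        print("best value:", f(x), "final step size:", step)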

  19. Efficient Algorithm for Rectangular Spiral Search

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Breckenridge, William

    2008-01-01

    An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
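
    A small Python generator sketch of an expanding rectangular spiral over grid coordinates is shown below. Unlike the memoryless formulation described above, this version carries a direction and step-length state; it is meant only to show the search pattern.

        # Expanding rectangular spiral around the origin:
        # right 1, up 1, left 2, down 2, right 3, up 3, ...
        def spiral_coordinates():
            x, y = 0, 0
            yield x, y
            step = 1
            while True:                               # expand indefinitely
                for dx, dy in ((1, 0), (0, 1)):       # right, then up
                    for _ in range(step):
                        x, y = x + dx, y + dy
                        yield x, y
                step += 1
                for dx, dy in ((-1, 0), (0, -1)):     # left, then down
                    for _ in range(step):
                        x, y = x + dx, y + dy
                        yield x, y
                step += 1

        gen = spiral_coordinates()
        print([next(gen) for _ in range(10)])         # first 10 spiral points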

  20. Small infrared target detection based on harmonic and sparse matrix decomposition

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng-yong; Li, Hong

    2013-06-01

    Background suppression is the main technique for infrared target detection. We present a new small infrared target detection (SIRTD) method that is also based on background suppression. First, a new matrix decomposition model, named harmonic and sparse matrix decomposition (HSMD), is put forward for decomposing an image into a harmonic and a sparse component, which are seen as a background component and a small target component, respectively. Then, an algorithm based on the augmented Lagrangian alternating direction method (ALADM) for solving HSMD is described. The main computational cost of the proposed algorithm in each iteration is that of a fast Fourier transform (FFT), which makes the proposed algorithm very fast. By searching for the maximum local energy regions in the target component, the infrared targets can be easily and accurately located. Experimental results on some infrared images show that HSMD solved by ALADM is very suitable for real-time infrared image decomposition and SIRTD.

  1. Simplified approaches to some nonoverlapping domain decomposition methods

    SciTech Connect

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the {open_quotes}parallel subspace correction{close_quotes} or {open_quotes}additive Schwarz{close_quotes} method, and other simple technical tools include {open_quotes}local-global{close_quotes} and {open_quotes}global-local{close_quotes} techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the {open_quotes}substructuring method{close_quotes} and the other, based on local Neumann problems, is related to the {open_quotes}Neumann-Neumann method{close_quotes} and {open_quotes}balancing method{close_quotes}. All these methods will be presented in a systematic and coherent manner and the analysis for both two and three dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.

  2. Multi-material decomposition of spectral CT images

    NASA Astrophysics Data System (ADS)

    Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.

    2010-04-01

    Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
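
    The volume-fraction idea can be sketched as a small linear system: measurements at two energies plus the constraint that fractions sum to one determine a three-material decomposition of a voxel. The material triplet, attenuation values, and ground-truth mix below are made-up illustration numbers, not calibrated data or the paper's algorithm.

        # Toy voxel-wise three-material decomposition: two energy measurements
        # plus volume conservation give a 3x3 linear system per voxel.
        import numpy as np

        #                   water  iodine-mix  fat   (invented values, 1/cm)
        mu_low  = np.array([0.227, 1.20,       0.200])   # attenuation at low kVp
        mu_high = np.array([0.184, 0.60,       0.170])   # attenuation at high kVp
        A = np.vstack([mu_low, mu_high, np.ones(3)])      # last row: fractions sum to 1

        true_fractions = np.array([0.70, 0.10, 0.20])     # ground truth for the demo
        measured = A @ true_fractions                     # synthetic voxel measurement

        fractions = np.linalg.solve(A, measured)
        print("recovered volume fractions:", np.round(fractions, 3))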

  3. Ground point filtering of UAV-based photogrammetric point clouds

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Seijmonsbergen, Arie; Masselink, Rens; Keesstra, Saskia

    2016-04-01

    Unmanned Aerial Vehicles (UAVs) have proved invaluable for generating high-resolution and multi-temporal imagery. Based on photographic surveys, 3D surface reconstructions can be derived photogrammetrically, thus producing point clouds, orthophotos and surface models. For geomorphological or ecological applications it may be necessary to separate ground points from vegetation points. Existing filtering methods are designed for point clouds derived using other methods, e.g. laser scanning. The purpose of this paper is to test three filtering algorithms for the extraction of ground points from point clouds derived from low-altitude aerial photography. Three subareas were selected from a single flight which represent different scenarios: 1) low relief, sparsely vegetated area, 2) low relief, moderately vegetated area, 3) medium relief and moderately vegetated area. The three filtering methods are used to classify ground points in different ways, based on 1) RGB color values from training samples, 2) TIN densification as implemented in LAStools, and 3) an iterative surface lowering algorithm. Ground points are then interpolated into a digital terrain model using inverse distance weighting. The results suggest that different landscapes require different filtering methods for optimal ground point extraction. While iterative surface lowering and TIN densification are fully automated, color-based classification requires fine-tuning in order to optimize the filtering results. Finally, we conclude that filtering photogrammetric point clouds could provide a cheap alternative to laser scan surveys for creating digital terrain models in sparsely vegetated areas.
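
    A minimal grid-based ground filter is sketched below to make the filtering idea concrete: a point is labelled ground if it lies within a height threshold of the lowest point in its grid cell. This simplified rule, the cell size, and the threshold are illustrative assumptions and not one of the three methods compared in the paper.

        # Minimal grid-minimum ground filter for a photogrammetric point cloud.
        # Cell size and height threshold are arbitrary illustration values.
        import numpy as np

        def ground_filter(points, cell=1.0, dz=0.3):
            """points: (N, 3) array of x, y, z; returns a boolean ground mask."""
            cells = [tuple(c) for c in np.floor(points[:, :2] / cell).astype(int)]
            z_min = {}
            for c, z in zip(cells, points[:, 2]):
                z_min[c] = min(z, z_min.get(c, np.inf))
            return np.array([z - z_min[c] <= dz for c, z in zip(cells, points[:, 2])])

        pts = np.random.rand(1000, 3) * [50.0, 50.0, 5.0]   # synthetic point cloud
        mask = ground_filter(pts)
        print("ground points:", int(mask.sum()), "of", len(pts))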

  4. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    This work examines polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing methods adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the decomposition and classification code was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was performed on selected regions of interest; post-classification, the overall accuracy was observed to be higher for the SDH-decomposed image, as SDH operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data. This makes SDH decomposition particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and easier interpretation than the Pauli decomposition, although more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
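
    The Pauli decomposition itself reduces to a few lines for fully polarimetric data; the sketch below forms the standard odd-bounce, even-bounce, and volume channels from complex HH, HV, and VV images. The random arrays stand in for calibrated RISAT-1 channels.

        # Pauli decomposition of fully polarimetric SAR pixels. The complex
        # channels below are random stand-ins for calibrated HH, HV, VV data.
        import numpy as np

        rng = np.random.default_rng(0)
        shape = (64, 64)
        S_hh = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
        S_hv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
        S_vv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

        k1 = np.abs(S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface) scattering
        k2 = np.abs(S_hh - S_vv) / np.sqrt(2)   # even-bounce (double-bounce) scattering
        k3 = np.sqrt(2) * np.abs(S_hv)          # volume scattering

        pauli_rgb = np.dstack([k2, k3, k1])     # conventional R=|HH-VV|, G=|HV|, B=|HH+VV|
        print(pauli_rgb.shape)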

  5. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework. So, it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, especially based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time consuming. In this paper, an extension of the PDE-based approach to the 2-D case is therefore extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  6. Theoretical study of the decomposition pathways and products of C5- perfluorinated ketone (C5 PFK)

    NASA Astrophysics Data System (ADS)

    Fu, Yuwei; Wang, Xiaohua; Li, Xi; Yang, Aijun; Han, Guohui; Lu, Yanhui; Wu, Yi; Rong, Mingzhe

    2016-08-01

    Due to the high global warming potential (GWP) and increasing environmental concerns, efforts to find alternative gases to SF6, which is predominantly used as an insulating and interrupting medium in high-voltage equipment, have become a hot topic in recent decades. Overcoming the drawbacks of the existing candidate gases, C5- perfluorinated ketone (C5 PFK) was reported as a promising gas with remarkable insulation capacity and a low GWP of approximately 1. Experimental measurements of the dielectric strength of this novel gas and its mixtures have been carried out, but the chemical decomposition pathways and products of C5 PFK during breakdown are still unknown, which are the essential factors in evaluating the electric strength of this gas in high-voltage equipment. Therefore, this paper is devoted to exploring all the possible decomposition pathways and species of C5 PFK by density functional theory (DFT). The structural optimizations, vibrational frequency calculations and energy calculations of the species involved in a considered pathway were carried out with the DFT-(U)B3LYP/6-311G(d,p) method. The detailed potential energy surface was then investigated thoroughly with the same method. Lastly, six decomposition pathways of C5 PFK, involving fission reactions and reactions with transition states, were obtained. Important intermediate products were also determined. Among all the pathways studied, the favorable decomposition reactions of C5 PFK were found, involving C-C bond ruptures producing Ia and Ib in pathway I, followed by subsequent C-C bond ruptures and internal F atom transfers in the decomposition of Ia and Ib presented in pathways II + III and IV + V, respectively. Possible routes were pointed out in pathway III and lead to the decomposition of IIa, which is the main intermediate product found in pathway II of Ia decomposition. We also investigated the decomposition of Ib, which can undergo unimolecular reactions to give the formation

  7. Metallo-organic decomposition films

    NASA Technical Reports Server (NTRS)

    Gallagher, B. D.

    1985-01-01

    A summary of metallo-organic deposition (MOD) films for solar cells was presented. The MOD materials are metal ions compounded with organic radicals. The technology is evolving quickly for solar cell metallization. Silver compounds, especially silver neodecanoate, were developed which can be applied by thick-film screening, ink-jet printing, spin-on, spray, or dip methods. Some of the advantages of MOD are: high uniform metal content, lower firing temperatures, decomposition without leaving a carbon deposit or toxic materials, and a film that is stable under ambient conditions. Molecular design criteria were explained along with compounds formulated to date, and the accompanying reactions for these compounds. Phase stability and the other experimental and analytic results of MOD films were presented.

  8. Sampling Stoichiometry: The Decomposition of Hydrogen Peroxide.

    ERIC Educational Resources Information Center

    Clift, Philip A.

    1992-01-01

    Describes a demonstration of the decomposition of hydrogen peroxide to provide an interesting, quantitative illustration of the stoichiometric relationship between the decomposition of hydrogen peroxide and the formation of oxygen gas. This 10-minute demonstration uses ordinary hydrogen peroxide and yeast that can be purchased in a supermarket.…

  9. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  10. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  11. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  12. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  13. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  14. Chinese Orthographic Decomposition and Logographic Structure

    ERIC Educational Resources Information Center

    Cheng, Chao-Ming; Lin, Shan-Yuan

    2013-01-01

    "Chinese orthographic decomposition" refers to a sense of uncertainty about the writing of a well-learned Chinese character following a prolonged inspection of the character. This study investigated the decomposition phenomenon in a test situation in which Chinese characters were repeatedly presented in a word context and assessed…

  15. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  16. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  17. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  18. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  19. English and Turkish Pupils' Understanding of Decomposition

    ERIC Educational Resources Information Center

    Cetin, Gulcan

    2007-01-01

    This study aimed to describe seventh grade English and Turkish students' levels of understanding of decomposition. Data were analyzed descriptively from the students' written responses to four diagnostic questions about decomposition. Results revealed that the English students had considerably higher sound understanding and lower no understanding…

  20. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  1. Helmholtz Hodge decomposition of scalar optical fields.

    PubMed

    Bahl, Monika; Senthilkumaran, P

    2012-11-01

    It is shown that the vector field decomposition method, namely, the Helmholtz Hodge decomposition, can also be applied to analyze scalar optical fields that are ubiquitously present in interference and diffraction optics. A phase gradient field that depicts the propagation and Poynting vector directions can hence be separated into solenoidal and irrotational components.
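
    For a periodic 2-D field, the Helmholtz (Hodge) split into irrotational and solenoidal parts can be sketched with FFTs, as below. The synthetic vector field is only an illustration and is not an optical phase-gradient field from the paper.

        # FFT-based Helmholtz decomposition of a periodic 2-D vector field into
        # a curl-free (irrotational) part and the divergence-free remainder.
        import numpy as np

        def helmholtz_decompose(u, v):
            ny, nx = u.shape
            kx = 2j * np.pi * np.fft.fftfreq(nx)
            ky = 2j * np.pi * np.fft.fftfreq(ny)
            KX, KY = np.meshgrid(kx, ky)          # KX varies along x (columns)
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0                        # avoid dividing by zero at k = 0

            u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
            div_hat = KX * u_hat + KY * v_hat     # transform of div(u, v)

            # Irrotational part: gradient of the potential solving Poisson's eq.
            u_irr = np.real(np.fft.ifft2(KX * div_hat / k2))
            v_irr = np.real(np.fft.ifft2(KY * div_hat / k2))
            return (u_irr, v_irr), (u - u_irr, v - v_irr)

        y, x = np.mgrid[0:1:128j, 0:1:128j]
        u = np.cos(2 * np.pi * x) - np.sin(2 * np.pi * y)
        v = np.cos(2 * np.pi * y) + np.sin(2 * np.pi * x)
        (ui, vi), (us, vs) = helmholtz_decompose(u, v)
        print("mean |irrotational|:", np.hypot(ui, vi).mean(),
              "mean |solenoidal|:", np.hypot(us, vs).mean())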

  2. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  3. Metallo-Organic Decomposition (MOD) film development

    NASA Technical Reports Server (NTRS)

    Parker, J.

    1986-01-01

    The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contacting structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at Jet Propulsion Laboratory (JPL) in understanding the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films were described in detail.

  4. A global HMX decomposition model

    SciTech Connect

    Hobbs, M.L.

    1996-12-01

    HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) decomposes by competing reaction pathways to form various condensed and gas-phase intermediate and final products. Gas formation is related to the development of nonuniform porosity and high specific surface areas prior to ignition in cookoff events. Such thermal damage enhances shock sensitivity and favors self-supported accelerated burning. The extent of HMX decomposition in highly confined cookoff experiments remains a major unsolved experimental and modeling problem. The present work is directed at determination of global HMX kinetics useful for predicting the elapsed time to thermal runaway (ignition) and the extent of decomposition at ignition. Kinetic rate constants for a six step engineering based global mechanism were obtained using gas formation rates measured by Behrens at Sandia National Laboratories with his Simultaneous Modulated Beam Mass Spectrometer (STMBMS) experimental apparatus. The six step global mechanism includes competition between light gas (H[sub 2]O, HCN, CO, H[sub 2]CO, NO, N[sub 2]O) and heavy gas (C[sub 2]H[sub 6]N[sub 2]O and C[sub 4]H[sub 10]NO[sub 2]) formation with zero order sublimation of HMX and the mononitroso analog of HMX (mn-HMX), C[sub 4]H[sub 8]N[sub 8]O[sub 7]. The global mechanism was applied to the highly confined, One Dimensional Time to eXplosion (ODTX) experiment and hot cell experiments by suppressing the sublimation of HMX and mn-HMX. An additional gas-phase reaction was also included to account for the gas-phase reaction of N[sub 2]O with H[sub 2]CO. Predictions compare adequately to the STMBMS data, ODTX data, and hot cell data. Deficiencies in the model and future directions are discussed.

  5. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan V.; Karniadakis, George E.; Daniel, Luca

    2015-01-01

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging to both extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis of variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 min in MATLAB on a regular personal computer.
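
    A compact TT-SVD sketch is given below to show how a small dense tensor is factorized into tensor-train cores by successive reshapes and truncated SVDs; the tensor shape and rank cap are arbitrary, and this is not the hierarchical simulator described in the paper.

        # TT-SVD: factorize a dense tensor into tensor-train cores by
        # successive reshape + truncated SVD sweeps.
        import numpy as np

        def tt_svd(tensor, max_rank=8):
            dims = tensor.shape
            cores, rank = [], 1
            mat = tensor.reshape(rank * dims[0], -1)
            for n in range(len(dims) - 1):
                U, s, Vt = np.linalg.svd(mat, full_matrices=False)
                r_next = min(max_rank, len(s))
                cores.append(U[:, :r_next].reshape(rank, dims[n], r_next))
                mat = (np.diag(s[:r_next]) @ Vt[:r_next]).reshape(r_next * dims[n + 1], -1)
                rank = r_next
            cores.append(mat.reshape(rank, dims[-1], 1))
            return cores

        T = np.random.rand(4, 5, 6)
        cores = tt_svd(T)
        # Reconstruct and check the (here lossless, since max_rank is large) error.
        recon = np.einsum('aib,bjc,ckd->ijk', *cores)
        print("TT ranks:", [c.shape[2] for c in cores],
              "max error:", np.abs(T - recon).max())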

  6. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
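
    The Kruskal operator has a direct numpy rendering as a single einsum over the factor matrices; the sizes below are arbitrary, and the explicit outer-product sum is included only as a check.

        # Kruskal operator for three factor matrices A, B, C with R columns:
        # the sum over r of the outer products A[:, r] o B[:, r] o C[:, r],
        # i.e. the reconstruction used in a PARAFAC/CANDECOMP model.
        import numpy as np

        I, J, K, R = 4, 5, 6, 3
        A, B, C = (np.random.rand(n, R) for n in (I, J, K))

        kruskal = np.einsum('ir,jr,kr->ijk', A, B, C)

        # Equivalent explicit sum of outer products, for comparison.
        explicit = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
                       for r in range(R))
        print(np.allclose(kruskal, explicit))   # True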

  7. A unified statistical framework for material decomposition using multienergy photon counting x-ray detectors

    SciTech Connect

    Choi, Jiyoung; Kang, Dong-Goo; Kang, Sunghoon; Sung, Younghun; Ye, Jong Chul

    2013-09-15

    Purpose: Material decomposition using multienergy photon counting x-ray detectors (PCXD) has been an active research area over the past few years. Even with some success, the problem of optimal energy selection and three-material decomposition including malignant tissue is still an ongoing research topic, and more systematic studies are required. This paper aims to address this in a unified statistical framework in a mammographic environment. Methods: A unified statistical framework for energy level optimization and decomposition of three materials is proposed. In particular, an energy level optimization algorithm is derived using the theory of the minimum variance unbiased estimator, and an iterative algorithm is proposed for material composition as well as system parameter estimation under the unified statistical estimation framework. To verify the performance of the proposed algorithm, the authors performed simulation studies as well as real experiments using physical breast phantom and ex vivo breast specimen. Quantitative comparisons using various performance measures were conducted, and qualitative performance evaluations for ex vivo breast specimen were also performed by comparing the ground-truth malignant tissue areas identified by radiologists. Results: Both simulation and real experiments confirmed that the energy bins optimized by the proposed method allow better material decomposition quality. Moreover, for the specimen thickness estimation errors up to 2 mm, the proposed method provides good reconstruction results in both simulation and real ex vivo breast phantom experiments compared to existing methods. Conclusions: The proposed statistical framework of PCXD has been successfully applied for the energy optimization and decomposition of three materials in a mammographic environment. Experimental results using the physical breast phantom and ex vivo specimen support the practicality of the proposed algorithm.

  8. Drought and detritivores determine leaf litter decomposition in calcareous streams of the Ebro catchment (Spain).

    PubMed

    Monroy, Silvia; Menéndez, Margarita; Basaguren, Ana; Pérez, Javier; Elosegi, Arturo; Pozo, Jesús

    2016-12-15

    Drought, an important environmental factor affecting the functioning of stream ecosystems, is likely to become more prevalent in the Mediterranean region as a consequence of climate change and enhanced water demand. Drought can have profound impacts on leaf litter decomposition, a key ecosystem process in headwater streams, but there is still limited information on its effects at the regional scale. We measured leaf litter decomposition across a gradient of aridity in the Ebro River basin. We deployed coarse- and fine-mesh bags with alder and oak leaves in 11 Mediterranean calcareous streams spanning a range of over 400 km, and determined changes in discharge, water quality, leaf-associated macroinvertebrates, leaf quality and decomposition rates. The study streams were subject to different degrees of drought, specific discharge (L s(-1) km(-2)) ranging from 0.62 to 9.99. One of the streams dried out during the experiment, another one reached residual flow, whereas the rest registered uninterrupted flow but with different degrees of flow variability. Decomposition rates differed among sites, being lowest in the 2 most water-stressed sites, but showed no general correlation with specific discharge. Microbial decomposition rates were not correlated with the final nutrient content of litter nor with fungal biomass. Total decomposition rate of alder was positively correlated to the density and biomass of shredders; that of oak was not. Shredder density in alder bags showed a positive relationship with specific discharge during the decomposition experiment. Overall, the results point to a complex pattern of litter decomposition at the regional scale, as drought affects decomposition directly by emersion of bags and indirectly by affecting the functional composition and density of detritivores.

  9. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species.

    PubMed

    Dossa, Gbadamassi G O; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D

    2016-10-04

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11-1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition.

  10. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species

    PubMed Central

    Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.

    2016-01-01

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461

  11. Management intensity alters decomposition via biological pathways

    USGS Publications Warehouse

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage-or extent-of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  12. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur Complement is derived as M(exp -1) = C - B(exp *)A(exp -1)B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M(exp -1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of Operational Space Mass Matrix lambda and its inverse lambda(exp -1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
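
    The Schur-complement structure invoked above can be checked numerically on a generic symmetric positive-definite block matrix, as in the sketch below. This is only the textbook block-inverse identity, not the paper's specific O(n) factorization of the multibody mass matrix.

        # Generic check of the Schur-complement structure: for a symmetric
        # positive-definite block matrix K = [[A, B], [B.T, C]], the
        # bottom-right block of K^{-1} equals (C - B.T A^{-1} B)^{-1}.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        X = rng.standard_normal((2 * n, 2 * n))
        K = X @ X.T + 2 * n * np.eye(2 * n)           # symmetric positive definite

        A, B, C = K[:n, :n], K[:n, n:], K[n:, n:]
        schur = C - B.T @ np.linalg.solve(A, B)       # Schur complement of A in K

        bottom_right_of_inverse = np.linalg.inv(K)[n:, n:]
        print(np.allclose(bottom_right_of_inverse, np.linalg.inv(schur)))   # True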

  13. Cardiac video analysis using Hodge-Helmholtz field decomposition.

    PubMed

    Guo, Qinghong; Mandal, Mrinal K; Liu, Gang; Kavanagh, Katherine M

    2006-01-01

    The critical points (also known as phase singularities) in the heart reflect the pathological change of the heart tissue, and hence can be used to describe and analyze the dynamics of the cardiac electrical activity. As a result, the detection of these critical points can lead to correct understanding and effective therapy of tachycardia. In this paper, we propose a novel approach to address this problem. The proposed approach includes four stages: image smoothing, motion estimation, motion decomposition, and detection of the critical points. In the image smoothing stage, the noisy cardiac optical data are smoothed using an anisotropic diffusion equation. The conduction velocity fields of the cardiac electrical patterns can then be estimated from two consecutive smoothed images. Using the recently developed discrete Hodge-Helmholtz motion decomposition technique, the curl-free and divergence-free potential surfaces of an estimated velocity field are extracted. Finally, by hierarchically searching for the minima and maxima on the potential surfaces, the sources, sinks, and rotational centers are located with high accuracy. Experimental results with four real cardiac videos show that the proposed approach performs satisfactorily, especially for cardiac electrical patterns with simple propagations.
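    As a rough stand-in for the decomposition step (assuming a periodic grid and using an FFT projection instead of the discrete Hodge-Helmholtz solver the authors employ), the sketch below splits a 2D velocity field into curl-free and divergence-free parts; sources, sinks, and rotation centers would then be sought as extrema of the corresponding potentials:

      import numpy as np

      def helmholtz_split(u, v):
          """Split a periodic 2D field (u, v) into curl-free and divergence-free
          parts by projecting its Fourier coefficients onto the wave vector."""
          ny, nx = u.shape
          kx = np.fft.fftfreq(nx) * 2 * np.pi
          ky = np.fft.fftfreq(ny) * 2 * np.pi
          KX, KY = np.meshgrid(kx, ky)
          k2 = KX**2 + KY**2
          k2[0, 0] = 1.0                        # avoid dividing by zero at the mean mode
          U, V = np.fft.fft2(u), np.fft.fft2(v)
          along_k = (KX * U + KY * V) / k2      # component of (U, V) along k
          u_cf = np.real(np.fft.ifft2(KX * along_k))
          v_cf = np.real(np.fft.ifft2(KY * along_k))
          return (u_cf, v_cf), (u - u_cf, v - v_cf)

      # A synthetic field: a source at the center plus a rigid rotation.
      y, x = np.mgrid[-1:1:64j, -1:1:64j]
      (u_cf, v_cf), (u_df, v_df) = helmholtz_split(x - y, y + x)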

  14. Petri nets SM-cover based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In this paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The presented algorithm reduces the Petri net in order to lower the computational complexity, and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition, modular synthesis, and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.
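    For orientation only, a generic first-fit greedy vertex coloring of a small conflict graph; the place names and conflict relation below are hypothetical, and the paper's net reduction and interpretation steps are not modeled:

      def greedy_coloring(adjacency):
          """First-fit greedy coloring: give each vertex the smallest color
          not already used by its neighbors."""
          colors = {}
          for v in adjacency:                     # fixed iteration order
              used = {colors[u] for u in adjacency[v] if u in colors}
              c = 0
              while c in used:
                  c += 1
              colors[v] = c
          return colors

      # Hypothetical conflict graph: edges join places that cannot share a color.
      conflicts = {"p1": ["p2", "p3"], "p2": ["p1"], "p3": ["p1", "p4"], "p4": ["p3"]}
      print(greedy_coloring(conflicts))           # e.g. {'p1': 0, 'p2': 1, 'p3': 1, 'p4': 0}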

  15. The effect of body size on the rate of decomposition in a temperate region of South Africa.

    PubMed

    Sutherland, A; Myburgh, J; Steyn, M; Becker, P J

    2013-09-10

    Forensic anthropologists rely on the state of decomposition of a body to estimate the post-mortem interval (PMI), which provides information about the natural events and environmental forces that could have affected the remains after death. Various factors are known to influence the rate of decomposition, among them temperature, rainfall and exposure of the body. However, conflicting reports appear in the literature on the effect of body size on the rate of decay. The aim of this project was to compare decomposition rates of large pigs (Sus scrofa; 60-90 kg) with those of small pigs (<35 kg), to assess the influence of body size on decomposition rates. For the decomposition rates of small pigs, 15 piglets were assessed three times per week over a period of three months during spring and early summer. Data collection was conducted until complete skeletonization occurred. Stages of decomposition were scored according to separate categories for each anatomical region, and the point values for each region were added to determine the total body score (TBS), which represents the overall stage of decomposition for each pig. For the large pigs, data of 15 pigs were used. Scatter plots illustrating the relationships between TBS and PMI as well as TBS and accumulated degree days (ADD) were used to assess the pattern of decomposition and to compare decomposition rates between small and large pigs. Results indicated that rapid decomposition occurs during the early stages of decomposition for both samples. Large pigs showed a plateau phase in the course of advanced stages of decomposition, during which decomposition was minimal. A similar, but much shorter plateau was reached by small pigs of >20 kg at a PMI of 20-25 days, after which decomposition commenced swiftly. This was in contrast to the small pigs of <20 kg, which showed no plateau phase and their decomposition rates were swift throughout the duration of the study. Overall, small pigs decomposed 2.82 times faster than

  16. Decomposition of cellulose by ultrasonic welding in water

    NASA Astrophysics Data System (ADS)

    Nomura, Shinfuku; Miyagawa, Seiya; Mukasa, Shinobu; Toyota, Hiromichi

    2016-07-01

    The use of ultrasonic welding in water to decompose cellulose placed in water was examined experimentally. Filter paper was used as the decomposition material, with a 19.5 kHz horn-type transducer adopted as the ultrasonic welding power source. The frictional heat at the point where the surface of the tip of the ultrasonic horn contacts the filter paper decomposes the cellulose in the filter paper into 5-hydroxymethylfurfural (5-HMF), furfural, and oligosaccharide through hydrolysis and thermolysis that occur in the welding process.

  17. Decomposition is always temperature dependent, except when it's not

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.

    2011-12-01

    Understanding of the temperature dependence of decomposition of soil organic matter has been complicated by the two following facts: (1) all enzymatic activity, including biologically mediated breakdown of organic matter in soils, is temperature dependent; and (2) much of the organic matter in soils is effectively isolated from enzymatic activity, either in space or time, through a wide variety of environmental constraints, including physical and chemical protection, spatial heterogeneity, lack of oxygen, or sub-zero temperatures. Because of the second fact, the first has been questioned in papers that report lack of observed temperature sensitivity of decomposition of soil organic matter. In my 2006 review paper with Ivan Janssens, we attempted to clarify these facts and their interactions and why temperature dependence is sometimes observed and sometimes not. However, it appears that our discussion of how Arrhenius kinetics affects enzymatic activity has become the paper's main recognized legacy, and it has been cited in support of the "carbon-quality-temperature" hypothesis. Here I will update and clarify aspects of that review as follows: (1) a Dual Arrhenius Michaelis-Menten (DAMM) model that merges these kinetic models with substrate diffusion processes can parsimoniously and mechanistically explain fast responses of carbon metabolism in soils as temperature and water content vary over time scales of minutes to months; and (2) variations in activation energies of enzymatic reactions have little or no effect on C metabolism when substrate is not available to enzymes, and this second point applies to both short and long-term turnover of soil organic matter. Because of this latter point, mean residence times and decomposition constants often do not correlate well with the chemical structure ("carbon quality") of soil organic matter, as is predicted by Arrhenius kinetics alone. While it is true that biological decomposition reactions, when they occur, are always
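    A toy sketch of the idea behind a dual Arrhenius/Michaelis-Menten formulation: an Arrhenius temperature term multiplied by a Michaelis-Menten substrate term, with substrate delivery to the enzyme scaled by a moisture-dependent diffusion factor. The parameter values and the theta**3 diffusion scaling are placeholders, not the published DAMM calibration:

      import numpy as np

      R_GAS = 8.314  # J mol^-1 K^-1

      def damm_like_rate(temp_k, soluble_c, theta, alpha=5e8, Ea=65e3, kM=1e-2):
          """Toy respiration rate: Arrhenius kinetics x Michaelis-Menten substrate
          limitation, with substrate supply scaled by a diffusion factor theta**3."""
          vmax = alpha * np.exp(-Ea / (R_GAS * temp_k))     # Arrhenius temperature term
          substrate_at_site = soluble_c * theta**3          # moisture-limited diffusion
          return vmax * substrate_at_site / (kM + substrate_at_site)

      temps = np.array([278.0, 288.0, 298.0])
      print("moist soil:", damm_like_rate(temps, soluble_c=0.05, theta=0.40))
      print("dry soil:  ", damm_like_rate(temps, soluble_c=0.05, theta=0.05))
      # When diffusion cuts substrate supply (dry case), absolute rates and the
      # absolute size of their temperature response collapse, even though the
      # underlying enzyme kinetics are unchanged.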

  18. Non-conformal domain decomposition methods for time-harmonic Maxwell equations

    PubMed Central

    Shao, Yang; Peng, Zhen; Lim, Kheng Hwee; Lee, Jin-Fa

    2012-01-01

    We review non-conformal domain decomposition methods (DDMs) and their applications in solving electrically large and multi-scale electromagnetic (EM) radiation and scattering problems. In particular, a finite-element DDM, together with a finite-element tearing and interconnecting (FETI)-like algorithm, incorporating Robin transmission conditions and an edge corner penalty term, are discussed in detail. We address in full the formulations, and subsequently, their applications to problems with significant amounts of repetitions. The non-conformal DDM approach has also been extended into surface integral equation methods. We elucidate a non-conformal integral equation domain decomposition method and a generalized combined field integral equation method for modelling EM wave scattering from non-penetrable and penetrable targets, respectively. Moreover, a plane wave scattering from a composite mockup fighter jet has been simulated using the newly developed multi-solver domain decomposition method. PMID:22870061

  19. Phase-context decomposition of diagonal unitaries for higher-dimensional systems

    NASA Astrophysics Data System (ADS)

    Beer, Kerstin; Dziemba, Friederike Anna

    2016-05-01

    We generalize the efficient decomposition method for phase-sparse diagonal operators of J. Welch et al. [Quantum Info. Comput. 16, 87 (2016)] to qudit systems. The phase-context-aware method focuses on cascaded entanglers, whose decomposition into multi-controlled INC gates can be optimized by the choice of a proper signed base-d representation for the natural numbers. While the gate count of the best-known decomposition method for general diagonal operators on qubit systems scales with O(2^n), the circuits synthesized by the Welch algorithm for diagonal operators with k distinct phases are upper-bounded by O(n2^k), which is generalized to O(dn2^k) for the qudit case in this paper.

  20. Layout decomposition of self-aligned double patterning for 2D random logic patterning

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.

    2011-04-01

    Self-aligned double patterning (SADP) has been adapted as a promising solution for sub-30nm technology nodes due to its lower overlay problem and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layout.

  1. Non-conformal domain decomposition methods for time-harmonic Maxwell equations.

    PubMed

    Shao, Yang; Peng, Zhen; Lim, Kheng Hwee; Lee, Jin-Fa

    2012-09-08

    We review non-conformal domain decomposition methods (DDMs) and their applications in solving electrically large and multi-scale electromagnetic (EM) radiation and scattering problems. In particular, a finite-element DDM, together with a finite-element tearing and interconnecting (FETI)-like algorithm, incorporating Robin transmission conditions and an edge corner penalty term, are discussed in detail. We address in full the formulations, and subsequently, their applications to problems with significant amounts of repetitions. The non-conformal DDM approach has also been extended into surface integral equation methods. We elucidate a non-conformal integral equation domain decomposition method and a generalized combined field integral equation method for modelling EM wave scattering from non-penetrable and penetrable targets, respectively. Moreover, a plane wave scattering from a composite mockup fighter jet has been simulated using the newly developed multi-solver domain decomposition method.

  2. Fixed-point adiabatic quantum search

    NASA Astrophysics Data System (ADS)

    Dalzell, Alexander M.; Yoder, Theodore J.; Chuang, Isaac L.

    2017-01-01

    Fixed-point quantum search algorithms succeed at finding one of M target items among N total items even when the run time of the algorithm is longer than necessary. While the famous Grover's algorithm can search quadratically faster than a classical computer, it lacks the fixed-point property—the fraction of target items must be known precisely to know when to terminate the algorithm. Recently, Yoder, Low, and Chuang [Phys. Rev. Lett. 113, 210501 (2014), 10.1103/PhysRevLett.113.210501] gave an optimal gate-model search algorithm with the fixed-point property. Previously, it had been discovered by Roland and Cerf [Phys. Rev. A 65, 042308 (2002), 10.1103/PhysRevA.65.042308] that an adiabatic quantum algorithm, operating by continuously varying a Hamiltonian, can reproduce the quadratic speedup of gate-model Grover search. We ask, can an adiabatic algorithm also reproduce the fixed-point property? We show that the answer depends on what interpolation schedule is used, so as in the gate model, there are both fixed-point and non-fixed-point versions of adiabatic search, only some of which attain the quadratic quantum speedup. Guided by geometric intuition on the Bloch sphere, we rigorously justify our claims with an explicit upper bound on the error in the adiabatic approximation. We also show that the fixed-point adiabatic search algorithm can be simulated in the gate model with neither loss of the quadratic Grover speedup nor of the fixed-point property. Finally, we discuss natural uses of fixed-point algorithms such as preparation of a relatively prime state and oblivious amplitude amplification.
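    A small numerical sketch of adiabatic Grover search with a plain linear interpolation schedule (not the fixed-point or locally optimized schedules analyzed in the paper), assuming a single marked item in a toy search space of N = 16:

      import numpy as np
      from scipy.linalg import expm

      N, marked = 16, 3                                  # toy problem size and target index
      psi0 = np.full(N, 1 / np.sqrt(N))                  # uniform superposition
      H0 = np.eye(N) - np.outer(psi0, psi0)              # initial Hamiltonian
      H1 = np.eye(N); H1[marked, marked] = 0.0           # final Hamiltonian I - |m><m|

      T, steps = 200.0, 2000                             # long T => adiabatic regime
      dt = T / steps
      state = psi0.astype(complex)
      for k in range(steps):
          s = (k + 0.5) / steps                          # linear schedule s(t) = t/T
          H = (1 - s) * H0 + s * H1
          state = expm(-1j * H * dt) @ state             # piecewise-constant evolution

      print("success probability:", abs(state[marked])**2)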

  3. Data decomposition of Monte Carlo particle transport simulations via tally servers

    SciTech Connect

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
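    A toy sketch of the tracker/tally-server split, using Python multiprocessing queues rather than the MPI-based implementation in OpenMC; the tally bins, scores, and sentinel protocol below are illustrative assumptions:

      import multiprocessing as mp
      import random

      N_BINS = 100

      def tracker(n_particles, queue):
          """Pretend to track particles; ship each tally contribution to the server."""
          for _ in range(n_particles):
              queue.put((random.randrange(N_BINS), random.random()))  # (bin, score)
          queue.put(None)                                             # this tracker is done

      def tally_server(n_trackers, queue, result):
          """Continuously receive and accumulate tally data from all trackers."""
          tallies, done = [0.0] * N_BINS, 0
          while done < n_trackers:
              msg = queue.get()
              if msg is None:
                  done += 1
              else:
                  tallies[msg[0]] += msg[1]
          result.put(sum(tallies))

      if __name__ == "__main__":
          q, out = mp.Queue(), mp.Queue()
          server = mp.Process(target=tally_server, args=(4, q, out))
          server.start()
          workers = [mp.Process(target=tracker, args=(10_000, q)) for _ in range(4)]
          for w in workers:
              w.start()
          for w in workers:
              w.join()
          server.join()
          print("total tally score:", out.get())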

  4. Hand vein recognition based on the connection lines of reference point and feature point

    NASA Astrophysics Data System (ADS)

    Yun-peng, Hu; Zhi-yong, Wang; Xiao-ping, Yang; Yu-ming, Xue

    2014-01-01

    According to the essential characteristics of the image topology, a new hand vein recognition algorithm based on the connection lines between a reference point and feature points is proposed. In this method, the intersection points and the endpoints of the vein image are used as feature points. After the intersection points and endpoints are selected as feature points, the reference point for image matching is extracted from them. The relative distances between the reference point and the feature points and the angles between the adjacent connections of the reference point and feature points are calculated and used as recognition features. Finally, these two features are combined for hand vein recognition. This method can effectively overcome the influence on the recognition results caused by image translation and rotation. Experimental results show that the proposed algorithm is able to achieve hand vein recognition reliably and quickly.
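    A minimal sketch of the geometric features described above: distances from a reference point to the feature points, and the angles between adjacent connection lines (the coordinates below are made up):

      import numpy as np

      def connection_features(reference, feature_points):
          """Distances from the reference point to each feature point, plus the
          angles between adjacent reference-to-feature connection lines."""
          ref = np.asarray(reference, dtype=float)
          pts = np.asarray(feature_points, dtype=float)
          vectors = pts - ref
          distances = np.linalg.norm(vectors, axis=1)
          polar = np.arctan2(vectors[:, 1], vectors[:, 0])   # sort lines by polar angle
          order = np.argsort(polar)
          sorted_polar = polar[order]
          wrapped = np.append(sorted_polar, sorted_polar[0] + 2 * np.pi)
          return distances[order], np.diff(wrapped)          # adjacent-line angles

      dists, angles = connection_features((50, 60), [(10, 20), (80, 75), (55, 10), (30, 90)])
      print(dists, np.degrees(angles))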

  5. On a Decomposition Model for Optical Flow

    NASA Astrophysics Data System (ADS)

    Abhau, Jochen; Belhachmi, Zakaria; Scherzer, Otmar

    In this paper we present a variational method for determining cartoon and texture components of the optical flow of a noisy image sequence. The method is realized by reformulating the optical flow problem first as a variational denoising problem for multi-channel data and then by applying decomposition methods. Thanks to the general formulation, several norms can be used for the decomposition. We study a decomposition for the optical flow into bounded variation and oscillating component in greater detail. Numerical examples demonstrate the capabilities of the proposed approach.

  6. Hamiltonian decomposition for bulk and surface states.

    PubMed

    Sasaki, Ken-Ichi; Shimomura, Yuji; Takane, Yositake; Wakabayashi, Katsunori

    2009-04-10

    We demonstrate that a tight-binding Hamiltonian with nearest- and next-nearest-neighbor hopping integrals can be decomposed into bulk and boundary parts for honeycomb lattice systems. The Hamiltonian decomposition reveals that next-nearest-neighbor hopping causes sizable changes in the energy spectrum of surface states even if the correction to the energy spectrum of bulk states is negligible. By applying the Hamiltonian decomposition to edge states in graphene systems, we show that the next-nearest-neighbor hopping stabilizes the edge states. The application of Hamiltonian decomposition to a general lattice system is discussed.

  7. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP-solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits of both decomposition and search approaches while overcoming their weaknesses.
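    For context, a plain chronological backtracking solver for binary CSPs (graph coloring as the example); this illustrates the search side only and is not the proposed omega-CDBT algorithm or its graph-based decomposition:

      def solve_csp(variables, domains, constraints):
          """Chronological backtracking for binary CSPs.
          constraints maps (x, y) to a predicate on (value_x, value_y)."""
          def consistent(var, value, assignment):
              for (x, y), pred in constraints.items():
                  if x == var and y in assignment and not pred(value, assignment[y]):
                      return False
                  if y == var and x in assignment and not pred(assignment[x], value):
                      return False
              return True

          def backtrack(assignment):
              if len(assignment) == len(variables):
                  return dict(assignment)
              var = next(v for v in variables if v not in assignment)
              for value in domains[var]:
                  if consistent(var, value, assignment):
                      assignment[var] = value
                      result = backtrack(assignment)
                      if result is not None:
                          return result
                      del assignment[var]
              return None

          return backtrack({})

      # Example: 3-color a triangle of mutually constrained variables.
      variables = ["a", "b", "c"]
      domains = {v: ["red", "green", "blue"] for v in variables}
      neq = lambda u, w: u != w
      constraints = {("a", "b"): neq, ("b", "c"): neq, ("a", "c"): neq}
      print(solve_csp(variables, domains, constraints))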

  8. Octree-based segmentation for terrestrial LiDAR point cloud data in industrial applications

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Bethel, James; Hu, Shuowen

    2016-03-01

    Automated and efficient algorithms for the segmentation of terrestrial LiDAR data are critical for the exploitation of 3D point clouds, where the ultimate goal is CAD modeling of the segmented data. In this work, a novel segmentation technique is proposed, starting with octree decomposition to recursively divide the scene into octants or voxels, followed by a novel split-and-merge framework that uses graph theory and a series of connectivity analyses to intelligently merge components into larger connected components. The connectivity analysis, based on a combination of proximity, orientation, and curvature connectivity criteria, is designed for the segmentation of pipes, vessels, and walls from terrestrial LiDAR data of piping systems at industrial sites, such as oil refineries, chemical plants, and steel mills. The proposed segmentation method is exercised on two terrestrial LiDAR datasets of a steel mill and a chemical plant, demonstrating its ability to correctly reassemble and segregate features of interest.
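    A minimal sketch of the octree decomposition stage alone (recursive subdivision into octants until a leaf is small enough or a depth limit is reached); the split-and-merge connectivity analysis is not modeled, and the thresholds are arbitrary:

      import numpy as np

      def octree_decompose(points, min_points=20, max_depth=6, depth=0, bounds=None):
          """Recursively split a point cloud into octants; return the leaf point sets."""
          if bounds is None:
              bounds = (points.min(axis=0), points.max(axis=0))
          lo, hi = bounds
          if len(points) <= min_points or depth >= max_depth:
              return [points]
          center = (lo + hi) / 2.0
          leaves = []
          for octant in range(8):
              upper = [(octant >> axis) & 1 for axis in range(3)]
              mask = np.ones(len(points), dtype=bool)
              for axis in range(3):
                  if upper[axis]:
                      mask &= points[:, axis] >= center[axis]
                  else:
                      mask &= points[:, axis] < center[axis]
              if mask.any():
                  sub_lo = np.where(upper, center, lo)
                  sub_hi = np.where(upper, hi, center)
                  leaves += octree_decompose(points[mask], min_points, max_depth,
                                             depth + 1, (sub_lo, sub_hi))
          return leaves

      cloud = np.random.rand(5000, 3)          # stand-in for a LiDAR point cloud
      print(len(octree_decompose(cloud)), "leaf voxels")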

  9. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    For both LASIS and LAMIS image data, the traditional MCA algorithm can separate the interference stripe signals from the background signals well and decompose the interference hyperspectral images effectively. The improved MCA algorithm not only retains the good results of the traditional MCA algorithm but also reduces the number of iterations and reaches the iterative convergence conditions much faster than the traditional MCA algorithm, which will also provide a very good solution for the new theory of compressive sensing.

  10. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  11. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    SciTech Connect

    Smolinski, B.

    1999-09-03

    The 0-1 multiple knapsack problem appears in many domains, from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform, and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
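    A small sketch of a greedy baseline for the 0-1 multiple knapsack problem, roughly in the spirit of the G(x) comparison algorithm (greedy with an optional random shuffle); it is not the agent/market-negotiation model M(x):

      import random

      def greedy_multiple_knapsack(items, capacities, shuffle=False):
          """items: list of (value, weight); capacities: one entry per knapsack.
          Greedily place each item into the first knapsack with room."""
          order = list(range(len(items)))
          if shuffle:
              random.shuffle(order)              # randomized greedy baseline
          else:
              order.sort(key=lambda i: items[i][0] / items[i][1], reverse=True)
          remaining = list(capacities)
          assignment, total_value = {}, 0
          for i in order:
              value, weight = items[i]
              for k, cap in enumerate(remaining):
                  if weight <= cap:
                      assignment[i] = k
                      remaining[k] -= weight
                      total_value += value
                      break
          return total_value, assignment

      items = [(60, 10), (100, 20), (120, 30), (40, 15), (70, 25)]
      print(greedy_multiple_knapsack(items, capacities=[40, 30]))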

  12. QRS detection by lifting scheme constructing multi-resolution morphological decomposition.

    PubMed

    Zhang, Pu; Ma, Heather T; Zhang, Qinyu

    2014-01-01

    The QRS complex detection algorithm is the core of ECG auto-diagnosis methods and deeply influences cardiac cycle division for signal compression. However, ECG signals collected by noninvasive surface electrodes are usually mixed with several kinds of interference, and waveform variation is the main reason ECG processing is hard to realize. This paper proposes a QRS complex detection algorithm based on multi-resolution mathematical morphological decomposition. This algorithm combines the strengths of the mathematical morphology method and multi-resolution decomposition for R peak detection. Moreover, a lifting construction method with a maximization updating operator is adopted to further improve the algorithm's performance, and an efficient R peak search-back algorithm is employed to reduce false positives (FP) and false negatives (FN). The proposed algorithm performs well on the MIT-BIH Arrhythmia Database, achieving over 99% detection rate, sensitivity, and positive predictivity, respectively, with a low computational burden. Therefore, the proposed method is appropriate for portable medical devices in telemedicine systems.
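    A simplified sketch of morphology-based R peak detection on a synthetic signal: a flat-structuring-element opening/closing estimates the baseline, and peaks are picked from the residual. This is a plain morphological filter, not the paper's lifting-scheme multi-resolution decomposition, and all thresholds are placeholders:

      import numpy as np
      from scipy.ndimage import grey_closing, grey_opening
      from scipy.signal import find_peaks

      fs = 360                                         # sampling rate (Hz), as in MIT-BIH
      t = np.arange(0, 10, 1 / fs)
      ecg = 0.3 * np.sin(2 * np.pi * 0.3 * t)          # synthetic baseline wander
      for beat in np.arange(0.5, 10, 0.8):             # synthetic R peaks every 0.8 s
          ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
      ecg += 0.05 * np.random.randn(len(t))            # additive noise

      # Baseline estimate: opening then closing with an element longer than a QRS (~0.2 s).
      width = int(0.2 * fs)
      baseline = grey_closing(grey_opening(ecg, size=width), size=width)
      detail = ecg - baseline                          # QRS energy survives, wander removed

      peaks, _ = find_peaks(detail, height=0.5, distance=int(0.4 * fs))
      print("detected R peaks:", len(peaks))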

  13. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    SciTech Connect

    Dong, Xue; Niu, Tianye; Zhu, Lei

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one
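    A toy illustration of why direct image-domain decomposition amplifies noise: inverting a nearly singular 2x2 material matrix per pixel magnifies the independent noise of the two scans. The matrix entries, noise levels, and basis maps below are arbitrary placeholders, not calibrated DECT values:

      import numpy as np

      # Hypothetical decomposition matrix (rows: low/high kVp, columns: two basis materials).
      A = np.array([[0.28, 0.52],
                    [0.20, 0.30]])

      rng = np.random.default_rng(0)
      true_basis = np.stack([np.ones((64, 64)), 0.1 * np.ones((64, 64))])  # flat basis maps
      low_kvp, high_kvp = np.tensordot(A, true_basis, axes=1)              # forward model
      low_kvp = low_kvp + rng.normal(0, 0.01, low_kvp.shape)               # independent noise
      high_kvp = high_kvp + rng.normal(0, 0.01, high_kvp.shape)

      # Direct image-domain decomposition: invert the 2x2 system pixel by pixel.
      basis = np.tensordot(np.linalg.inv(A), np.stack([low_kvp, high_kvp]), axes=1)
      print("noise std, CT images:   ", low_kvp.std(), high_kvp.std())
      print("noise std, basis images:", basis[0].std(), basis[1].std())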

  14. Unimolecular decomposition of methyltrichlorosilane: RRKM calculations

    SciTech Connect

    Osterheld, T.H.; Allendorf, M.D.; Melius, C.F.

    1993-06-01

    Based on reaction thermochemistry and estimates of Arrhenius A-factors, it is expected that Si-C bond cleavage, C-H bond cleavage, and HCl elimination will be the primary channels for the unimolecular decomposition of methyltrichlorosilane. Using RRKM theory, we calculated rate constants for these three reactions. The calculations support the conclusion that these three reactions are the major decomposition pathways. Rate constants for each reaction were calculated in the high-pressure limit (800--1500 K) and in the falloff regime (1300--1500 K) for bath gases of both helium and hydrogen. These calculations thus provide branching fractions as well as decomposition rates. We also calculated bimolecular rate constants for the overall decomposition in the low-pressure limit. Interesting and surprising kinetic behavior of this system and the individual reactions is discussed. The reactivity of this chlorinated organosilane is compared to that of other organosilanes.

  15. A Decomposition Theorem for Finite Automata.

    ERIC Educational Resources Information Center

    Santa Coloma, Teresa L.; Tucci, Ralph P.

    1990-01-01

    Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)

  16. Thermal Decomposition of Poly(methylphenylsilane)

    NASA Astrophysics Data System (ADS)

    Pan, Lujun; Zhang, Mei; Nakayama, Yoshikazu

    2000-03-01

    The thermal decomposition of poly(methylphenylsilane) was performed at constant heating rates and isothermal conditions. The evolved gases were studied by ionization-threshold mass spectroscopy. Pyrolysis under isothermal conditions reveals that the decomposition of poly(methylphenylsilane) is a type of depolymerization that has a first-order reaction. Kinetic analysis of the evolution spectra of CH3-Si-C6H5 radicals, phenyl and methyl substituents reveals the mechanism and activation energies of the decomposition reactions in main chains and substituents. It is found that the decomposition of main chains is a dominant reaction and results in the weight loss of approximately 90%. The effusion of phenyl and methyl substituents occurs in the two processes of rearrangement of main chains and the formation of stable Si-C containing residuals.

  17. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal's time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  18. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
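    A minimal sketch of a dispatch-style heuristic for window-constrained packing on a single resource (earliest deadline first, placing each job as early as its window allows); the job data are made up, and the look-ahead and genetic methods from the paper are not modeled:

      def dispatch_schedule(jobs):
          """jobs: list of (duration, window_start, window_end).
          Greedily schedule jobs in order of window end; drop jobs that no longer fit."""
          timeline_end = 0.0
          scheduled = []
          for duration, w_start, w_end in sorted(jobs, key=lambda j: j[2]):
              start = max(timeline_end, w_start)
              if start + duration <= w_end:            # job still fits inside its window
                  scheduled.append((start, start + duration))
                  timeline_end = start + duration
          return scheduled

      jobs = [(2, 0, 5), (1, 1, 3), (3, 2, 10), (2, 4, 7)]
      print(dispatch_schedule(jobs))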

  19. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  20. High temperature decomposition of hydrogen peroxide

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2005-01-01

    Nitric oxide (NO) is oxidized into nitrogen dioxide (NO2) by the high temperature decomposition of a hydrogen peroxide solution to produce the oxidative free radicals, hydroxyl and hydroperoxyl. The hydrogen peroxide solution is impinged upon a heated surface in a stream of nitric oxide where it decomposes to produce the oxidative free radicals. Because the decomposition of the hydrogen peroxide solution occurs within the stream of the nitric oxide, rapid gas-phase oxidation of nitric oxide into nitrogen dioxide occurs.