Science.gov

Sample records for point decomposition algorithm

  1. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  2. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".

  3. Finding corner point correspondence from wavelet decomposition of image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; LeMoigne, Jacqueline

    1997-01-01

    A time-efficient algorithm for image registration between two images that differ in translation is discussed. The algorithm is based on a coarse-to-fine strategy using wavelet decomposition of both images. The wavelet decomposition serves two different purposes: (1) its high-frequency components are used to detect feature points (corner points here) and (2) it provides a coarse-to-fine structure for making the algorithm time efficient. The algorithm is based on detecting the corner points from one of the images, called the reference image, and computing corresponding points from the other image, called the test image, by computing local correlations over 7x7 windows centered around the corner points. The corresponding points are detected at the lowest decomposition level in a search area of about 11x11 (depending on the translation), and potential points of correspondence are projected onto higher levels. In the subsequent levels the local correlations are computed in a search area of no more than 3x3 to refine the correspondence.
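
    A minimal Python sketch of the local-correlation matching step described above: for a corner detected in the reference image, 7x7 windows within an 11x11 search area of the test image are scored by normalized cross-correlation and the best offset is returned. The wavelet pyramid and the coarse-to-fine projection are omitted, and all names here are illustrative rather than taken from the paper.

      import numpy as np

      def match_corner(ref, test, corner, search=5, half=3):
          """Best offset of the 7x7 window around `corner` within an
          11x11 search area of the test image (local correlation only)."""
          r, c = corner
          patch = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
          patch = patch - patch.mean()
          best_score, best_off = -np.inf, (0, 0)
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  win = test[r + dr - half:r + dr + half + 1,
                             c + dc - half:c + dc + half + 1].astype(float)
                  if win.shape != patch.shape:
                      continue                     # window falls off the image
                  win = win - win.mean()
                  denom = np.linalg.norm(patch) * np.linalg.norm(win)
                  score = (patch * win).sum() / denom if denom else -np.inf
                  if score > best_score:
                      best_score, best_off = score, (dr, dc)
          return best_off, best_score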

  4. Algorithms for sparse nonnegative Tucker decompositions.

    PubMed

    Mørup, Morten; Hansen, Lars Kai; Arnfred, Sidse M

    2008-08-01

    There is increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce the ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities; hence, we propose algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is more appropriate for the data as well as to select the number of components by turning off excess components. The algorithms for SN-TUCKER can be downloaded from Mørup (2007).
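
    The SN-TUCKER updates are not reproduced here, but the underlying idea of a sparsity-penalized multiplicative update is easiest to see in the two-mode (NMF) special case. The sketch below is an illustration under that simplification, not the paper's algorithm: it minimizes the Frobenius fit of V ≈ WH plus an L1 penalty on H, while the Tucker case adds a core array and one factor matrix per mode.

      import numpy as np

      def sparse_nmf(V, rank, lam=0.1, iters=200, eps=1e-9):
          """Multiplicative updates for V ≈ W @ H with an L1 penalty (lam)
          promoting sparsity in H; nonnegativity is preserved automatically."""
          rng = np.random.default_rng(0)
          W = rng.random((V.shape[0], rank))
          H = rng.random((rank, V.shape[1]))
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # sparsity enters here
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H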

  5. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.

  6. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.

  7. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may place a particular emphasis on accuracy or on computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
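
    The greedy loop described above is compact enough to sketch directly. The following illustrative Python routine (not the MPD++ code) assumes a dictionary whose columns are unit-norm atoms: each pass picks the atom most correlated with the residual, records its coefficient, and subtracts its contribution.

      import numpy as np

      def matching_pursuit(signal, dictionary, n_atoms=10, tol=1e-6):
          """Classical MPD: greedy atom selection by cross-correlation."""
          residual = signal.astype(float).copy()
          coeffs = np.zeros(dictionary.shape[1])
          for _ in range(n_atoms):
              corr = dictionary.T @ residual            # correlate every atom
              k = int(np.argmax(np.abs(corr)))
              if abs(corr[k]) < tol:                    # stopping criterion
                  break
              coeffs[k] += corr[k]
              residual -= corr[k] * dictionary[:, k]    # peel off the chosen atom
          return coeffs, residual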

  8. Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness.

    PubMed

    Zhou, Guoxu; Cichocki, Andrzej; Zhao, Qibin; Xie, Shengli

    2015-12-01

    Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of data. However, as the data tensor often has multiple modes and is large scale, the existing NTD algorithms suffer from a very high computational complexity in terms of both storage and computation time, which has been one major obstacle for practical applications of NTD. To overcome these disadvantages, we show how low (multilinear) rank approximation (LRA) of tensors is able to significantly simplify the computation of the gradients of the cost function, upon which a family of efficient first-order NTD algorithms are developed. Besides dramatically reducing the storage complexity and running time, the new algorithms are quite flexible and robust to noise, because any well-established LRA approaches can be applied. We also show how nonnegativity incorporating sparsity substantially improves the uniqueness property and partially alleviates the curse of dimensionality of the Tucker decompositions. Simulation results on synthetic and real-world data justify the validity and high efficiency of the proposed NTD algorithms.

  9. A convergent hybrid decomposition algorithm model for SVM training.

    PubMed

    Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco

    2009-06-01

    Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be very large and the Hessian matrix cannot be stored. To take this issue into account, a common strategy consists in using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potential of the approach.

  10. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, partly motivated by the potential of parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated. The examples are two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface system, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.

  11. Nonparametric decomposition of quasi-periodic time series for change-point detection

    NASA Astrophysics Data System (ADS)

    Artemov, Alexey; Burnaev, Evgeny; Lokot, Andrey

    2015-12-01

    The paper is concerned with the sequential online change-point detection problem for a dynamical system driven by a quasiperiodic stochastic process. We propose a multicomponent time series model and an effective online decomposition algorithm to approximate the components of the models. Assuming the stationarity of the obtained components, we approach the change-point detection problem on a per-component basis and propose two online change-point detection schemes corresponding to two real-world scenarios. Experimental results for decomposition and detection algorithms for synthesized and real-world datasets are provided to demonstrate the efficiency of our change-point detection framework.
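
    The paper's two detection schemes are not reproduced here; as a stand-in, the sketch below runs a one-sided CUSUM detector on a single stationarized component, which is the kind of per-component test the framework applies after decomposition. The baseline window length and threshold are illustrative choices.

      import numpy as np

      def cusum_detect(component, drift=0.0, threshold=5.0, baseline=20):
          """Return the first index where the cumulative positive deviation
          from the baseline mean exceeds `threshold`, or None."""
          mu = float(np.mean(component[:baseline]))
          s = 0.0
          for i, x in enumerate(component):
              s = max(0.0, s + (x - mu) - drift)
              if s > threshold:
                  return i
          return None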

  12. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  13. The Empirical Mode Decomposition algorithm via Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Artemyev, Dmitry N.; Khramov, Alexander G.

    2014-09-01

    In this paper we consider the problem of implementing a fast algorithm for the Empirical Mode Decomposition (EMD). EMD is one of the newest methods for the decomposition of non-linear and non-stationary signals. The basis of EMD is formed "on-the-fly", i.e. it depends on the distribution of the signal and is not given a priori, in contrast to the Fourier Transform (FT) or the Wavelet Transform (WT). The EMD requires interpolation of the local extrema sets of the signal to find the upper and lower envelopes. Data interpolation on an irregular lattice is a very low-performance procedure. The classical description of EMD by Huang suggests doing this through splines, i.e. through solving a system of equations. The existence of a fast algorithm is the main advantage of the FT. A simple description of an algorithm in terms of the Fast Fourier Transform (FFT) is a standard practice to reduce the operation count. We offer a fast implementation of EMD (FEMD) through the FFT and some other cost-efficient algorithms. The basic two-stage interpolation algorithm for EMD is composed of an Upscale procedure through the FFT and a Downscale procedure through a selection procedure for the signal's points. First we consider the local maxima (or minima) set without reference to the OX axis, i.e. on a regular lattice. The Upscale through the FFT changes the signal's length to the Least Common Multiple (LCM) value of all distances between neighboring extrema on the OX axis. If the LCM value is too large then it is necessary to limit the local set of extrema. In this case it is an analog of the spline interpolation. A demo of FEMD in a noise reduction task for OCT is shown.
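
    For contrast with the FFT-based interpolation proposed in the paper, the classical spline-based sifting step that it replaces can be sketched in a few lines (an illustration, not the authors' FEMD code):

      import numpy as np
      from scipy.signal import argrelextrema
      from scipy.interpolate import CubicSpline

      def sift_once(x):
          """One classical sifting step: spline the maxima and minima into
          upper/lower envelopes and subtract their mean from the signal."""
          t = np.arange(len(x))
          imax = argrelextrema(x, np.greater)[0]
          imin = argrelextrema(x, np.less)[0]
          if len(imax) < 2 or len(imin) < 2:
              return x                       # too few extrema to form envelopes
          upper = CubicSpline(imax, x[imax])(t)
          lower = CubicSpline(imin, x[imin])(t)
          return x - (upper + lower) / 2.0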

  14. Avoiding spurious submovement decompositions: a globally optimal algorithm.

    SciTech Connect

    Rohrer, Brandon Robinson; Hogan, Neville

    2003-07-01

    Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.

  15. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; Wu, Kun-Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
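
    The incremental algorithms themselves are beyond a short sketch, but the quantity they maintain, the core number of every vertex, is computed by the standard non-incremental peeling procedure shown below (a plain-Python illustration; the paper's contribution is updating these values locally after each edge change instead of re-running this loop).

      def core_numbers(adj):
          """Peel a minimum-degree vertex at a time; its core number is the
          largest minimum degree seen so far.  `adj` maps vertices to sets
          of neighbours."""
          degree = {v: len(nbrs) for v, nbrs in adj.items()}
          alive = {v: set(nbrs) for v, nbrs in adj.items()}
          core, k = {}, 0
          while degree:
              v = min(degree, key=degree.get)
              k = max(k, degree[v])
              core[v] = k
              for u in alive[v]:             # delete v from the remaining graph
                  alive[u].discard(v)
                  degree[u] -= 1
              del degree[v], alive[v]
          return core

      # A triangle plus a pendant vertex: the triangle is the 2-core.
      print(core_numbers({"a": {"b", "c"}, "b": {"a", "c"},
                          "c": {"a", "b", "d"}, "d": {"c"}}))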

  16. Efficient variants of the vertex space domain decomposition algorithm

    SciTech Connect

    Chan, T.F.; Shao, J.P. (Dept. of Mathematics); Mathew, T.P. (Dept. of Mathematics)

    1994-11-01

    Several variants of the vertex space algorithm of Smith for two-dimensional elliptic problems are described. The vertex space algorithm is a domain decomposition method based on nonoverlapping subregions, in which the reduced Schur complement system on the interface is solved using a generalized block Jacobi-type preconditioner, with the blocks corresponding to the vertex space, edges, and a coarse grid. Two kinds of approximations are considered for the edge and vertex space subblocks, one based on Fourier approximation, and another based on an algebraic probing technique in which sparse approximations to these subblocks are computed. The motivation is to improve the efficiency of the algorithm without sacrificing the optimal convergence rate. Numerical and theoretical results on the performance of these algorithms, including variants of an algorithm of Bramble, Pasciak, and Schatz are presented.

  17. Effective decomposition algorithm for self-aligned double patterning lithography

    NASA Astrophysics Data System (ADS)

    Zhang, Hongbo; Du, Yuelin; Wong, Martin D. F.; Topaloglu, Rasit; Conley, Will

    2011-04-01

    Self-aligned double patterning (SADP) lithography is a novel lithography technology that has the intrinsic capability to reduce the overlay in double patterning lithography (DPL). Although SADP is a critical technology for overcoming the lithography difficulties in sub-32nm 2D design, the questions of how to decompose a layout with reasonable overlay and how to perform a decomposability check are still two open problems with no published work. In this paper, by formulating the problem as a SAT instance, we can answer the above two questions optimally. This is the first published paper with a detailed algorithm for performing SADP decomposition. Our method can efficiently check whether a layout is decomposable. For a decomposable layout, our algorithm is guaranteed to find a decomposition solution that meets a reasonable overlay-reduction requirement. With small changes to the clauses in the SAT formula, we can address the decomposition problem for both the positive tone process and the negative tone process. Experimental results validate our method, and decomposition results for the Nangate Open Cell Library and larger test cases are also provided with competitive run times.

  18. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    NASA Astrophysics Data System (ADS)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states, and a whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously and are then randomly subjected to one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random phase gate, the rotation angle, the binary sequence and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  19. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal components analysis. Then the feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of two point clouds are determined according to the descriptor's similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sampling consistency algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
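
    The final SVD step, recovering the rigid transformation from the optimized correspondences, is standard enough to sketch (a generic Kabsch-style illustration, not the authors' code):

      import numpy as np

      def rigid_transform(P, Q):
          """Rotation R and translation t with R @ p + t ≈ q for corresponding
          rows of P (source) and Q (target), via SVD of the cross-covariance."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)              # 3x3 cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                     # guard against a reflection
          t = cq - R @ cp
          return R, t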

  20. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  1. Implementation and performance of a domain decomposition algorithm in Sisal

    SciTech Connect

    DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.

    1993-09-23

    Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely-used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer, and a cache-coherent scalar multiprocessor.

  2. Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multisource data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of Girvan and Newman, for example, makes it impractical for graphs with more than approximately 10^4 nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10^9 nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in Girvan and Newman
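
    For scale, the baseline Girvan-Newman method mentioned above can be run directly on small graphs with networkx; the illustrative snippet below is the cubic-complexity reference point, not the LLNL implementation.

      import networkx as nx
      from networkx.algorithms.community import girvan_newman

      G = nx.karate_club_graph()                 # a small benchmark graph
      first_split = next(girvan_newman(G))       # communities after removing
                                                 # the highest-betweenness edges
      print([sorted(c) for c in first_split])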

  3. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Weerapurage, Dinesh P; Sullivan, Blair D; Groer, Christopher S

    2013-01-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of required dynamic programming tables and excessive running times of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree-decomposition based approach to solve maximum weighted independent set. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  4. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  5. On the equivalence of a class of inverse decomposition algorithms for solving systems of linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A class of direct inverse decomposition algorithms for solving systems of linear equations is presented. Their behavior in the presence of round-off errors is analyzed. It is shown that under some mild restrictions on their implementation, the class of direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.

  6. Singular value decomposition utilizing parallel algorithms on graphical processors

    SciTech Connect

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

    One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, Cx = (1/K) Σ_k X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder
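
    A small numpy check of the equivalence used above: the squared singular values of the snapshot matrix, divided by K, equal the eigenvalues of the sample spectral matrix, so the whitening eigen-structure can be obtained without ever forming Cx. (Synthetic data; illustration only.)

      import numpy as np

      rng = np.random.default_rng(0)
      N, K = 8, 64                                   # array elements, snapshots
      X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

      Cx = (X @ X.conj().T) / K                      # sample spectral matrix
      eig = np.sort(np.linalg.eigvalsh(Cx))[::-1]

      s = np.linalg.svd(X, compute_uv=False)         # SVD route: no Cx needed
      print(np.allclose(eig, s**2 / K))              # True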

  7. A fast iterated conditional modes algorithm for water-fat decomposition in MRI.

    PubMed

    Huang, Fangping; Narayan, Sreenath; Wilson, David; Johnson, David; Zhang, Guo-Qiang

    2011-08-01

    Decomposition of water and fat in magnetic resonance imaging (MRI) is important for biomedical research and clinical applications. In this paper, we propose a two-phased approach for the three-point water-fat decomposition problem. Our contribution consists of two components: 1) a background-masked Markov random field (MRF) energy model to formulate the local smoothness of field inhomogeneity; 2) a new iterated conditional modes (ICM) algorithm allowing high-performance optimization of the MRF energy model. The MRF energy model is integrated with background masking to prevent error propagation of background estimates as well as to improve efficiency. The central component of our new ICM algorithm is the stability tracking (ST) mechanism, intended to dynamically track iterative stability on pixels so that computation per iteration is performed only on unstable pixels. The ST mechanism significantly improves the efficiency of ICM. We also develop a median-based initialization algorithm to provide good initial guesses for ICM iterations, and an adaptive gradient-based scheme for parametric configuration of the MRF model. We evaluate the robustness of our approach with high-resolution mouse datasets acquired from 7T MRI. PMID:21402510

  8. Chaotic Visual Cryptosystem Using Empirical Mode Decomposition Algorithm for Clinical EEG Signals.

    PubMed

    Lin, Chin-Feng

    2016-03-01

    This paper proposes a chaotic visual cryptosystem using an empirical mode decomposition (EMD) algorithm for clinical electroencephalography (EEG) signals. The basic design concept is to integrate two-dimensional (2D) chaos-based encryption scramblers, the EMD algorithm, and a 2D block interleaver method to achieve a robust and unpredictable visual encryption mechanism. Energy-intrinsic mode function (IMF) distribution features of the clinical EEG signal are developed for the chaotic encryption parameters. The maximum and second maximum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used as the starting points of the chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. The minimum and second minimum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used as the security level parameters of the chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. Three EEG databases and seventeen clinical EEG signals were tested, and the average r and mse values are 0.0201 and 4.2626 × 10(-29), respectively, for the original and the chaotically encrypted (through EMD) clinical EEG signals. The chaotically encrypted signal cannot be recovered if there is an error in the input parameters, for example, an initial point error of 0.000001 %. The encryption effects of the proposed chaotic EMD visual encryption mechanism are excellent.
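
    The paper's 2D scrambler and EMD-derived key schedule are not reproduced here, but the role the logistic map plays can be illustrated with a generic 1D permutation scrambler whose key is the map's initial point and control parameter (all values below are illustrative):

      import numpy as np

      def logistic_keystream(x0, r, n):
          """Iterate x <- r*x*(1-x); (x0, r) act as the secret key."""
          x, out = x0, np.empty(n)
          for i in range(n):
              x = r * x * (1.0 - x)
              out[i] = x
          return out

      def scramble(signal, x0=0.3141592, r=3.99):
          """Permute samples by the rank order of the chaotic keystream."""
          order = np.argsort(logistic_keystream(x0, r, len(signal)))
          return signal[order], order

      def unscramble(scrambled, order):
          restored = np.empty_like(scrambled)
          restored[order] = scrambled
          return restored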

  9. A balanced decomposition algorithm for parallel solutions of very large sparse systems

    SciTech Connect

    Zecevic, A.I.; Siljak, D.D.

    1995-12-01

    In this paper we present an algorithm for balanced bordered block diagonal (BBD) decompositions of very large symmetric positive definite or diagonally dominant sparse matrices. The algorithm represents a generalization of the method described, and is primarily aimed at parallel solutions of very large sparse systems (> 20,000 equations). A variety of experimental results are provided to illustrate the performance of the algorithm and demonstrate its potential for computing on massively parallel architectures.

  10. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) along with determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.
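
    The intensity-weighted center-of-mass computation at the heart of the centroid finder is a one-liner in numpy; the sketch below assumes a binary mask for one segmented object and leaves the perimeter tracing and neural-network overlap decomposition aside.

      import numpy as np

      def intensity_centroid(image, mask):
          """Intensity-weighted center of mass of the object marked by `mask`."""
          rows, cols = np.nonzero(mask)
          w = image[rows, cols].astype(float)
          total = w.sum()
          return (rows * w).sum() / total, (cols * w).sum() / total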

  11. Decomposition algorithms for stochastic programming on a computational grid.

    SciTech Connect

    Linderoth, J.; Wright, S.; Mathematics and Computer Science; Axioma Inc.

    2003-01-01

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.

  12. Determination of the Thermal Decomposition Products of Terephthalic Acid by Using Curie-Point Pyrolyzer

    NASA Astrophysics Data System (ADS)

    Begüm Elmas Kimyonok, A.; Ulutürk, Mehmet

    2016-04-01

    The thermal decomposition behavior of terephthalic acid (TA) was investigated by thermogravimetry/differential thermal analysis (TG/DTA) and Curie-point pyrolysis. TG/DTA analysis showed that TA sublimes at 276°C prior to decomposition. Pyrolysis studies were carried out at various temperatures ranging from 160 to 764°C. Decomposition products were analyzed and their structures were determined by gas chromatography-mass spectrometry (GC-MS). A total of 11 degradation products were identified at 764°C, whereas no peak was observed below 445°C. Benzene, benzoic acid, and 1,1′-biphenyl were identified as the major decomposition products, and other degradation products such as toluene, benzophenone, diphenylmethane, styrene, benzaldehyde, phenol, 9H-fluorene, and 9-phenyl-9H-fluorene were also detected. A pyrolysis mechanism was proposed based on the findings.

  13. Algorithmic and complexity results for decompositions of biological networks into monotone subsystems.

    PubMed

    DasGupta, Bhaskar; Enciso, German Andres; Sontag, Eduardo; Zhang, Yi

    2007-01-01

    A useful approach to the mathematical analysis of large-scale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated to finding decompositions which are optimal in an appropriate sense. In graph-theoretic language, the problems can be recast in terms of maximal sign-consistent subgraphs. The theoretical results include polynomial-time approximation algorithms as well as constant-ratio inapproximability results. One of the algorithms, which has a worst-case guarantee of 87.9% from optimality, is based on the semidefinite programming relaxation approach of Goemans-Williamson [Goemans, M., Williamson, D., 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42 (6), 1115-1145]. The algorithm was implemented and tested on a Drosophila segmentation network and an Epidermal Growth Factor Receptor pathway model, and it was found to perform close to optimally.

  14. Automated decomposition algorithm for Raman spectra based on a Voigt line profile model.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2016-05-20

    Raman spectra measured by spectrometers usually suffer from band overlap and random noise. In this paper, an automated decomposition algorithm based on a Voigt line profile model for Raman spectra is proposed to solve this problem. To decompose a measured Raman spectrum, a Voigt line profile model is introduced to parameterize the measured spectrum, and a Gaussian function is used as the instrumental broadening function. Hence, the issue of spectral decomposition is transformed into a multiparameter optimization problem of the Voigt line profile model parameters. The algorithm can eliminate instrumental broadening, obtain a recovered Raman spectrum, resolve overlapping bands, and suppress random noise simultaneously. Moreover, the recovered spectrum can be decomposed to a group of Lorentzian functions. Experimental results on simulated Raman spectra show that the performance of this algorithm is much better than a commonly used blind deconvolution method. The algorithm has also been tested on the industrial Raman spectra of ortho-xylene and proved to be effective.
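
    A Voigt profile can be evaluated through the Faddeeva function and fitted by nonlinear least squares; the sketch below fits a single band to synthetic data, whereas the paper's algorithm jointly fits many overlapping bands together with a Gaussian instrumental broadening term (parameter names here are illustrative).

      import numpy as np
      from scipy.special import wofz
      from scipy.optimize import curve_fit

      def voigt(x, amp, center, sigma, gamma):
          """Gaussian (sigma) convolved with Lorentzian (gamma) via wofz."""
          z = ((x - center) + 1j * gamma) / (sigma * np.sqrt(2.0))
          return amp * np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      x = np.linspace(-10.0, 10.0, 400)
      y = voigt(x, 5.0, 0.5, 1.0, 0.7)
      y += 0.01 * np.random.default_rng(1).standard_normal(x.size)
      popt, _ = curve_fit(voigt, x, y, p0=[1.0, 0.0, 1.0, 0.5])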

  15. Automated decomposition algorithm for Raman spectra based on a Voigt line profile model.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2016-05-20

    Raman spectra measured by spectrometers usually suffer from band overlap and random noise. In this paper, an automated decomposition algorithm based on a Voigt line profile model for Raman spectra is proposed to solve this problem. To decompose a measured Raman spectrum, a Voigt line profile model is introduced to parameterize the measured spectrum, and a Gaussian function is used as the instrumental broadening function. Hence, the issue of spectral decomposition is transformed into a multiparameter optimization problem of the Voigt line profile model parameters. The algorithm can eliminate instrumental broadening, obtain a recovered Raman spectrum, resolve overlapping bands, and suppress random noise simultaneously. Moreover, the recovered spectrum can be decomposed to a group of Lorentzian functions. Experimental results on simulated Raman spectra show that the performance of this algorithm is much better than a commonly used blind deconvolution method. The algorithm has also been tested on the industrial Raman spectra of ortho-xylene and proved to be effective. PMID:27411136

  16. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  17. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  18. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.

  19. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of every previous algorithm is not high enough. In this work, we propose a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criterion of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were compared with those of the proposed algorithm. A genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565

  20. Inferring gene regulatory networks by singular value decomposition and gravitation field algorithm.

    PubMed

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of every previous algorithm is not high enough. In this work, we propose a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criterion of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were compared with those of the proposed algorithm. A genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms.

  1. A Parallel Domain Decomposition BEM Algorithm for Three Dimensional Exponentially Graded Elasticity

    SciTech Connect

    Ortiz Tavara, Jhonny E; Shelton Jr, William Allison; Mantic, Vladislav; Criado, Rafael; Paris, Federico; Gray, Leonard J

    2008-01-01

    A parallel domain decomposition boundary integral algorithm for three-dimensional exponentially graded elasticity has been developed. As this subdomain algorithm allows the grading direction to vary in the structure, geometries arising from practical FGM applications can be handled. Moreover, the boundary integral algorithm scales well with the number of processors, also helping to alleviate the high computational cost of evaluating the Green's function. Numerical results for cylindrical geometries show excellent agreement with the new analytical solution deduced for axisymmetric plane strain states in a radially graded material.

  2. Cross-comparison of three electromyogram decomposition algorithms assessed with experimental and simulated data.

    PubMed

    Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A

    2015-01-01

    The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼ 97% and ∼ 92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.

  3. Cross-comparison of three electromyogram decomposition algorithms assessed with experimental and simulated data.

    PubMed

    Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A

    2015-01-01

    The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼ 97% and ∼ 92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy. PMID:24876131

  4. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) relies on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, the QR decomposition is implemented with the Coordinate Rotation DIgital Computer (CORDIC) algorithm; a CORDIC-based QR decomposition requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (EigenValue Decomposition) processor based on a triangular systolic array (TSA) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource usage is presented for different matrix sizes.
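
    A minimal illustration of the idea behind a CORDIC-style QR decomposition is a Givens-rotation QR, in which the matrix is triangularized by a sequence of plane rotations; in hardware each rotation would be realized by CORDIC shift-and-add iterations, while the sketch below (an illustrative assumption, not the article's FPGA design) simply uses floating-point sines and cosines.

    ```python
    import numpy as np

    def givens_qr(A):
        """QR decomposition by Givens (plane) rotations.

        Each rotation zeroes one subdiagonal entry; a CORDIC core would
        implement the same rotation with shift-and-add iterations.
        """
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        R = A.copy()
        Q = np.eye(m)
        for j in range(n):
            for i in range(m - 1, j, -1):
                a, b = R[i - 1, j], R[i, j]
                r = np.hypot(a, b)
                if r == 0.0:
                    continue
                c, s = a / r, b / r
                G = np.array([[c, s], [-s, c]])          # 2x2 plane rotation
                R[[i - 1, i], :] = G @ R[[i - 1, i], :]  # zero R[i, j]
                Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
        return Q, R

    if __name__ == "__main__":
        A = np.random.randn(5, 3)
        Q, R = givens_qr(A)
        print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
    ```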

  5. Combining DC algorithms (DCAs) and decomposition techniques for the training of nonpositive-semidefinite kernels.

    PubMed

    Akoa, François Bertrand

    2008-11-01

    Today, decomposition methods are among the most popular methods for training support vector machines (SVMs). When kernels that do not satisfy Mercer's condition are used, new techniques must be designed to handle the nonpositive-semidefinite kernels resulting from this choice. In this work we incorporate difference-of-convex-functions (DC) optimization techniques into decomposition methods to tackle this difficulty. The new approach needs no problem modification, and we show that the mere use of a truncated DC algorithm (DCA) in the decomposition scheme produces a sufficient decrease of the objective function at each iteration. Thanks to this property, an asymptotic convergence proof of the new algorithm is obtained without any blockwise convexity assumption on the objective function. We also investigate a working set selection rule using second-order information for sequential minimal optimization (SMO)-type decomposition in the spirit of DC optimization. Numerical results show the robustness and the efficiency of the new methods compared with state-of-the-art software. PMID:18990641
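
    The core of the DC idea in this setting can be illustrated by splitting an indefinite kernel matrix into a difference of two positive-semidefinite matrices via its eigendecomposition; the sketch below is only a minimal illustration of that splitting (the sigmoid kernel and all names are illustrative assumptions, not the paper's training procedure).

    ```python
    import numpy as np

    def dc_split(K):
        """Split a symmetric (possibly indefinite) kernel matrix K into
        K = K_plus - K_minus with both parts positive semidefinite."""
        w, V = np.linalg.eigh(K)
        K_plus = (V * np.clip(w, 0.0, None)) @ V.T
        K_minus = (V * np.clip(-w, 0.0, None)) @ V.T
        return K_plus, K_minus

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((30, 4))
        # The sigmoid (tanh) kernel does not satisfy Mercer's condition in general.
        K = np.tanh(0.5 * X @ X.T - 1.0)
        K_plus, K_minus = dc_split(K)
        print(np.allclose(K, K_plus - K_minus))
        print(np.linalg.eigvalsh(K).min(), np.linalg.eigvalsh(K_plus).min())
    ```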

  6. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    SciTech Connect

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440×10^6 particles on 65,536 MPI tasks.

  7. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  8. Decomposition-based multiobjective evolutionary algorithm for community detection in dynamic social networks.

    PubMed

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.
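
    At the heart of any MOEA/D-style method is the decomposition of a multiobjective problem into scalar subproblems, each defined by a weight vector and an aggregation function; the fragment below sketches only that scalarization step (the Tchebycheff form, with hypothetical objective values), not the full community-detection algorithm.

    ```python
    import numpy as np

    def tchebycheff(f, weights, z_star):
        """Tchebycheff scalarization used by decomposition-based MOEAs:
        g(x | w, z*) = max_i w_i * |f_i(x) - z_i*|."""
        return np.max(weights * np.abs(f - z_star))

    if __name__ == "__main__":
        # Hypothetical objective values for one candidate partition:
        # f1 = -modularity (minimized), f2 = 1 - NMI with the previous time step.
        f = np.array([-0.42, 0.15])
        z_star = np.array([-0.60, 0.00])      # best value seen so far per objective
        # A set of evenly spread weight vectors defines the subproblems.
        W = np.array([[w, 1.0 - w] for w in np.linspace(0.0, 1.0, 11)])
        scores = [tchebycheff(f, w, z_star) for w in W]
        print(np.round(scores, 3))
    ```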

  9. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  10. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging by the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques exist, such as moving averages, splines, and Loess smoothing. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal-trend decomposition procedure based on Loess (STL) decomposes a VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces results similar to those of the previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise", including diurnal variations, while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) the root mean squared error (RMSE) of a straight line relative to the STL trend line; (ii) the range of the STL remainder; and (iii) the variance of the STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details, preserving maxima and minima better than the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
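
    A minimal sketch of the kind of fitness function such an approach might optimize is given below, using the STL implementation in statsmodels and a crude random search as a stand-in for the genetic algorithm; the weights, parameter ranges, and synthetic series are illustrative assumptions, not the study's actual configuration.

    ```python
    import numpy as np
    from statsmodels.tsa.seasonal import STL

    def stl_objective(series, period, seasonal, trend, w=(1.0, 1.0, 1.0)):
        """Weighted objective: RMSE of a straight line vs. the STL trend,
        plus the range and the variance of the STL remainder."""
        res = STL(series, period=period, seasonal=seasonal, trend=trend).fit()
        t = np.arange(len(series))
        slope, intercept = np.polyfit(t, res.trend, 1)
        rmse = np.sqrt(np.mean((res.trend - (slope * t + intercept)) ** 2))
        return w[0] * rmse + w[1] * np.ptp(res.resid) + w[2] * np.var(res.resid)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        t = np.arange(30 * 24)                               # hourly samples, 30 days
        vwc = (0.25 + 0.0001 * t + 0.02 * np.sin(2 * np.pi * t / 24)
               + 0.005 * rng.standard_normal(t.size))        # synthetic VWC-like signal
        best = min(((s, tr, stl_objective(vwc, 24, s, tr))   # exhaustive mini-search
                    for s in (7, 13, 25) for tr in (25, 51, 101)),
                   key=lambda x: x[2])
        print("best (seasonal, trend, objective):", best)
    ```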

  11. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space
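
    As a rough illustration of the low-rank-plus-sparse idea (not the LLSG implementation itself, which works on local annular patches), the sketch below approximates a data matrix by a randomized low-rank term and assigns what remains, after entry-wise thresholding, to a sparse term; all sizes and thresholds are arbitrary assumptions.

    ```python
    import numpy as np

    def low_rank_plus_sparse(M, rank=5, thresh=3.0, seed=0):
        """Approximate M ~ L + S + G: L from a randomized rank-`rank` projection,
        S from entry-wise thresholding of the residual, G the remaining noise."""
        rng = np.random.default_rng(seed)
        Y = M @ rng.standard_normal((M.shape[1], rank))   # random sketch of the column space
        Q, _ = np.linalg.qr(Y)
        L = Q @ (Q.T @ M)                                 # low-rank term (starlight/speckles)
        R = M - L
        S = np.where(np.abs(R) > thresh * R.std(), R, 0.0)  # sparse term (companion-like signal)
        return L, S, R - S

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        frames = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 400))  # rank-5 "speckles"
        frames[30, 200] += 50.0                            # a faint point-like signal
        L, S, G = low_rank_plus_sparse(frames)
        print("residual std:", round(G.std(), 3), "| sparse term at spike:", round(S[30, 200], 1))
    ```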

  12. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalizer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtimes about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.

  13. Communication: Active space decomposition with multiple sites: Density matrix renormalization group algorithm

    SciTech Connect

    Parker, Shane M.; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μE_h or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  14. Automated polysomnogram artifact compensation using the generalized singular value decomposition algorithm.

    PubMed

    Fairley, Jacqueline; Johnson, Ashley N; Georgoulas, George; Vachtsevanos, George

    2010-01-01

    Manual/visual polysomnogram (psg) analysis is a standard and commonly implemented procedure utilized in the diagnosis and treatment of sleep related human pathologies. Current technological trends in psg analysis focus upon translating manual psg analysis into automated/computerized approaches. A necessary first step in establishing efficient automated human sleep analysis systems is the development of reliable pre-processing tools to discriminate between outlier/artifact instances and data of interest. This paper investigates the application of an automated approach, using the generalized singular value decomposition algorithm, to compensate for specific psg artifacts.

  15. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms are not very effective on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design weights that optimize the TF-IDF output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall. PMID:26448738
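
    As a rough sketch of the TF-IDF-plus-similarity scoring idea (not the AECKP implementation, and with toy English documents standing in for segmented Chinese course material), the fragment below weights each term's TF-IDF value by its document's similarity to the course centroid and keeps the top-scoring terms as candidate knowledge points.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "pointers store memory addresses and can be dereferenced",
        "arrays and pointers are closely related in the c language",
        "a for loop repeats a block of statements a fixed number of times",
    ]

    vec = TfidfVectorizer()
    Xd = vec.fit_transform(docs).toarray()            # document-term TF-IDF matrix
    centroid = Xd.mean(axis=0, keepdims=True)         # crude "course topic" vector
    sim = cosine_similarity(Xd, centroid).ravel()     # each document's similarity to the course

    # Score each term by its similarity-weighted TF-IDF mass across documents.
    scores = (Xd * sim[:, None]).sum(axis=0)
    terms = np.array(vec.get_feature_names_out())
    print("candidate knowledge points:", terms[np.argsort(scores)[::-1][:5]])
    ```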

  16. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  18. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that enforce the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
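
    For orientation, the sketch below implements the classical Richardson-Lucy iteration in one dimension, with the total flux renormalized at each step; it is a generic textbook version under assumed test data, not the interior-point algorithms studied in the paper.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(y, psf, n_iter=100):
        """Richardson-Lucy deconvolution with non-negativity and constant flux."""
        flux = y.sum()
        x = np.full_like(y, flux / y.size)            # flat, non-negative start
        psf_mirror = psf[::-1]
        for _ in range(n_iter):
            blur = fftconvolve(x, psf, mode="same") + 1e-12
            x *= fftconvolve(y / blur, psf_mirror, mode="same")
            x = np.clip(x, 0.0, None)
            x *= flux / x.sum()                       # re-impose the constant-flux constraint
        return x

    if __name__ == "__main__":
        truth = np.zeros(200); truth[60] = 1.0; truth[140] = 0.5
        psf = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2); psf /= psf.sum()
        y = fftconvolve(truth, psf, mode="same") \
            + 1e-3 * np.abs(np.random.default_rng(0).standard_normal(200))
        x = richardson_lucy(y, psf)
        print("brightest reconstructed pixels:", np.argsort(x)[-4:])
    ```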

  19. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    PubMed

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, and disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.

  20. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate, and at the same time simplest, models both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delayed correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled error in mode selection that increases with the mode time scale. In this report we combine these two methods in such a way that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and at the same time simplest ("optimal") models of climate systems
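
    To make the MSSA building block concrete, the sketch below forms the lagged (block-Hankel) trajectory matrix of a multichannel series and takes its SVD, whose singular value spectrum corresponds to the variance spectrum the abstract refers to; the toy data and window length are assumptions for illustration only.

    ```python
    import numpy as np

    def mssa_spectrum(X, window):
        """Multichannel SSA step: embed each channel with a lag window,
        stack the lagged copies, and return the normalized variance spectrum."""
        n_time, n_chan = X.shape
        n_cols = n_time - window + 1
        rows = []
        for c in range(n_chan):
            for lag in range(window):
                rows.append(X[lag:lag + n_cols, c])
        traj = np.array(rows)                       # (n_chan * window, n_cols) trajectory matrix
        s = np.linalg.svd(traj, compute_uv=False)
        return s ** 2 / np.sum(s ** 2)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(500)
        # Two "spatial" channels sharing a slow oscillation plus noise.
        X = np.column_stack([np.sin(2 * np.pi * t / 60),
                             0.5 * np.sin(2 * np.pi * t / 60 + 1.0)])
        X += 0.1 * rng.standard_normal(X.shape)
        spec = mssa_spectrum(X, window=40)
        print("leading fractions of variance:", np.round(spec[:4], 3))
    ```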

  1. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  2. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.

  3. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with large n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.

  4. Efficient detection and recognition algorithm of reference points in photogrammetry

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Liu, Gang; Zhu, Lichun; Li, Xiaofeng; Zhang, Yuhai; Shan, Siyu

    2016-04-01

    In photogrammetry, an approach for the automatic detection and recognition of reference points has been proposed to meet the requirements of detecting and matching reference points. The reference points used here are CCTs (circular coded targets), which consist of two parts: a round target point in the central region and a circular encoding band in the surrounding region. Firstly, the contours of the image are extracted, after which noise and disturbances are filtered out by means of a series of criteria, such as the area of the contours and the correlation coefficient between two contour regions. Secondly, cubic spline interpolation is adopted to process the central contour region of the CCT. The contours of the interpolated image are extracted again, and then least-squares ellipse fitting is performed to calculate the center coordinates of the CCT. Finally, the encoded value is obtained from the angle information of the circular encoding band of the CCT. Experimental results show that the presented algorithm locates the CCT with sub-pixel precision, and the recognition accuracy remains high even when the image background is complex and full of disturbances. In addition, the algorithm is robust and its runtime is short.

  5. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
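
    The standard CCD point-source signal-to-noise expression underlying such a simulation can be sketched as below; the numerical values are placeholders, not parameters of the Wide Field/Planetary Camera.

    ```python
    import numpy as np

    def point_source_snr(rate_src, rate_sky, rate_dark, read_noise, n_pix, t):
        """CCD SNR for a point source: signal counts over the quadrature sum of
        source shot noise, background/dark shot noise, and read noise."""
        signal = rate_src * t
        noise = np.sqrt(signal + n_pix * (rate_sky * t + rate_dark * t + read_noise ** 2))
        return signal / noise

    if __name__ == "__main__":
        t = np.linspace(1.0, 3600.0, 200)             # exposure times in seconds
        snr = point_source_snr(rate_src=5.0,          # e-/s from the star (assumed)
                               rate_sky=0.5,          # e-/s/pixel sky + cosmic background
                               rate_dark=0.02,        # e-/s/pixel dark current
                               read_noise=5.0,        # e- RMS per pixel per read
                               n_pix=9, t=t)
        print("SNR at 60 s and 3600 s:",
              round(float(np.interp(60.0, t, snr)), 1), round(float(snr[-1]), 1))
    ```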

  6. Greedy reconstruction algorithm for fluorescence molecular tomography by means of truncated singular value decomposition conversion.

    PubMed

    Shi, Junwei; Cao, Xu; Liu, Fei; Zhang, Bin; Luo, Jianwen; Bai, Jing

    2013-03-01

    Fluorescence molecular tomography (FMT) is a promising imaging modality that enables three-dimensional visualization of fluorescent targets in vivo in small animals. L2-norm regularization methods are usually used for severely ill-posed FMT problems. However, the smoothing effects caused by these methods result in a continuous distribution that lacks high-frequency edge-type features and hence limits the resolution of FMT. In this paper, the sparsity of FMT reconstruction results is exploited via compressed sensing (CS). First, in order to ensure the feasibility of CS for the FMT inverse problem, a truncated singular value decomposition (TSVD) conversion is applied to the measurement matrix of the FMT problem. Then an improved stagewise orthogonal matching pursuit, a greedy algorithm with gradually shrinking thresholds and a specific halting condition, is developed for the FMT inverse problem. To evaluate the proposed algorithm, we compared it with a TSVD method based on L2-norm regularization in numerical simulations and phantom experiments. The results show that the proposed algorithm obtains higher spatial resolution and a higher signal-to-noise ratio than the TSVD method.

  7. A propagating mode extraction algorithm for microwave waveguide using variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Yin, Aijun; Ren, Hongji

    2015-09-01

    A propagating-mode extraction algorithm is proposed for microwave waveguides using variational mode decomposition (VMD). The reflected signal acquired by the waveguide can be seen as a mixture of the propagating mode and evanescent modes. The propagating mode contains information regarding defects, while the evanescent modes can be treated as noise. By using VMD, the propagating mode can be extracted. Current decomposition models are mostly limited by a lack of mathematical theory, by recursive sifting that does not allow backward error correction, or by an inability to cope properly with noise. In VMD, the bands are determined adaptively and the corresponding modes are estimated concurrently. An ensemble of modes is derived, and these modes collectively reproduce the input signal while each is smoothed after demodulation into baseband. The proposed model is particularly robust to sampling and noise. The bridge between the physical and mathematical models is demonstrated. A coated-steel defect detection experiment is conducted using an X-band open-ended rectangular waveguide to evaluate the efficacy of the VMD method. Two samples are examined: the steel sample with a hole has a regular, clear defect, whereas the defect in the peened steel sample is fuzzy. For both samples, the VMD results accurately identify the defects.

  8. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm.

    PubMed

    Shin, Hyun-Ho; Yoon, Woong-Sup

    2008-07-01

    An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids so the liquid boiling problem was avoided with no separate use of temperature controlling methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit with strong evaporation on the surface of the liquids, while rendering a slender jet. The elevation of the injection speed and the wall temperature causes an activation of the surface evaporation concurrent with reduction in the jet breakup length and the drop size.

  9. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DEMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DEMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  10. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. Raw data complexity, as given by scanner restitution, leads to several problems in designing and 3D modelling from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, requiring more and more sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form out of some definite geometric/architectonic shapes. Such a process consists of several steps: first the definition of plane sections and characterization of their architecture; secondly the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, … ). The fourth and last step is the construction of a best shape defined by the comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectonical section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.

  11. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
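
    A toy version of the time-ordered reachability computation is sketched below: because edges only go forward in time (the upper block-triangular structure), earliest arrival times can be propagated with a single sweep over the time steps. The wind-driven edge sets here are arbitrary stand-ins, not mission data.

    ```python
    def earliest_arrival(edges_per_step, start, n_steps):
        """edges_per_step[t] maps a location to the locations reachable at t+1.
        A single forward sweep suffices because motion is monotone in time."""
        arrival = {start: 0}
        frontier = {start}
        for t in range(n_steps):
            nxt = set()
            for loc in frontier:
                for dest in edges_per_step[t].get(loc, []):
                    if dest not in arrival:        # first (earliest) time we can be there
                        arrival[dest] = t + 1
                        nxt.add(dest)
            frontier = nxt | frontier              # staying put is allowed in this toy model
        return arrival

    if __name__ == "__main__":
        # Hypothetical wind-driven transitions between four surface cells over three steps.
        edges = [
            {"A": ["B"], "B": ["C"]},
            {"B": ["D"], "C": ["D"]},
            {"D": ["A"]},
        ]
        print(earliest_arrival(edges, start="A", n_steps=3))
    ```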

  12. Point of Care and Factor Concentrate-Based Coagulation Algorithms

    PubMed Central

    Theusinger, Oliver M.; Stein, Philipp; Levy, Jerrold H.

    2015-01-01

    In the last years it has become evident that the use of blood products should be reduced whenever possible. There is increasing evidence regarding serious adverse events, including higher mortality and morbidity, related to transfusions. The use of point of care (POC) devices integrated in algorithms is one of the important mechanisms to limit blood product exposure. Any type of algorithm, especially the POC-based ones, allows goal-directed transfusions of blood products and even better targeted factor concentrate substitutions. Different types of algorithms in different surgical settings (cardiac surgery, trauma, liver surgery etc.) have been established with growing interest in their use as they offer objective therapy for management and reduction of blood product use. The use of POC devices with evidence-based algorithms is important in the bleeding patient independent of its origin (traumatic vs. surgical). The use of factor concentrates compared to the classical blood products can be cost-saving, beneficial for the patient, and in agreement with the WHO-requested standard of care. The empiric and uncontrolled use of blood products such as fresh frozen plasma, red blood cells, and platelets without POC monitoring should no longer be followed with regard to actual evidence in literature. Furthermore, the use of factor concentrates may provide better outcomes and potential for cost saving. PMID:26019707

  13. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.

  14. New Advances in the Study of the Proximal Point Algorithm

    NASA Astrophysics Data System (ADS)

    Moroşanu, Gheorghe

    2010-09-01

    Consider in a real Hilbert space H the inexact, Halpern-type proximal point algorithm x_{n+1} = α_n u + (1 - α_n) J_{β_n} x_n + e_n, n = 0, 1, …, (H-PPA), where u, x_0 ∈ H are given points, J_{β_n} = (I + β_n A)^{-1} is the resolvent of a given maximal monotone operator A, and (e_n) is the error sequence, under new assumptions on α_n ∈ (0,1) and β_n ∈ (0,1). Several strong convergence results for the H-PPA are presented under the general condition that the error sequence converges strongly to zero, thus improving the classical Rockafellar summability condition on (‖e_n‖) that has been used extensively so far for different versions of the proximal point algorithm. Our results extend and improve some recent ones. These results can be applied to approximate minimizers of convex functionals. Convergence rate estimates are established for a sequence approximating the minimum value of such a functional.
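
    For a concrete feel, the sketch below runs a Halpern-type proximal point iteration for the simple case A = ∇f with f(x) = ½(x − 3)², whose resolvent has the closed form (x + 3β)/(1 + β); the step choices, the error term, and the quadratic example are illustrative assumptions, not the paper's setting.

    ```python
    def halpern_ppa(u, x0, n_iter=2000):
        """x_{n+1} = a_n*u + (1-a_n)*J_{b_n}(x_n) + e_n for f(x) = 0.5*(x-3)^2,
        whose resolvent (proximal map) is J_b(x) = (x + 3*b) / (1 + b)."""
        x = x0
        for n in range(n_iter):
            a_n = 1.0 / (n + 2)                    # anchoring weights, a_n -> 0
            b_n = 0.5                              # proximal parameter in (0, 1)
            e_n = 1e-3 / (n + 1) ** 2              # errors that tend strongly to zero
            resolvent = (x + 3.0 * b_n) / (1.0 + b_n)
            x = a_n * u + (1.0 - a_n) * resolvent + e_n
        return x

    if __name__ == "__main__":
        # Iterates should approach the unique minimizer x* = 3 of f.
        print(round(halpern_ppa(u=10.0, x0=-5.0), 4))
    ```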

  15. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    PubMed

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing the arterial stiffness and systemic resistance of the cardiac system suffer from nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on the cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and blood pressure (BP). The experiment simulated a sequence of continuous changes of blood pressure, from a steady condition to high blood pressure by clamping the artery and the inverse by relaxing it. The hypothesis was that arterial stiffness and systemic resistance should vary with the blood pressure as the artery is clamped and relaxed. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on the cardiac oscillation. The two assessments demonstrate the merits of EEMD for signal analysis.

  16. Parallel algorithm for computing points on a computation front hyperplane

    NASA Astrophysics Data System (ADS)

    Krasnov, M. M.

    2015-01-01

    A parallel algorithm for computing points on a computation front hyperplane is described. This task arises in the computation of a quantity defined on a multidimensional rectangular domain. Three-dimensional domains are usually discussed, but the material is given in general form for any number of dimensions of at least two. When the values of a quantity at different points are internally independent (which is frequently the case), the corresponding computations are independent as well and can be performed in parallel. However, if there are internal dependences (as, for example, in the Gauss-Seidel method for systems of linear equations), then the order in which the points of the domain are scanned is an important issue. A conventional approach in this case is to form a computation front hyperplane (a usual plane in the three-dimensional case and a line in the two-dimensional case) that moves linearly across the domain at a certain angle. At every step in the course of motion of this hyperplane, its intersection points with the domain can be treated independently and, hence, in parallel, but the steps themselves are executed sequentially. At different steps, the intersection of the hyperplane with the entire domain can have a rather complex geometry, and the search for all points of the domain lying on the hyperplane at a given step is a nontrivial problem. This problem (i.e., the computation of the coordinates of points lying in the intersection of the domain with the hyperplane at a given step in the course of hyperplane motion) is addressed below. The computations over the points of the hyperplane can be executed in parallel.
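
    In the common case of a rectangular grid with dependences on the previous index in each direction, the front at step s is simply the set of points with i + j + k = s; the sketch below enumerates that set (a simplified assumption about the front's orientation, not the paper's general construction), and the inner loop over points could be distributed across workers.

    ```python
    def front_points_3d(nx, ny, nz, s):
        """Points (i, j, k) of an nx-by-ny-by-nz grid with i + j + k == s."""
        pts = []
        for i in range(max(0, s - (ny - 1) - (nz - 1)), min(nx - 1, s) + 1):
            for j in range(max(0, s - i - (nz - 1)), min(ny - 1, s - i) + 1):
                pts.append((i, j, s - i - j))
        return pts

    if __name__ == "__main__":
        nx, ny, nz = 4, 3, 3
        for s in range(nx + ny + nz - 2):
            pts = front_points_3d(nx, ny, nz, s)
            # In a real solver, each point on the current front could be updated in parallel here.
            print(s, len(pts))
    ```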

  17. A Three-level BDDC algorithm for saddle point problems

    SciTech Connect

    Tu, X.

    2008-12-10

    BDDC algorithms have previously been extended to the saddle point problems arising from mixed formulations of elliptic and incompressible Stokes problems. In these two-level BDDC algorithms, all iterates are required to be in a benign space, a subspace in which the preconditioned operators are positive definite. This requirement can lead to large coarse problems, which have to be generated and factored by a direct solver at the beginning of the computation and can ultimately become a bottleneck. An additional level is introduced in this paper to solve the coarse problem approximately and to remove this difficulty. The three-level BDDC algorithm keeps all iterates in the benign space, so the conjugate gradient method can be used to accelerate the convergence. This work is an extension of the three-level BDDC methods for standard finite element discretizations of elliptic problems, and the same rate of convergence is obtained for the mixed formulation of the same problems. An estimate of the condition number for the three-level BDDC method is provided and numerical experiments are discussed.

  18. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    PubMed

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. PMID:26953177
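
    For reference, the standard spectral matched filter and adaptive cosine estimator scores mentioned above can be computed as below; the synthetic background statistics and target signature are assumptions for illustration, and this is not the MIF pre/post-processing proposed in the paper.

    ```python
    import numpy as np

    def matched_filter(X, target, mu, cov_inv):
        """Classical spectral matched filter score for each pixel (row of X)."""
        d = X - mu
        s = target - mu
        return (d @ cov_inv @ s) / np.sqrt(s @ cov_inv @ s)

    def ace(X, target, mu, cov_inv):
        """Adaptive cosine estimator: squared cosine angle in whitened space."""
        d = X - mu
        s = target - mu
        num = (d @ cov_inv @ s) ** 2
        den = (s @ cov_inv @ s) * np.einsum("ij,jk,ik->i", d, cov_inv, d)
        return num / den

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        bands, n_pix = 30, 2000
        background = rng.standard_normal((n_pix, bands)) @ rng.standard_normal((bands, bands)) * 0.1
        target = np.linspace(0.0, 1.0, bands)              # assumed plume signature
        X = background.copy()
        X[:50] += 0.3 * target                             # implant a weak plume in 50 pixels
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(bands))
        scores = matched_filter(X, target, mu, cov_inv)
        print("mean MF score, plume vs background:",
              round(scores[:50].mean(), 2), round(scores[50:].mean(), 2))
    ```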

  19. A maximum power point tracking algorithm for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. Hitherto, however, no exact closed-form solution for the MPP has been published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive, converge quickly, and exhibit minimal misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
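
    As a rough numerical illustration of locating the tangency point where dP/dV = 0 (not the paper's Lagrange-based recursion, and with assumed single-diode parameters rather than MSX-60 data), the sketch below scans the I-V curve of a simple PV model and picks the maximum-power voltage.

    ```python
    import numpy as np

    def pv_current(v, i_ph=3.8, i_0=1e-9, n=1.3, t_cell=298.15, n_s=36):
        """Single-diode PV model (series/shunt resistances neglected)."""
        v_t = n * n_s * 1.380649e-23 * t_cell / 1.602176634e-19   # thermal voltage of the string
        return i_ph - i_0 * (np.exp(v / v_t) - 1.0)

    if __name__ == "__main__":
        v = np.linspace(0.0, 26.0, 4000)
        i = np.clip(pv_current(v), 0.0, None)
        p = v * i
        k = p.argmax()                                   # dP/dV changes sign here
        print(f"MPP approx: V = {v[k]:.2f} V, I = {i[k]:.2f} A, P = {p[k]:.1f} W")
    ```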

  20. Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter J.

    2016-10-01

    Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes, able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of (Demmel et al. in SIAM J Sci Comput 34(1):206-239, 2012), which is further used as a basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimum required communication makes it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
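
    The essence of the TSQR idea can be sketched serially as below: the tall-and-skinny matrix is split into row blocks, each block gets its own QR, the small R factors are stacked and reduced by one more QR, and the SVD of the final R yields the singular values of the full matrix. In the paper this reduction is performed in parallel across processors; the code here is only a single-process illustration.

    ```python
    import numpy as np

    def tsqr_r(A, n_blocks=4):
        """R factor of a tall-and-skinny matrix via blockwise QR + one reduction QR."""
        Rs = [np.linalg.qr(block, mode="r") for block in np.array_split(A, n_blocks)]
        return np.linalg.qr(np.vstack(Rs), mode="r")

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((10000, 20))            # far more rows than columns
        R = tsqr_r(A)                                   # 20 x 20, cheap to post-process
        s_tsqr = np.linalg.svd(R, compute_uv=False)     # singular values of A from R alone
        s_ref = np.linalg.svd(A, compute_uv=False)
        print(np.allclose(np.sort(s_tsqr), np.sort(s_ref)))
    ```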

  2. [Application of three-way data analysis (second-order tensor decomposition) algorithms in analysis of liquid chromatography].

    PubMed

    Zhang, Jin; Peng, Qianrong; Xu, Longquan; Yang, Min; Wu, Aijing; Ye, Shizhu

    2014-11-01

    Using dropline separation, tangent skimming, or triangulation to estimate the area of an overlapping chromatographic peak can lead to large deviations. It is easy, however, to eliminate these errors caused by geometric segmentation by using three-way data analysis (second-order tensor decomposition) algorithms. This method of chromatographic analysis has many advantages: automation, resistance to interference, and high accuracy in the resolution of overlapping chromatographic peaks. It even makes the final goal of analytical chemistry achievable without the aid of complicated separation procedures. The core of this method is the process of utilizing useful information and building models through chemometric algorithms. Three-way chromatographic data sets can be divided into trilinear and nontrilinear data sets; correspondingly, three-way data analysis (second-order tensor decomposition) algorithms can be divided into trilinear and nontrilinear algorithms. In this paper, three-way calibration used in liquid chromatography for complex chemical systems over the last decade is reviewed, with a focus on sample pretreatment, auxiliary algorithms, and the combination and comparison of correction algorithms. PMID:25764649

  3. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description will contain no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
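
    As an illustration of the geometric kernel such a projection relies on, the sketch below computes the closest point on a single triangle and then takes the minimum over a list of triangles by brute force; the paper's contribution lies in organizing and accelerating this search over a coarse global triangulation, which is not reproduced here.

    ```python
    # Closest point on a triangle, the building block of point-to-surface projection.
    import numpy as np

    def closest_point_on_segment(p, a, b):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return a + t * ab

    def closest_point_on_triangle(p, a, b, c):
        # Barycentric coordinates of the projection of p onto the triangle's plane.
        ab, ac, ap = b - a, c - a, p - a
        d00, d01, d11 = np.dot(ab, ab), np.dot(ab, ac), np.dot(ac, ac)
        d20, d21 = np.dot(ap, ab), np.dot(ap, ac)
        denom = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        if v >= 0.0 and w >= 0.0 and v + w <= 1.0:
            return a + v * ab + w * ac       # projection falls inside the triangle
        # Otherwise the closest point lies on the boundary: test all three edges.
        candidates = [closest_point_on_segment(p, a, b),
                      closest_point_on_segment(p, b, c),
                      closest_point_on_segment(p, c, a)]
        return min(candidates, key=lambda q: np.linalg.norm(p - q))

    def project_onto_triangulation(p, triangles):
        """Brute-force projection of p onto a list of (a, b, c) vertex triples."""
        return min((closest_point_on_triangle(p, *t) for t in triangles),
                   key=lambda q: np.linalg.norm(p - q))

    tri = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
    print(closest_point_on_triangle(np.array([0.2, 0.2, 1.0]), *tri))   # [0.2, 0.2, 0.0]
    ```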

  4. [Removal Algorithm of Power Line Interference in Electrocardiogram Based on Morphological Component Analysis and Ensemble Empirical Mode Decomposition].

    PubMed

    Zhao, Wei; Xiao, Shixiao; Zhang, Baocan; Huang, Xiaojing; You, Rongyi

    2015-12-01

    Electrocardiogram (ECG) signals are susceptible to disturbance by 50 Hz power line interference (PLI) in the process of acquisition and conversion. This paper, therefore, proposes a novel PLI removal algorithm based on morphological component analysis (MCA) and ensemble empirical mode decomposition (EEMD). Firstly, according to the morphological differences in ECG waveform characteristics, the noisy ECG signal was decomposed into the mutated component, the smooth component and the residual component by MCA. Secondly, the intrinsic mode functions (IMFs) of the PLI were filtered. The noise suppression rate (NSR) and the signal distortion ratio (SDR) were used to evaluate the effect of the de-noising algorithm. Finally, the ECG signals were re-constructed. Based on the experimental comparison, it was concluded that the proposed algorithm had better filtering performance than the improved Levkov algorithm, because it could not only effectively filter the PLI but also achieve a smaller SDR value. PMID:27079083

  5. Validation of the pulse decomposition analysis algorithm using central arterial blood pressure

    PubMed Central

    2014-01-01

    Background There is a significant need for continuous noninvasive blood pressure (cNIBP) monitoring, especially for anesthetized surgery and ICU recovery. cNIBP systems could lower costs and expand the use of continuous blood pressure monitoring, lowering risk and improving outcomes. The test system examined here is the CareTaker® and a pulse contour analysis algorithm, Pulse Decomposition Analysis (PDA). PDA’s premise is that the peripheral arterial pressure pulse is a superposition of five individual component pressure pulses that are due to the left ventricular ejection and reflections and re-reflections from only two reflection sites within the central arteries. The hypothesis examined here is that the model’s principal parameters P2P1 and T13 can be correlated with, respectively, systolic and pulse pressures. Methods Central arterial blood pressures of patients (38 m/25 f, mean age: 62.7 y, SD: 11.5 y, mean height: 172.3 cm, SD: 9.7 cm, mean weight: 86.8 kg, SD: 20.1 kg) undergoing cardiac catheterization were monitored using central line catheters while the PDA parameters were extracted from the arterial pulse signal obtained non-invasively using the CareTaker system. Results Qualitative validation of the model was achieved with the direct observation of the five component pressure pulses in the central arteries using central line catheters. Statistically significant correlations between P2P1 and systole and T13 and pulse pressure were established (systole: R square: 0.92 (p < 0.0001); diastole: R square: 0.78 (p < 0.0001)). Bland-Altman comparisons between blood pressures obtained through the conversion of PDA parameters to blood pressures of non-invasively obtained pulse signatures with catheter-obtained blood pressures fell within the trend guidelines of the Association for the Advancement of Medical Instrumentation SP-10 standard (standard deviation: 8 mmHg (systole: 5.87 mmHg, diastole: 5.69 mmHg)). Conclusions The results indicate that arterial

  6. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated choice of acupuncture points for EHF-therapy is offered. The prescription formed by the algorithm for the automated choice of points for acupunctural action is of a recommendational character. Clinical investigations showed that application of the developed algorithm in EHF-therapy makes it possible to normalize the energetic state of the meridians and to effectively solve many problems of organism functioning.

  7. LIFT: a nested decomposition algorithm for solving lower block triangular linear programs. Report AMD-859. [In PL/I for IBM 370

    SciTech Connect

    Ament, D; Ho, J; Loute, E; Remmelswaal, M

    1980-06-01

    Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)

  8. Image encryption algorithm based on wavelet packet decomposition and discrete linear canonical transform

    NASA Astrophysics Data System (ADS)

    Sharma, K. K.; Jain, Heena

    2013-01-01

    The security of digital data including images has attracted increasing attention recently, and many different image encryption methods have been proposed in the literature for this purpose. In this paper, a new image encryption method using wavelet packet decomposition and the discrete linear canonical transform (DLCT) is proposed. The use of wavelet packet decomposition and the DLCT increases the key size significantly, making the encryption more robust. Simulation results of the proposed technique are also presented.

  9. Technical Note: MRI only prostate radiotherapy planning using the statistical decomposition algorithm

    SciTech Connect

    Siversson, Carl; Nordström, Fredrik; Nilsson, Terese; Nyholm, Tufve; Jonsson, Joakim; Gunnlaugsson, Adalsteinn; Olsson, Lars E.

    2015-10-15

    Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations, directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2 weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to assure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean average error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. Average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning. The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties

  10. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
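
    The exact form of the Schur complement system described above can be written down directly for a dense 2 x 2 block matrix. The sketch below is dense and serial, whereas the paper builds parallel approximations of the same objects: it eliminates the interior block, solves the interface system, and back-substitutes.

    ```python
    # Dense sketch of the non-overlapping Schur-complement solve:
    # unknowns ordered as (subdomain interiors I, interface G).
    import numpy as np

    def schur_solve(A_II, A_IG, A_GI, A_GG, f_I, f_G):
        """Solve [[A_II, A_IG], [A_GI, A_GG]] [u_I; u_G] = [f_I; f_G]."""
        A_II_inv_A_IG = np.linalg.solve(A_II, A_IG)
        A_II_inv_f_I = np.linalg.solve(A_II, f_I)
        # Schur complement coupling the interface unknowns.
        S = A_GG - A_GI @ A_II_inv_A_IG
        u_G = np.linalg.solve(S, f_G - A_GI @ A_II_inv_f_I)   # interface solve
        u_I = A_II_inv_f_I - A_II_inv_A_IG @ u_G              # back-substitution
        return u_I, u_G

    rng = np.random.default_rng(1)
    n_I, n_G = 8, 3
    n = n_I + n_G
    A = rng.standard_normal((n, n)) + n * np.eye(n)           # well-conditioned test matrix
    f = rng.standard_normal(n)
    u_I, u_G = schur_solve(A[:n_I, :n_I], A[:n_I, n_I:], A[n_I:, :n_I], A[n_I:, n_I:],
                           f[:n_I], f[n_I:])
    print(np.allclose(np.concatenate([u_I, u_G]), np.linalg.solve(A, f)))  # True
    ```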

  11. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  12. Formulation and error analysis for a generalized image point correspondence algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  13. A Novel Tracking Algorithm via Feature Points Matching

    PubMed Central

    Luo, Nan; Sun, Quansen; Chen, Qiang; Ji, Zexuan; Xia, Deshen

    2015-01-01

    Visual target tracking is a primary task in many computer vision applications and has been widely studied in recent years. Among all the tracking methods, the mean shift algorithm has attracted extraordinary interest and been well developed in the past decade due to its excellent performance. However, it is still challenging for color-histogram-based algorithms to deal with complex target tracking. Therefore, algorithms based on other distinguishing features are highly desirable. In this paper, we propose a novel target tracking algorithm based on mean shift theory, in which a new type of image feature is introduced and utilized to find the corresponding region between neighboring frames. The target histogram is created by clustering the features obtained in the extraction strategy. Then, the mean shift process is adopted to calculate the target location iteratively. Experimental results demonstrate that the proposed algorithm can deal with challenging tracking situations such as partial occlusion, illumination change, scale variations, object rotation and complex background clutter. Meanwhile, it outperforms several state-of-the-art methods. PMID:25617769

  14. Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm

    SciTech Connect

    Xiao Li; Jones, Jonathan A.

    2005-09-15

    We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored.

  15. A Comprehensive Noise Robust Speech Parameterization Algorithm Using Wavelet Packet Decomposition-Based Denoising and Speech Feature Representation Techniques

    NASA Astrophysics Data System (ADS)

    Kotnik, Bojan; Kačič, Zdravko

    2007-12-01

    This paper concerns the problem of automatic speech recognition in noise-intense and adverse environments. The main goal of the proposed work is the definition, implementation, and evaluation of a novel noise robust speech signal parameterization algorithm. The proposed procedure is based on time-frequency speech signal representation using wavelet packet decomposition. A new modified soft thresholding algorithm based on time-frequency adaptive threshold determination was developed to efficiently reduce the level of additive noise in the input noisy speech signal. A two-stage Gaussian mixture model (GMM)-based classifier was developed to perform speech/nonspeech as well as voiced/unvoiced classification. The adaptive topology of the wavelet packet decomposition tree based on voiced/unvoiced detection was introduced to separately analyze voiced and unvoiced segments of the speech signal. The main feature vector consists of a combination of log-root compressed wavelet packet parameters, and autoregressive parameters. The final output feature vector is produced using a two-staged feature vector postprocessing procedure. In the experimental framework, the noisy speech databases Aurora 2 and Aurora 3 were applied together with corresponding standardized acoustical model training/testing procedures. The automatic speech recognition performance achieved using the proposed noise robust speech parameterization procedure was compared to the standardized mel-frequency cepstral coefficient (MFCC) feature extraction procedures ETSI ES 201 108 and ETSI ES 202 050.
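
    A minimal sketch of the soft-thresholding idea at the heart of such a front end is shown below, using the PyWavelets package (assumed available) with a plain discrete wavelet decomposition and a single global universal threshold; the paper instead uses a wavelet packet tree with a time-frequency adaptive threshold and a voiced/unvoiced-dependent topology, which is not reproduced here.

    ```python
    # Generic wavelet-domain soft-threshold denoising sketch (not the paper's
    # wavelet packet / adaptive-threshold scheme).
    import numpy as np
    import pywt

    def wavelet_soft_denoise(x, wavelet='db4', level=4):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        # Universal threshold estimated from the finest detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[:len(x)]

    fs = 8000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 200 * t)
    noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(fs)
    print(np.std(noisy - clean), np.std(wavelet_soft_denoise(noisy) - clean))
    ```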

  16. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  17. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    PubMed Central

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-01-01

    The field of complex network clustering has been gaining considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures. The evolutionary algorithm is carried out within the membrane structures. The population is eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are to be minimized. Extensive experimental comparison with state-of-the-art algorithms proves that the proposed algorithm is effective and promising. PMID:27670156

  18. An algorithm for point cluster generalization based on the Voronoi diagram

    NASA Astrophysics Data System (ADS)

    Yan, Haowen; Weibel, Robert

    2008-08-01

    This paper presents an algorithm for point cluster generalization. Four types of information, i.e. statistical, thematic, topological, and metric information, are considered, and measures are selected to describe the corresponding types of information quantitatively in the algorithm, i.e. the number of points for statistical information, the importance value for thematic information, the Voronoi neighbors for topological information, and the distribution range and relative local density for metric information. Based on these measures, an algorithm for point cluster generalization is developed. Firstly, the point clusters are triangulated and a border polygon of the point clusters is obtained. Using the border polygon, some pseudo points are added to the original point clusters to form a new point set, and a range polygon that encloses all original points is constructed. Secondly, the Voronoi polygons of the new point set are computed in order to obtain the so-called relative local density of each point. Further, the selection probability of each point is computed using its relative local density and importance value, and then the points to be deleted are marked as 'deleted' according to their selection probabilities and Voronoi neighboring relations. Thirdly, if the number of retained points does not satisfy that computed by the Radical Law, the points marked as 'deleted' are physically deleted to form a new point set and the second step is repeated; otherwise, the pseudo points and the points marked as 'deleted' are physically deleted, and the generalized point clusters are achieved. Owing to the use of the Voronoi diagram, the algorithm is parameter-free and fully automatic. As our experiments show, it can be used in the generalization of point features arranged in clusters such as thematic dot maps and control points on cartographic maps.
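
    The metric step, a relative local density derived from Voronoi cell areas and combined with an importance value into a selection probability, can be sketched with SciPy's Voronoi routine as below. The border-polygon and pseudo-point construction of the paper is omitted, so only points with bounded Voronoi cells receive a density here, and the exact form of the combination is an assumption for illustration.

    ```python
    # Relative local density from Voronoi cell areas (bounded cells only).
    import numpy as np
    from scipy.spatial import Voronoi

    def polygon_area(vertices):
        x, y = vertices[:, 0], vertices[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    def selection_probability(points, importance):
        vor = Voronoi(points)
        density = np.full(len(points), np.nan)        # NaN for unbounded cells
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if -1 not in region and len(region) > 2:  # bounded Voronoi cell
                density[i] = 1.0 / polygon_area(vor.vertices[region])
        rel_density = density / np.nanmean(density)   # relative local density
        prob = rel_density * importance               # combine with importance value
        return prob / np.nansum(prob)

    pts = np.random.default_rng(3).random((50, 2))
    imp = np.ones(50)                                 # equal thematic importance
    print(selection_probability(pts, imp))
    ```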

  19. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    The maximum power point tracking control is the key link to improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable-step-size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and simulation model of the buoy-rope-drum WEC are presented in detail, along with simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
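
    A generic variable-step-size Perturb and Observe loop of the kind described can be sketched as follows. The power-classification standard and the buoy-rope-drum WEC model of the paper are not reproduced, and measure_power is a hypothetical callback standing in for the plant model.

    ```python
    # Variable-step-size Perturb and Observe MPPT sketch (generic plant).
    def perturb_and_observe(measure_power, u0=1.0, du0=0.05, gain=0.5, steps=200):
        u, du = u0, du0
        p_prev = measure_power(u)                   # hypothetical plant callback
        for _ in range(steps):
            u += du
            p = measure_power(u)
            dp = p - p_prev
            # Variable step size: a larger power change drives a larger perturbation.
            step = max(abs(gain * dp), 1e-4)
            du = step if dp * du > 0 else -step     # keep direction if power increased
            p_prev = p
        return u

    # Toy plant with a single maximum at u = 3 (stand-in for the WEC power curve).
    print(perturb_and_observe(lambda u: -(u - 3.0) ** 2 + 9.0))   # converges near 3
    ```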

  20. An ISAR imaging algorithm for the space satellite based on empirical mode decomposition theory

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Dong, Chun-zhu

    2014-11-01

    Currently, high-resolution imaging of space satellites is a popular topic in the field of radar technology. In contrast with regular targets, a satellite target often moves along its trajectory while its solar panel substrate simultaneously changes its orientation toward the sun to obtain energy. To address this imaging problem, a signal separation and imaging approach based on empirical mode decomposition (EMD) theory is proposed; the approach can separate the signals of the two parts of the satellite target, the main body and the solar panel substrate, and form an image of the target. Simulation experiments demonstrate the validity of the proposed method.

  1. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms.

    PubMed

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration. PMID:27036770

  2. Improved scaling of time-evolving block-decimation algorithm through reduced-rank randomized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Tamascelli, D.; Rosenbach, R.; Plenio, M. B.

    2015-06-01

    When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, makes use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency using some real-world examples to which TEBD can be successfully applied and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
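
    The reduced-rank randomized SVD underlying this speed-up follows the well-known Halko-Martinsson-Tropp recipe: sample the range of the matrix with a Gaussian test matrix, orthonormalize, and take a small exact SVD in the reduced space. The sketch below is a generic NumPy version with illustrative rank and oversampling values, not the routine used inside the authors' TEBD code.

    ```python
    # Generic reduced-rank randomized SVD (range sampling + small exact SVD).
    import numpy as np

    def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Omega = rng.standard_normal((n, rank + n_oversample))
        Y = A @ Omega                                # sample the range of A
        for _ in range(n_power_iter):                # power iterations sharpen the estimate
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)                       # orthonormal basis of the sampled range
        B = Q.T @ A                                  # small (rank + oversample) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return Q @ Ub[:, :rank], s[:rank], Vt[:rank, :]

    rng = np.random.default_rng(4)
    A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 300))   # rank-30 matrix
    U, s, Vt = randomized_svd(A, rank=30)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))   # close to machine precision
    ```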

  3. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges which are oriented along the vertical, horizontal, and two diagonal directions. However, it incorrectly detected points on edges that are not aligned with these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as

  4. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to the traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  5. Parallel algorithm of generating set points for a manipulator with straight line and circular motions

    NASA Astrophysics Data System (ADS)

    Lai, Jim Z. C.; Chao, Ming

    1992-06-01

    A parallel algorithm of generating set points in Cartesian space for a manipulator with straight-line and circular motions is described. This algorithm is developed for parallel computation and does not have the problem of the wobbling approach vector that affects many techniques. When the scheme is executed serially, the computing time is about two-thirds that of the conventional technique.

  6. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation.

    PubMed

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-11-30

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model.

  7. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  8. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation.

    PubMed

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  9. A path-following interior-point algorithm for linear and quadratic problems

    SciTech Connect

    Wright, S.J.

    1993-12-01

    We describe an algorithm for the monotone linear complementarity problem that converges for many positive, not necessarily feasible, starting points and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.

  10. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT.

    PubMed

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to the direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of the HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.

  11. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to the direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of the HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
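
    For reference, the direct-matrix-inversion baseline mentioned above amounts to a per-pixel 2 x 2 linear solve; a minimal sketch is given below with illustrative, uncalibrated mixing coefficients. HYPR-NLM adds the edge-preserving non-local-mean filtering on top of this, which is not reproduced here.

    ```python
    # Image-domain two-material decomposition via direct per-pixel matrix inversion.
    import numpy as np

    # Rows: (low-kVp, high-kVp) response of each basis material; values are illustrative.
    M = np.array([[1.00, 4.50],
                  [1.00, 2.20]])          # shape: (2 energies, 2 materials)
    M_inv = np.linalg.inv(M)

    def decompose(low_kvp_img, high_kvp_img):
        stacked = np.stack([low_kvp_img, high_kvp_img], axis=-1)   # (..., 2)
        materials = stacked @ M_inv.T                              # per-pixel 2x2 inversion
        return materials[..., 0], materials[..., 1]

    low = np.random.default_rng(6).random((64, 64))
    high = np.random.default_rng(7).random((64, 64))
    material_a_img, material_b_img = decompose(low, high)
    ```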

  12. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    NASA Astrophysics Data System (ADS)

    Nikolić, Zoran; Nguyen, Ha Thai; Frantz, Gene

    2007-12-01

    Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
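
    A small example of the float-to-fixed-point conversion discussed above, using the common Q15 format, is sketched below; it shows the quantization step and a dot product with a wide accumulator, which is the pattern most ported linear algebra kernels rely on.

    ```python
    # Q15 fixed-point conversion and dot product sketch.
    import numpy as np

    Q = 15  # Q15: 1 sign bit, 15 fractional bits

    def to_q15(x):
        return np.clip(np.round(np.asarray(x) * (1 << Q)), -32768, 32767).astype(np.int32)

    def q15_dot(a_q, b_q):
        acc = np.int64(0)
        for ai, bi in zip(a_q, b_q):
            acc += np.int64(ai) * np.int64(bi)      # 16x16 products, wide accumulator
        return float(acc) / float(1 << (2 * Q))     # back to a real value for comparison

    a = np.array([0.25, -0.5, 0.125, 0.75])
    b = np.array([0.5, 0.25, -0.625, 0.1])
    print(np.dot(a, b), q15_dot(to_q15(a), to_q15(b)))   # floating vs fixed point
    ```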

  13. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogenous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are in the order of a half second on a desktop computer. The precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature. PMID:12502302

  14. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogenous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are in the order of a half second on a desktop computer. The precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature.

  15. Algorithmic implementations of domain decomposition methods for the diffraction simulation of advanced photomasks

    NASA Astrophysics Data System (ADS)

    Adam, Konstantinos; Neureuther, Andrew R.

    2002-07-01

    The domain decomposition method developed in [1] is examined in more detail. This method enables rapid computer simulation of advanced photomask (alt. PSM, masks with OPC) scattering and transmission properties. Compared to 3D computer simulation, speed-up factors of approximately 400, and up to approximately 200,000 when using the look-up table approach, are possible. Combined with the spatial frequency properties of projection printing systems, it facilitates accurate computer simulation of the projected image (normalized mean square error of a typical image is only a fraction of 1%). Some esoteric accuracy issues of the method are addressed and the way to handle arbitrary, Manhattan-type mask layouts is presented. The method is shown to be valid for off-axis incidence. The cross-talk model developed in [1] is used in 3D mask simulations (2D layouts).

  16. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes.

  17. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
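
    For contrast, the classical ICP loop that IMLP generalizes can be sketched in a few lines: a k-d-tree closest-point correspondence step alternated with the SVD (Kabsch) solution of the rigid least-squares alignment. The per-point noise model and most-likely correspondences of IMLP are not reproduced here.

    ```python
    # Classical ICP sketch: closest-point correspondences + SVD rigid registration.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(P, Q):
        """Least-squares R, t mapping points P onto corresponding points Q."""
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
        R = Vt.T @ D @ U.T
        return R, q_mean - R @ p_mean

    def icp(source, target, n_iter=30):
        tree = cKDTree(target)
        R_total, t_total = np.eye(3), np.zeros(3)
        moved = source.copy()
        for _ in range(n_iter):
            _, idx = tree.query(moved)                    # closest-point correspondences
            R, t = best_rigid_transform(moved, target[idx])
            moved = moved @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total

    theta = 0.05
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    src = np.random.default_rng(9).random((200, 3))
    tgt = src @ R_true.T + np.array([0.02, -0.01, 0.03])
    R_est, t_est = icp(src, tgt)
    print(np.linalg.norm(src @ R_est.T + t_est - tgt))    # near zero for this small misalignment
    ```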

  18. Parameter Space of Fixed Points of the Damped Driven Pendulum Susceptible to Control of Chaos Algorithms

    NASA Astrophysics Data System (ADS)

    Dittmore, Andrew; Trail, Collin; Olsen, Thomas; Wiener, Richard J.

    2003-11-01

    We have previously demonstrated the experimental control of chaos in a Modified Taylor-Couette system with hourglass geometry (Richard J. Wiener et al., Phys. Rev. Lett. 83, 2340 (1999)). Identifying fixed points susceptible to algorithms for the control of chaos is key. We seek to learn about this process in the accessible numerical model of the damped, driven pendulum. Following Baker (Gregory L. Baker, Am. J. Phys. 63, 832 (1995)), we seek points susceptible to the OGY (E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990)) algorithm. We automate the search for fixed points that are candidates for control. We present comparisons of the space of candidate fixed points with the bifurcation diagrams and Poincare sections of the system. We demonstrate control at fixed points which do not appear on the attractor. We also show that the control algorithm may be employed to shift the system between non-communicating branches of the attractor.

  19. A Gabor subband decomposition ICA and MRF hybrid algorithm for infrared image reconstruction from subpixel shifted sequences

    NASA Astrophysics Data System (ADS)

    Yi-nan, Chen; Wei-qi, Jin; Ling-Xue, Wang; Lei, Zhao; Hong-sheng, Yu

    2009-03-01

    Blind image reconstruction, posed as a blind source separation problem, has recently been addressed by independent component analysis (ICA). Based on ICA theory, in this paper a high-resolution image is reconstructed from low-resolution, subpixel-shifted sequences captured by an infrared microscan imaging system. The algorithm has the attractive feature that neither prior knowledge of the blur kernel nor the value of the subpixel misregistrations between the input channels is required. The statistical independence in the image domain is improved by multiscale Gabor subband decompositions, which are designed for the best ability to cover the whole spatial frequency range and to avoid overlapping between the subbands. The mutual information is employed to locate the subband with the least dependent components. In terms of a MAP estimator, we combine the super-Gaussian model with a Markov random field to form a hybrid image distribution. This strategy helps to estimate a separating matrix able to extract sources with the desired image properties, that is, sharp as well as locally correlated. The proposed algorithm is capable of recovering high-resolution image sources which are not strictly independent, and its viability is demonstrated by computer simulations and real experiments.

  20. Unsupervised classification of polarimetric SAR images using complex Wishart distribution based on H/α decomposition and algorithm evaluation

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Yang, Ran

    2007-11-01

    The authors introduce an unsupervised Wishart classification technique for fully polarimetric SAR data using the H/α decomposition of POLSAR images. In this paper we apply this technique to AIRSAR data of Flevoland, Netherlands. The most valuable part of this paper is the evaluation. The algorithm and the results it produced are evaluated from the following three aspects. (i) By calculating the Jeffries-Matusita distance (J-M distance) J_mn between two classes, which represents the separation between classes, the performance of the classifier is measured. The J-M distance is a measurement of the average difference between the probability distribution functions (PDFs) of two classes. The J-M distance usually lies between 0 and 2, and a larger J-M distance indicates better separation between the two classes. In this paper most J-M distances are 1.8-2.0, which indicates good separation. (ii) According to the average entropy and alpha of each final class, the classification results are analyzed. (iii) By comparing the classification results with the ground truth, the classification algorithm is evaluated. The results agree well with the ground truth. The experiments, according to these measurement criteria, analyses and evaluations, demonstrate that the Flevoland region is classified well and that the method has the advantage of edge preservation, which is helpful in the case of non-smooth borders. Also, this paper gives a better repeat time.
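
    The Jeffries-Matusita distance used in aspect (i) is commonly computed from the Bhattacharyya distance between two Gaussian class models; a minimal sketch of that standard Gaussian form (not the complex Wishart case) is given below.

    ```python
    # Jeffries-Matusita distance between two Gaussian classes via Bhattacharyya distance.
    import numpy as np

    def jeffries_matusita(mean1, cov1, mean2, cov2):
        cov_avg = 0.5 * (cov1 + cov2)
        diff = mean1 - mean2
        bhattacharyya = (diff @ np.linalg.solve(cov_avg, diff) / 8.0
                         + 0.5 * np.log(np.linalg.det(cov_avg)
                                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
        return 2.0 * (1.0 - np.exp(-bhattacharyya))   # lies in [0, 2]

    m1, m2 = np.array([0.0, 0.0]), np.array([4.0, 4.0])
    c = np.eye(2)
    print(jeffries_matusita(m1, c, m2, c))   # about 1.96, i.e. well-separated classes
    ```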

  1. Seismic small-scale discontinuity sparsity-constraint inversion method using a penalty decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jingtao; Peng, Suping; Du, Wenfeng

    2016-02-01

    We consider a sparsity-constraint inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these seismic small-scale discontinuities are hard to identify when using currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smooth features, we propose an effective L2-L0 norm model for improvement of their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast-converging penalty decomposition method is employed. The proposed method can achieve a significant improvement in enhancing seismic small-scale discontinuities. Numerical experiments and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
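
    A penalty decomposition iteration for a generic L2-L0 model can be sketched as below: the variable is split into a copy, the copy is penalized quadratically, and the two subproblems (a linear solve and a hard threshold) are alternated. The plane-wave destruction and structure-enhancing filter of the paper are not reproduced, and the operator, data, and parameters are illustrative only.

    ```python
    # Penalty decomposition sketch for  min_x ||A x - b||^2 + lam * ||x||_0.
    import numpy as np

    def penalty_decomposition_l0(A, b, lam=0.1, rho=1.0, n_iter=200):
        m, n = A.shape
        x, z = np.zeros(n), np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        for _ in range(n_iter):
            # x-step: quadratic subproblem (A^T A + rho I) x = A^T b + rho z
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * z)
            # z-step: hard thresholding, the closed-form minimizer of
            #         lam * ||z||_0 + rho * ||x - z||^2 over z
            z = np.where(x ** 2 > lam / rho, x, 0.0)
        return z

    rng = np.random.default_rng(8)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, 5, replace=False)] = 3.0
    b = A @ x_true
    x_hat = penalty_decomposition_l0(A, b)
    print(np.flatnonzero(x_hat))          # support of the recovered sparse model
    ```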

  2. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  3. A point-cloud-based multiview stereo algorithm for free-viewpoint video.

    PubMed

    Liu, Yebin; Dai, Qionghai; Xu, Wenli

    2010-01-01

    This paper presents a robust multiview stereo (MVS) algorithm for free-viewpoint video. Our MVS scheme is totally point-cloud-based and consists of three stages: point cloud extraction, merging, and meshing. To guarantee reconstruction accuracy, point clouds are first extracted according to a stereo matching metric which is robust to noise, occlusion, and lack of texture. Visual hull information, frontier points, and implicit points are then detected and fused with point fidelity information in the merging and meshing steps. All aspects of our method are designed to counteract potential challenges in MVS data sets for accurate and complete model reconstruction. Experimental results demonstrate that our technique produces the most competitive performance among current algorithms under sparse viewpoint setups according to both static and motion MVS data sets.

  4. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  5. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
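
    For comparison with the O(1) approach, the standard O(log N) test mentioned in the abstract can be sketched as a binary search over the triangle fan of a convex polygon (counter-clockwise vertex order assumed); the paper's space-subdivision preprocessing is not reproduced here.

    ```python
    # Standard O(log N) point-in-convex-polygon test (CCW vertex order assumed).
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def point_in_convex_polygon(p, poly):
        n = len(poly)
        # Quick rejection against the wedge spanned at vertex 0.
        if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
            return False
        # Binary search for the fan triangle (v0, v[lo], v[lo+1]) containing p.
        lo, hi = 1, n - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if cross(poly[0], poly[mid], p) >= 0:
                lo = mid
            else:
                hi = mid
        return cross(poly[lo], poly[lo + 1], p) >= 0

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(point_in_convex_polygon((1.0, 1.0), square),   # True
          point_in_convex_polygon((3.0, 1.0), square))   # False
    ```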

  6. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.

    PubMed

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  8. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  9. Performance Evaluation of Different Ground Filtering Algorithms for Uav-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Serifoglu, C.; Gungor, O.; Yilmaz, V.

    2016-06-01

    Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, in the literature, various ground filtering algorithms have been reported to filter the LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from the aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms: Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance. The MLS and ETEW algorithms were found to be the least successful. It was concluded that the point clouds generated from UAVs can be a good alternative to LiDAR data.

  10. Peak load demand forecasting using two-level discrete wavelet decomposition and neural network algorithm

    NASA Astrophysics Data System (ADS)

    Bunnoon, Pituk; Chalermyanont, Kusumal; Limsakul, Chusak

    2010-02-01

    This paper proposes discrete wavelet transform and neural network algorithms to obtain the monthly peak load demand in mid-term load forecasting. The mother wavelet Daubechies 2 (db2) is employed to decompose the original signal into high-pass and low-pass filtered components before a feed-forward back-propagation neural network is used to determine the forecasting results. The historical data records for 1997-2007 of the Electricity Generating Authority of Thailand (EGAT) are used as reference. In this study, historical information on peak load demand (MW), mean temperature (Tmean), consumer price index (CPI), and industrial index (economic: IDI) are used as feature inputs of the network. The experimental results show that the Mean Absolute Percentage Error (MAPE) is approximately 4.32%. These forecasting results can be used for fuel planning and unit commitment of the power system in the future.
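
    As a rough illustration of the decomposition step only, the sketch below applies a two-level db2 discrete wavelet transform with the PyWavelets package to a synthetic monthly load series; the EGAT data, the exogenous inputs and the neural-network forecasting stage are not reproduced, and all numbers are made up.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Synthetic monthly peak-load series (a stand-in for the EGAT data, which is not reproduced here).
    rng = np.random.default_rng(0)
    months = np.arange(120)
    load = 1000 + 5 * months + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 20, months.size)

    # Two-level discrete wavelet decomposition with the Daubechies-2 mother wavelet.
    # wavedec returns [cA2, cD2, cD1]: the level-2 approximation (low-pass) coefficients
    # and the level-2 and level-1 detail (high-pass) coefficients.
    cA2, cD2, cD1 = pywt.wavedec(load, 'db2', level=2)

    print("approximation coeffs:", cA2.shape)
    print("detail coeffs:", cD2.shape, cD1.shape)

    # Each coefficient band (plus exogenous inputs such as temperature, CPI, IDI) could then
    # be fed to a feed-forward neural network and the forecasts recombined with pywt.waverec.
    ```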

  11. An optimized structure on FPGA of key point description in SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Chenyu; Peng, Jinlong; Zhu, En; Zou, Yuxin

    2015-12-01

    The SIFT algorithm is one of the most significant and effective algorithms for describing image features in the field of image matching. Implementing the SIFT algorithm in a hardware environment is clearly valuable but difficult. In this paper, we mainly discuss the realization of the key point description step of the SIFT algorithm, along with the matching process. For key point description, we propose a new method of generating histograms that avoids the rotation of adjacent regions and ensures rotational invariance. For matching, we replace the conventional Euclidean distance with the Hamming distance. The results of the experiments fully prove that the structure we propose is real-time, accurate, and efficient. Future work is still needed to improve its performance in harsher conditions.
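
    To illustrate the idea of replacing Euclidean distance with Hamming distance in the matching step, the sketch below matches binarized descriptors by counting bit differences; the binarization scheme and the array sizes are assumptions for illustration, not the paper's FPGA encoding.

    ```python
    import numpy as np

    def binarize(descriptors):
        """Binarize real-valued descriptors by thresholding at their per-vector median.
        (An illustrative scheme; the paper's FPGA-oriented encoding may differ.)"""
        return (descriptors > np.median(descriptors, axis=1, keepdims=True)).astype(np.uint8)

    def hamming_match(query, database):
        """Return, for each query descriptor, the database index with the smallest Hamming distance."""
        # XOR marks differing bits; counting them gives the Hamming distance.
        dists = np.count_nonzero(query[:, None, :] ^ database[None, :, :], axis=2)
        return np.argmin(dists, axis=1), np.min(dists, axis=1)

    rng = np.random.default_rng(1)
    db = binarize(rng.normal(size=(100, 128)))   # 100 database descriptors, 128-D like SIFT
    q = db[[3, 42, 7]].copy()                    # queries copied from the database
    q[0, :4] ^= 1                                # flip a few bits to simulate noise
    idx, d = hamming_match(q, db)
    print(idx, d)  # expected indices ~ [3, 42, 7] with small distances
    ```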

  12. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a Digital Elevation Model (DEM) of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to distinguish terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure takes a long time and large computing resources because of the high point density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was used as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithm's efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
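
    As a minimal sketch of the Map/Reduce idea behind the DEM gridding (not the actual Hadoop implementation), the toy code below bins ground points into grid cells in a map step and averages the elevations per cell in a reduce step; the cell size and sample points are invented.

    ```python
    from collections import defaultdict

    CELL = 1.0  # DEM grid cell size in metres (illustrative value)

    def map_point(point):
        """Map step: emit (grid-cell key, elevation) for one LiDAR ground point (x, y, z)."""
        x, y, z = point
        return (int(x // CELL), int(y // CELL)), z

    def reduce_cell(elevations):
        """Reduce step: collapse all elevations that fell into one cell to a single DEM value."""
        return sum(elevations) / len(elevations)

    def dem_from_points(points):
        groups = defaultdict(list)
        for p in points:                # "map" + shuffle
            key, z = map_point(p)
            groups[key].append(z)
        return {key: reduce_cell(zs) for key, zs in groups.items()}  # "reduce"

    pts = [(0.2, 0.3, 10.0), (0.8, 0.1, 10.4), (1.6, 0.2, 11.2)]
    print(dem_from_points(pts))  # {(0, 0): 10.2, (1, 0): 11.2}
    ```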

  13. Change Detection from differential airborne LiDAR using a weighted Anisotropic Iterative Closest Point Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Kusari, A.; Glennie, C. L.; Oskin, M. E.; Hinojosa-Corona, A.; Borsa, A. A.; Arrowsmith, R.

    2013-12-01

    Differential LiDAR (Light Detection and Ranging) from repeated surveys has recently emerged as an effective tool to measure three-dimensional (3D) change for applications such as quantifying slip and spatially distributed warping associated with earthquake ruptures, and examining the spatial distribution of beach erosion after hurricane impact. Currently, the primary method for determining 3D change is through the use of the iterative closest point (ICP) algorithm and its variants. However, all current studies using ICP have assumed that all LiDAR points in the compared point clouds have uniform accuracy. This assumption is simplistic given that the error for each LiDAR point is variable, and dependent upon highly variable factors such as target range, angle of incidence, and aircraft trajectory accuracy. Therefore, to rigorously determine spatial change, it would be ideal to model the random error for every LiDAR observation in the differential point cloud, and use these error estimates as a priori weights in the ICP algorithm. To test this approach, we implemented a rigorous LiDAR observation error propagation method to generate estimated random error for each point in a LiDAR point cloud, and then determined 3D displacements between two point clouds using an anisotropically weighted ICP algorithm. The algorithm was evaluated by qualitatively and quantitatively comparing post-earthquake slip estimates from the 2010 El Mayor-Cucapah Earthquake between a uniformly weighted and an anisotropically weighted ICP algorithm, using pre-event LiDAR collected in 2006 by Instituto Nacional de Estadística y Geografía (INEGI), and post-event LiDAR collected by The National Center for Airborne Laser Mapping (NCALM).

  14. Modeling heterogeneous materials via two-point correlation functions. II. Algorithmic details and applications.

    PubMed

    Jiao, Y; Stillinger, F H; Torquato, S

    2008-03-01

    In the first part of this series of two papers, we proposed a theoretical formalism that enables one to model and categorize heterogeneous materials (media) via two-point correlation functions S(2) and introduced an efficient heterogeneous-medium (re)construction algorithm called the "lattice-point" algorithm. Here we discuss the algorithmic details of the lattice-point procedure and an algorithm modification using surface optimization to further speed up the (re)construction process. The importance of the error tolerance, which indicates to what accuracy the media are (re)constructed, is also emphasized and discussed. We apply the algorithm to generate three-dimensional digitized realizations of a Fontainebleau sandstone and a boron-carbide/aluminum composite from the two-dimensional tomographic images of their slices through the materials. To ascertain whether the information contained in S(2) is sufficient to capture the salient structural features, we compute the two-point cluster functions of the media, which are superior signatures of the microstructure because they incorporate topological connectedness information. We also study the reconstruction of a binary laser-speckle pattern in two dimensions, in which the algorithm fails to reproduce the pattern accurately. We conclude that in general reconstructions using S(2) only work well for heterogeneous materials with single-scale structures. However, two-point information via S(2) is not sufficient to accurately model multiscale random media. Moreover, we construct realizations of hypothetical materials with desired structural characteristics obtained by manipulating their two-point correlation functions.

  15. Prostate tissue decomposition via DECT using the model based iterative image reconstruction algorithm DIRA

    NASA Astrophysics Data System (ADS)

    Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun

    2014-03-01

    Better knowledge of elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm DIRA has been developed by the authors to automatically decompose patient tissues to two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate surrounding soft tissues.

  16. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified point cloud from airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider only ALS points representing the last echo. For these points, RGB, intensity, the normal vectors, their mean values and their standard deviations are provided. Moreover, local and global height variations are taken into account as components of the feature vector. The feature vectors are calculated on the basis of a 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we obtained two subsets of ALS points, one of which represents points belonging to the road network. After the classification evaluation we achieved an overall classification accuracy of about 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
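
    A minimal sketch of the supervised classification stage, assuming scikit-learn's RandomForestClassifier and synthetic placeholder feature vectors rather than the paper's actual ALS features; it only illustrates the train/classify/evaluate flow.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    n = 2000

    # Placeholder per-point feature vectors (e.g. intensity, mean RGB, normal-z, height variations).
    X = rng.normal(size=(n, 5))
    # Synthetic labels: 1 = road, 0 = other, loosely tied to two of the features for illustration.
    y = ((X[:, 0] + 0.5 * X[:, 3]) > 0.3).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("overall accuracy:", accuracy_score(y_te, pred))
    ```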

  17. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points from the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the selected policy, which equalizes the algorithm complexity, is shown to be the most optimized policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we gain perfect efficiency using MAS and a policy of algorithm complexity equalization.
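
    For orientation only, the sketch below approximates the largest-inner-circle center (the "deepest point") by a brute-force grid search over the polygon using Shapely; this is not the paper's Voronoi/MAS algorithm, and the lake polygon and grid resolution are invented.

    ```python
    import numpy as np
    from shapely.geometry import Polygon, Point

    def approx_deepest_point(poly, resolution=100):
        """Approximate the largest-inner-circle center of a polygon by a brute-force grid search:
        among grid points inside the polygon, keep the one farthest from the boundary."""
        minx, miny, maxx, maxy = poly.bounds
        best_pt, best_d = None, -1.0
        for x in np.linspace(minx, maxx, resolution):
            for y in np.linspace(miny, maxy, resolution):
                p = Point(x, y)
                if poly.contains(p):
                    d = poly.exterior.distance(p)   # distance to the polygon boundary
                    if d > best_d:
                        best_pt, best_d = p, d
        return best_pt, best_d   # center estimate and LIC radius estimate

    # A simple L-shaped "lake" polygon
    lake = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)])
    center, radius = approx_deepest_point(lake)
    print(center, radius)
    ```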

  19. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration

    PubMed Central

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points from the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the selected policy, which equalizes the algorithm complexity, is shown to be the most optimized policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we gain perfect efficiency using MAS and a policy of algorithm complexity equalization. PMID:26656598

  20. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.

  1. A hybrid algorithm for multiple change-point detection in continuous measurements

    NASA Astrophysics Data System (ADS)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both the sequential techniques and the Cross-Entropy method to estimate the number of change points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.

  2. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    PubMed Central

    Mora-Pascual, Jerónimo M.; García-García, Alberto; Martínez-González, Pablo

    2016-01-01

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the extent that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, thus rendering some of its applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low computational cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results prove that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results. PMID:27768714
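
    The sketch below illustrates the matching phase with interchangeable Minkowski metrics (Euclidean, Manhattan, Chebyshev) via SciPy's cKDTree; the point clouds are synthetic and any speed-up will depend on the implementation, so the reported 14% figure is not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    model = rng.uniform(size=(5000, 3))                     # reference point cloud
    scene = model + rng.normal(0, 0.01, size=model.shape)   # noisy copy to be matched

    tree = cKDTree(model)

    # Matching phase of ICP: nearest model point for every scene point,
    # under three Minkowski metrics (p=2 Euclidean, p=1 Manhattan, p=inf Chebyshev).
    for name, p in [("Euclidean", 2), ("Manhattan", 1), ("Chebyshev", np.inf)]:
        dist, idx = tree.query(scene, p=p)
        print(f"{name:9s}: mean match distance = {dist.mean():.4f}")
    ```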

  3. Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction

    NASA Technical Reports Server (NTRS)

    Velusamy, T.; Marsh, K. A.; Ware, B.

    2005-01-01

    TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.

  4. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. In order to show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, which is a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance in comparison with the Gaussian one and improves the overall performance of the IRST system. By analyzing the performance of the proposed algorithm in a quantitative manner, this new framework shows at least a 20% improvement in the output SCR values in comparison with the Laplacian of Gaussian (LoG) algorithm.
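
    As a baseline illustration of the Laplacian-of-Gaussian (LoG) detection scheme that the framework is compared against (not the authors' PSF-based framework), the sketch below filters a synthetic frame containing one dim point target; the target amplitude, sigma and threshold are arbitrary choices.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    rng = np.random.default_rng(3)

    # Synthetic IR frame: flat background with noise plus a dim point target near (40, 60).
    frame = rng.normal(100, 2, size=(128, 128))
    yy, xx = np.mgrid[0:128, 0:128]
    frame += 15 * np.exp(-(((yy - 40) ** 2 + (xx - 60) ** 2) / (2 * 1.5 ** 2)))  # blurred point source

    # Laplacian-of-Gaussian response; the negated LoG peaks at blob-like bright targets.
    response = -gaussian_laplace(frame, sigma=1.5)

    # Simple detection: threshold the response at mean + k * std.
    thresh = response.mean() + 6 * response.std()
    detections = np.argwhere(response > thresh)
    print("detected pixels (row, col):", detections[:5])
    ```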

  5. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    The terrestrial laser scanning (TLS) technique is becoming a common tool in Geosciences, with clear applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters interact with the scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built in the MATLAB environment a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewpoint on the Iterative Closest Point (ICP) alignment and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high-resolution point clouds in order to model small changes in different environments.

  6. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  7. An affine point-set and line invariant algorithm for photo-identification of gray whales

    NASA Astrophysics Data System (ADS)

    Chandan, Chandan; Kehtarnavaz, Nasser; Hillman, Gilbert; Wursig, Bernd

    2004-05-01

    This paper presents an affine point-set and line invariant algorithm within a statistical framework, and its application to photo-identification of gray whales (Eschrichtius robustus). White patches (blotches) appearing on a gray whale's left and right flukes (the flattened broad paddle-like tail) constitute unique identifying features and have been used here for individual identification. The fluke area is extracted from a fluke image via the live-wire edge detection algorithm, followed by optimal thresholding of the fluke area to obtain the blotches. Affine point-set and line invariants of the blotch points are extracted based on three reference points, namely the left and right tips and the middle notch-like point on the fluke. A set of statistics is derived from the invariant values and used as the feature vector representing a database image. The database images are then ranked depending on the degree of similarity between the query and database feature vectors. The results show that the use of this algorithm leads to a reduction in the amount of manual search that is normally done by marine biologists.

  8. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information

    PubMed Central

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  9. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information.

    PubMed

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  10. A rapid and robust iterative closest point algorithm for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Barbiere, Joseph; Hanley, Joseph

    2008-03-01

    Our work presents a rapid and robust process that can analytically evaluate and correct patient setup error for head and neck radiotherapy by comparing orthogonal megavoltage portal images (PIs) with digitally reconstructed radiographs (DRRs). For robust data, Photoshop is used to interactively segment images and to register reference contours to the transformed PI. MatLab is used for matrix computations and image analysis. The closest point distance (CPD) from each PI point to a DRR point forms a set of homologous points. The translation that aligns the PI to the DRR is equal to the difference in centers of mass. The original PI points are transformed and the process repeated with an Iterative Closest Point algorithm until the change in the transformation becomes negligible. Using a 3.00 GHz processor, the calculation of the 2500x1750 CPD matrix takes about 150 sec per iteration. Standard down-sampling to about 1000 DRR and 250 PI points significantly reduces that time. We introduce a local neighborhood matrix consisting of a small subset of the DRR points in the vicinity of each PI point to further reduce the CPD matrix size. Our results demonstrate the effects of down-sampling on accuracy. For validation, detailed analytical results are displayed as a histogram.
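
    A minimal 2D sketch of the described loop, closest-point matching followed by a shift equal to the difference in centers of mass, iterated until the update is negligible; the data are synthetic and the Photoshop/MatLab pipeline, local neighborhood matrix and down-sampling strategy are not reproduced.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def translation_icp(pi_pts, drr_pts, max_iter=50, tol=1e-6):
        """Translation-only ICP in the spirit of the described method:
        match each PI point to its closest DRR point, shift by the difference
        in centers of mass, and repeat until the update is negligible."""
        tree = cKDTree(drr_pts)
        total_shift = np.zeros(pi_pts.shape[1])
        pts = pi_pts.copy()
        for _ in range(max_iter):
            _, idx = tree.query(pts)                       # closest-point correspondences
            shift = drr_pts[idx].mean(axis=0) - pts.mean(axis=0)
            pts += shift
            total_shift += shift
            if np.linalg.norm(shift) < tol:
                break
        return total_shift

    rng = np.random.default_rng(0)
    drr = rng.uniform(0, 100, size=(1000, 2))              # contour points from the DRR
    true_offset = np.array([3.2, -1.7])                    # simulated setup error
    pi = drr[rng.choice(len(drr), 250, replace=False)] - true_offset + rng.normal(0, 0.1, (250, 2))

    print("estimated setup error:", translation_icp(pi, drr))  # should be close to (3.2, -1.7)
    ```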

  11. An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1981-01-01

    An algorithm for minimum-cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that conserve coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.

  12. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.

  13. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  14. Thickness Gauging of Single-Layer Conductive Materials with Two-Point Non Linear Calibration Algorithm

    NASA Technical Reports Server (NTRS)

    Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)

    1998-01-01

    A thickness gauging instrument uses a flux focusing eddy current probe and a two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.

  15. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.

  16. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  17. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  18. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  20. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  1. Thermal decomposition of energetic materials. 5. reaction processes of 1,3,5-trinitrohexahydro-s-triazine below its melting point.

    PubMed

    Maharrey, Sean; Behrens, Richard

    2005-12-15

    Through the use of simultaneous thermogravimetry modulated beam mass spectrometry, optical microscopy, hot-stage time-lapsed microscopy, and scanning electron microscopy measurements, the physical and chemical processes that control the thermal decomposition of 1,3,5-trinitrohexahydro-s-triazine (RDX) below its melting point (160-189 degrees C) have been identified. Two gas-phase reactions of RDX are predominant during the early stages of an experiment. One involves the loss of HONO and HNO and leads to the formation of H2O, NO, NO2, and oxy-s-triazine (OST) or s-triazine. The other involves the reaction of NO with RDX to form NO2 and 1-nitroso-3,5-dinitrohexahydro-s-triazine (ONDNTA), which subsequently decomposes to form a set of products of which CH2O and N2O are the most abundant. Products from the gas-phase RDX decomposition reactions, such as ONDNTA, deposit on the surface of the RDX particles and lead to the development of a new set of reaction pathways that occur on the surface of the RDX particles. The initial surface reactions occur on surfaces of those RDX particles in the sample that can accumulate the greatest amount of products from the gas-phase reactions. Initial surface reactions are characterized by the formation of islands of reactivity on the RDX surface and lead to the development of an orange-colored nonvolatile residue (NVR) film on the surface of the RDX particles. The NVR film is most likely formed via the decomposition of ONDNTA on the surface of the RDX particles. The NVR film is a nonstoichiometric and dynamic material, which reacts directly with RDX and ONDNTA, and is composed of remnants from RDX and ONDNTA molecules that have reacted with the NVR. Reactions involving the NVR become dominant during the later stage of the decomposition process. The NVR reacts with RDX to form ONDNTA via abstraction of an oxygen atom from an NO2 group. ONDNTA may undergo rapid loss of N2 and NO2 with the remaining portion of the molecule being

  2. TU-F-18A-04: Use of An Image-Based Material-Decomposition Algorithm for Multi-Energy CT to Determine Basis Material Densities

    SciTech Connect

    Li, Z; Leng, S; Yu, L; McCollough, C

    2014-06-15

    Purpose: Published methods for image-based material decomposition with multi-energy CT images have required the assumption of volume conservation or accurate knowledge of the x-ray spectra and detector response. The purpose of this work was to develop an image-based material-decomposition algorithm that can overcome these limitations. Methods: An image-based material decomposition algorithm was developed that requires only mass conservation (rather than volume conservation). With this method, using multi-energy CT measurements made with n=4 energy bins, the mass density of each basis material and of the mixture can be determined without knowledge of the tube spectra and detector response. A digital phantom containing 12 samples of mixtures of water, calcium, iron, and iodine was used in the simulation (Siemens DRASIM). The calibration was performed using pure materials at each energy bin. The accuracy of the technique was evaluated on noise-free and noisy data under the assumption of an ideal photon-counting detector. Results: Basis material densities can be estimated accurately by either theoretical calculation or calibration with known pure materials. The calibration approach requires no prior information about the spectra and detector response. Regression analysis of theoretical versus estimated values shows excellent agreement for both noise-free and noisy data. For the calibration approach, the R-squared values are 0.9960 +/- 0.0025 and 0.9476 +/- 0.0363 for noise-free and noisy data, respectively. Conclusion: From multi-energy CT images with n=4 energy bins, the developed image-based material decomposition method accurately estimated 4 basis material densities (3 without a k-edge and 1 with a k-edge in the range of the simulated energy bins) even without any prior information about spectra and detector response. This method is applicable to mixtures of solutions and dissolvable materials, where volume conservation assumptions do not apply.
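
    As a toy illustration of image-domain decomposition with a calibrated response (not the published algorithm's exact formulation), the sketch below builds a hypothetical per-bin response matrix from "pure material" calibration and recovers basis mass densities for one voxel by least squares; all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_bins, n_basis = 4, 4          # 4 energy bins, 4 basis materials (e.g. water, Ca, Fe, I)

    # Hypothetical calibration: scanning pure materials of known density gives, for each
    # basis material, the measured value in every energy bin per unit mass density.
    A = rng.uniform(0.5, 2.0, size=(n_bins, n_basis))   # per-bin response matrix (made-up numbers)

    # Simulated voxel: a mixture with known mass densities of each basis material (g/cm^3).
    rho_true = np.array([0.9, 0.08, 0.0, 0.02])
    measurement = A @ rho_true + rng.normal(0, 0.005, n_bins)   # noisy multi-energy measurement

    # Image-domain decomposition: recover the basis densities by least squares, voxel by voxel.
    rho_est, *_ = np.linalg.lstsq(A, measurement, rcond=None)
    print("true:     ", rho_true)
    print("estimated:", np.round(rho_est, 3))
    ```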

  3. An Error Analysis of the Phased Array Antenna Pointing Algorithm for STARS Flight Demonstration No. 2

    NASA Technical Reports Server (NTRS)

    Carney, Michael P.; Simpson, James C.

    2005-01-01

    STARS is a multicenter NASA project to determine the feasibility of using space-based assets, such as the Tracking and Data Relay Satellite System (TDRSS) and Global Positioning System (GPS), to increase flexibility (e.g. increase the number of possible launch locations and manage simultaneous operations) and to reduce operational costs by decreasing the need for ground-based range assets and infrastructure. The STARS project includes two major systems: the Range Safety and Range User systems. The latter system uses broadband communications (125 kbps to 500 kbps) for voice, video, and vehicle/payload data. Flight Demonstration #1 revealed the need to increase the data rate of the Range User system. During Flight Demo #2, a Ku-band antenna will generate a higher data rate and will be designed with an embedded pointing algorithm to guarantee that the antenna is pointed directly at TDRS. This algorithm will utilize the onboard position and attitude data to point the antenna to TDRS within a 2-degree full-angle beamwidth. This report investigates how errors in aircraft position and attitude, along with errors in satellite position, propagate into the overall pointing vector.

  4. Floating-Point Units and Algorithms for field-programmable gate arrays

    SciTech Connect

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix-vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW package (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL, and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and

  6. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  7. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
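
    As a rough illustration of reverse accumulation for fixed-point problems in the spirit of Christianson (1994), the sketch below (a toy contraction map in NumPy, not the land ice stress balance or OpenAD) solves x = f(x, p) and then obtains the adjoint by a second fixed-point iteration with the transposed Jacobian, so no intermediate forward states need to be stored; a finite-difference check confirms the computed sensitivity.

      # Toy reverse accumulation of a fixed-point solve (illustrative assumptions:
      # f is a contraction, J(x) = sum(x), and df/dp is the identity).
      import numpy as np

      rng = np.random.default_rng(0)
      W = 0.2 * rng.normal(size=(4, 4))                       # small enough to be a contraction
      f = lambda x, p: np.tanh(W @ x) + p                     # fixed-point map x = f(x, p)
      dfdx = lambda x: (1 - np.tanh(W @ x)**2)[:, None] * W   # Jacobian of f w.r.t. x

      p = rng.normal(size=4)
      x = np.zeros(4)
      for _ in range(200):                                    # forward fixed-point solve
          x = f(x, p)

      lam, dJdx = np.zeros(4), np.ones(4)                     # adjoint of J(x) = sum(x)
      for _ in range(200):                                    # adjoint fixed-point solve
          lam = dfdx(x).T @ lam + dJdx
      # Since df/dp = I here, dJ/dp = lam; verify one component by finite differences.
      eps = 1e-6
      p2 = p.copy(); p2[0] += eps
      x2 = np.zeros(4)
      for _ in range(200):
          x2 = f(x2, p2)
      print(lam[0], (np.sum(x2) - np.sum(x)) / eps)           # the two values agree closely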

  8. iPoint: an integer programming based algorithm for inferring protein subnetworks.

    PubMed

    Atias, Nir; Sharan, Roded

    2013-07-01

    Large scale screening experiments have become the workhorse of molecular biology, producing data at an ever increasing scale. The interpretation of such data, particularly in the context of a protein interaction network, has the potential to shed light on the molecular pathways underlying the phenotype or the process in question. A host of approaches have been developed in recent years to tackle this reconstruction challenge. These approaches aim to infer a compact subnetwork that connects the genes revealed by the screen while optimizing local (individual path lengths) or global (likelihood) aspects of the subnetwork. Yosef et al. [Mol. Syst. Biol., 2009, 5, 248] were the first to provide a joint optimization of both criteria, albeit approximate in nature. Here we devise an integer linear programming formulation for the joint optimization problem, allowing us to solve it to optimality in minutes on current networks. We apply our algorithm, iPoint, to various data sets in yeast and human and evaluate its performance against state-of-the-art algorithms. We show that iPoint attains very compact and accurate solutions that outperform previous network inference algorithms with respect to their local and global attributes, their consistency across multiple experiments targeting the same pathway, and their agreement with current biological knowledge.

  9. Artifact Removal from Biosignal using Fixed Point ICA Algorithm for Pre-processing in Biometric Recognition

    NASA Astrophysics Data System (ADS)

    Mishra, Puneet; Singla, Sunil Kumar

    2013-01-01

    In the modern world of automation, biological signals, especially Electroencephalogram (EEG) and Electrocardiogram (ECG), are gaining wide attention as a source of biometric information. Earlier studies have shown that EEG and ECG vary between individuals and that every individual has a distinct EEG and ECG spectrum. EEG (which can be recorded from the scalp due to the effect of millions of neurons) may contain noise signals such as eye blink, eye movement, muscular movement, line noise, etc. Similarly, ECG may contain artifacts like line noise, tremor artifacts, baseline wandering, etc. These noise signals must be separated from the EEG and ECG signals to obtain accurate results. This paper proposes a technique for the removal of the eye blink artifact from EEG and ECG signals using the fixed-point or FastICA algorithm of Independent Component Analysis (ICA). For validation, the FastICA algorithm has been applied to a synthetic signal prepared by adding random noise to the Electrocardiogram (ECG) signal. The FastICA algorithm separates the signal into two independent components, i.e. the pure ECG and the artifact signal. Similarly, the same algorithm has been applied to remove the artifacts (Electrooculogram or eye blink) from the EEG signal.
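
    The separation step itself is easy to reproduce in outline. The sketch below (an illustration, not the authors' processing chain) mixes a crude spike train with a noise source and lets scikit-learn's FastICA recover the two independent components; the component identified as the artifact can then be discarded and the signal rebuilt.

      # Illustrative FastICA separation on a synthetic two-channel mixture
      # (assumes NumPy and scikit-learn; the signals here are made up).
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 2000)
      signal = np.sin(2 * np.pi * 1.2 * t) * (np.sin(2 * np.pi * 0.2 * t) > 0.9)  # spiky "ECG-like" source
      artifact = rng.normal(scale=0.5, size=t.size)                               # random-noise source

      # Two observed channels: different linear mixtures of the two sources
      X = np.c_[signal + 0.6 * artifact, 0.8 * signal + artifact]

      ica = FastICA(n_components=2, random_state=0)
      S = ica.fit_transform(X)     # columns estimate the independent components
      # One column tracks the clean signal, the other the artifact; zeroing the
      # artifact column and applying ica.inverse_transform rebuilds the channels.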

  10. Analytical evaluation of algorithms for point cloud surface reconstruction using shape features

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Verbeek, Fons J.

    2013-10-01

    In computer vision and graphics, reconstruction of a three-dimensional surface from a point cloud is a well-studied research area. As the surface contains information that can be measured, the application of surface reconstruction may be potentially important for applications in bioimaging. In the past decade, a number of algorithms for surface reconstruction have been developed. Generally speaking, these algorithms can be separated into two categories: explicit representation and implicit approximation. Most of these algorithms have a sound basis in mathematical theory. However, so far, no analytical evaluation between these algorithms has been presented; evaluation has instead typically relied on visual inspection. Therefore, we design an analytical approach by selecting surface distance, surface area, and surface curvature as three major surface descriptors. We evaluate these features in varied conditions. Our ground truth values are obtained from analytical shapes: the sphere, the ellipsoid, and the oval. Through evaluation we search for a method that can preserve the surface characteristics best and which is robust in the presence of noise. The results obtained from our experiments indicate that the Poisson reconstruction method performs best. This outcome can now be used to produce reliable surface reconstruction of biological models.

  11. Dynamic connectivity detection: an algorithm for determining functional connectivity change points in fMRI data.

    PubMed

    Xu, Yuting; Lindquist, Martin A

    2015-01-01

    Recently there has been an increased interest in using fMRI data to study the dynamic nature of brain connectivity. In this setting, the activity in a set of regions of interest (ROIs) is often modeled using a multivariate Gaussian distribution, with a mean vector and covariance matrix that are allowed to vary as the experiment progresses, representing changing brain states. In this work, we introduce the Dynamic Connectivity Detection (DCD) algorithm, which is a data-driven technique to detect temporal change points in functional connectivity, and estimate a graph between ROIs for data within each segment defined by the change points. DCD builds upon the framework of the recently developed Dynamic Connectivity Regression (DCR) algorithm, which has proven efficient at detecting changes in connectivity for problems consisting of a small to medium (< 50) number of regions, but which runs into computational problems as the number of regions becomes large (>100). The newly proposed DCD method is faster, requires less user input, and is better able to handle high-dimensional data. It overcomes the shortcomings of DCR by adopting a simplified sparse matrix estimation approach and a different hypothesis testing procedure to determine change points. The application of DCD to simulated data, as well as fMRI data, illustrates the efficacy of the proposed method.
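
    A bare-bones version of the underlying idea, detecting a shift in correlation structure with sliding windows, is sketched below (illustrative NumPy code on simulated data; it is not the DCD or DCR procedure and has none of their sparse estimation or hypothesis-testing machinery).

      # Naive sliding-window connectivity change detection on simulated ROIs.
      import numpy as np

      rng = np.random.default_rng(0)
      T, p, win = 400, 10, 50
      X = rng.normal(size=(T, p))                          # 10 "ROIs", independent at first
      X[200:, 1] = 0.9 * X[200:, 0] + 0.1 * X[200:, 1]     # ROIs 0 and 1 couple after t = 200

      corrs = [np.corrcoef(X[t:t + win].T) for t in range(T - win)]
      diffs = [np.linalg.norm(corrs[t + win] - corrs[t]) for t in range(len(corrs) - win)]
      print(int(np.argmax(diffs)) + win)                   # candidate change point, near t = 200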

  12. [An Improved Empirical Mode Decomposition Algorithm for Phonocardiogram Signal De-noising and Its Application in S1/S2 Extraction].

    PubMed

    Gong, Jing; Nie, Shengdong; Wang, Yuanjun

    2015-10-01

    In this paper, an improved empirical mode decomposition (EMD) algorithm for phonocardiogram (PCG) signal de-noising is proposed. Based on PCG signal processing theory, the S1/S2 components can be extracted by combining the improved EMD-Wavelet algorithm with the Shannon energy envelope algorithm. Firstly, by applying the EMD-Wavelet algorithm for pre-processing, the PCG signal was well filtered. Then, the filtered PCG signal was saved and used in the following processing steps. Secondly, time-domain features, frequency-domain features and the energy envelope of each intrinsic mode function (IMF) were computed. Based on the time-frequency-domain features of the PCG's IMF components extracted by the EMD algorithm and the energy envelope of the PCG, the S1/S2 components were pinpointed accurately. Meanwhile, a detection-correction method based on time-domain processing was proposed to amend the detection results. Finally, to test the performance of the algorithm proposed in this paper, a series of experiments was designed. Experiments with thirty samples were run to validate the effectiveness of the new method. Results of these experiments revealed that the accuracy for recognizing S1/S2 components was as high as 99.75%. Comparing the results of the method proposed in this paper with those of the traditional algorithm, the detection accuracy was increased by 5.56%. The detection results showed that the algorithm described in this paper was effective and accurate. The work described in this paper will be utilized in further studies on identity recognition.
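
    The Shannon energy envelope step mentioned above is simple to outline. The NumPy sketch below (a minimal illustration on a synthetic burst, not the full EMD-Wavelet pipeline) computes the normalized Shannon energy and smooths it; peaks of the smoothed envelope above a threshold are then candidate S1/S2 locations.

      # Normalized Shannon energy envelope of a (synthetic) PCG segment.
      import numpy as np

      def shannon_energy_envelope(x, frame=400):
          """Shannon energy per sample, smoothed by a moving-average window."""
          xn = x / (np.max(np.abs(x)) + 1e-12)        # normalise to [-1, 1]
          se = -xn**2 * np.log(xn**2 + 1e-12)         # Shannon energy
          return np.convolve(se, np.ones(frame) / frame, mode="same")

      t = np.linspace(0, 1, 4000)
      pcg = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.3) / 0.02)**2)   # lone "S1"-like burst
      env = shannon_energy_envelope(pcg)
      print(round(t[int(np.argmax(env))], 2))          # envelope peaks near t = 0.3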

  13. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.

  14. Comparison of dermatoscopic diagnostic algorithms based on calculation: The ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist and the CASH algorithm in dermatoscopic evaluation of melanocytic lesions.

    PubMed

    Unlu, Ezgi; Akay, Bengu N; Erdem, Cengizhan

    2014-07-01

    Dermatoscopic analysis of melanocytic lesions using the CASH algorithm has rarely been described in the literature. The purpose of this study was to compare the sensitivity, specificity, and diagnostic accuracy rates of the ABCD rule of dermatoscopy, the seven-point checklist, the three-point checklist, and the CASH algorithm in the diagnosis and dermatoscopic evaluation of melanocytic lesions on the hairy skin. One hundred and fifteen melanocytic lesions of 115 patients were examined retrospectively using dermatoscopic images and compared with the histopathologic diagnosis. Four dermatoscopic algorithms were carried out for all lesions. The ABCD rule of dermatoscopy showed sensitivity of 91.6%, specificity of 60.4%, and diagnostic accuracy of 66.9%. The seven-point checklist showed sensitivity, specificity, and diagnostic accuracy of 87.5, 65.9, and 70.4%, respectively; the three-point checklist 79.1, 62.6, 66%; and the CASH algorithm 91.6, 64.8, and 70.4%, respectively. To our knowledge, this is the first study that compares the sensitivity, specificity and diagnostic accuracy of the ABCD rule of dermatoscopy, the three-point checklist, the seven-point checklist, and the CASH algorithm for the diagnosis of melanocytic lesions on the hairy skin. In our study, the ABCD rule of dermatoscopy and the CASH algorithm showed the highest sensitivity for the diagnosis of melanoma.

  15. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time and in some cases works better with planar geometries than X-ray computed tomography. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper this uncertainty, which arises when detecting pore defect clusters with point reconstruction algorithms, is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around some other slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  16. Algorithms for projecting a point onto a level surface of a continuous function on a compact set

    NASA Astrophysics Data System (ADS)

    Arutyunova, N. K.; Dulliev, A. M.; Zabotin, V. I.

    2014-09-01

    Given an equation f(x) = 0, the problem of finding its solution nearest to a given point is considered. In contrast to the authors' previous works dealing with this problem, exact algorithms are proposed assuming that the function f is continuous on a compact set. The convergence of the algorithms is proved, and their performance is illustrated with test examples.
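
    For orientation, the problem setting itself can be written down in a few lines. The sketch below (generic SciPy code, assuming a smooth f; it is not the authors' exact algorithms, which require only continuity) projects a point onto the level surface f(x) = 0 by constrained least-distance minimization.

      # Nearest point on the level surface f(x) = 0 to a given point x0.
      import numpy as np
      from scipy.optimize import minimize

      f = lambda x: x[0]**2 + x[1]**2 - 1.0       # level surface: the unit circle
      x0 = np.array([2.0, 1.0])                   # point to project

      res = minimize(lambda x: np.sum((x - x0)**2), x0,
                     constraints=[{"type": "eq", "fun": f}], method="SLSQP")
      print(res.x)                                # approximately x0 / ||x0|| = [0.894, 0.447]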

  17. Using SDO and GONG as Calibration References for a New Telescope Pointing Algorithm

    NASA Astrophysics Data System (ADS)

    Staiger, J.

    2013-12-01

    Long duration observations are a basic requirement for most types of helioseismic measurements. Pointing stability and the quality of guiding are thus important issues with respect to the spatio-temporal analysis of any velocity datasets. Existing pointing tools and correlation-tracking devices will help to remove most of the spatial deviations building up during an observation with time. Yet most ground- and space-based high-resolution solar telescopes may be subject to slow image-plane drift that cannot be compensated for by guiding and which may accumulate to displacements of 10″ or more during a 10-hour recording. We have developed a new pointing model for solar telescopes that may overcome these inherent guiding limitations. We have tested the model at the Vacuum Tower Telescope (VTT), Tenerife. We are using SDO and GONG full-disk imaging as a calibration reference. We describe the algorithms developed and used during the tests. We present our first results. We describe possible future applications to be implemented at the VTT. So far, improvements over classical limb-guider systems by a factor of 10 or more seem possible.

  18. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    SciTech Connect

    Poynee, L A

    2003-05-06

    Shack-Hartmann based Adaptive Optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
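
    The two estimators being compared are easy to sketch. The toy below (noiseless Gaussian spots in NumPy/SciPy, not the paper's simulations or AO data) computes a centre-of-mass estimate and the peak of the cross-correlation with an ideal reference spot; sub-pixel accuracy in the correlation case would come from interpolating around that peak.

      # Centroid vs. correlation-peak estimate of a spot position (toy data).
      import numpy as np
      from scipy.signal import correlate2d

      def gaussian_spot(shape, cx, cy, sigma=1.5):
          y, x = np.indices(shape)
          return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

      def centroid(img):
          y, x = np.indices(img.shape)
          return (x * img).sum() / img.sum(), (y * img).sum() / img.sum()

      spot = gaussian_spot((17, 17), 8.6, 7.3)     # measured spot, offset from centre
      ref = gaussian_spot((17, 17), 8.0, 8.0)      # ideal reference spot

      print(centroid(spot))                        # close to (8.6, 7.3)
      corr = correlate2d(spot, ref, mode="same")
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      print(peak)                                  # displacement of this peak from the zero-lag
                                                   # (central) pixel gives the integer-pixel offset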

  19. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined for representing the dissimilarity between a realization and the training image (TI) in GOSIM, which is minimized by a multi-scale EM-like iterative method that contains an E-step and M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in the E-step. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. During categorical data simulation, k-means clustering is used for transforming the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison results of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain a good simulation quality. The study shows that big iteration numbers at coarser scales increase simulation quality and small iteration numbers at finer scales significantly save simulation time.

  20. Implementation of an Interior-Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  1. Joint inversion of T1-T2 spectrum combining the iterative truncated singular value decomposition and the parallel particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui

    2016-01-01

    With more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transversal relaxation time (T2) spectra, a two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) is developed to discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by the existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines iterative truncated singular value decomposition (TSVD) and a parallel particle swarm optimization (PSO) algorithm to obtain fast computational speed and stable solutions. We reorganize the Fredholm integral equation of the first kind with two kernels into a nonlinear optimization problem with non-negative constraints, and then solve the ill-conditioned problem by iterative TSVD. Truncating positions of the two diagonal matrices are obtained by the Akaike information criterion (AIC). With the initial values obtained by TSVD, we use a PSO with a parallel structure to get global optimal solutions at a high computational speed. We use synthetic data with different signal to noise ratios (SNR) to test the performance of the proposed method. The result shows that the new inversion algorithm can achieve favorable solutions for signals with SNR larger than 10, and that the inversion precision increases as the number of components in the porous rock decreases.
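
    The TSVD step is the part that is simple to illustrate. The NumPy sketch below (a generic 1-D toy with a Vandermonde kernel; the 2-D T1-T2 kernels, AIC-based truncation and PSO refinement of the paper are not reproduced) regularizes an ill-conditioned linear inversion by keeping only the k largest singular values.

      # Truncated-SVD solution of an ill-conditioned linear problem K m = d.
      import numpy as np

      def tsvd_solve(K, d, k):
          """Keep only the k largest singular values when inverting K."""
          U, s, Vt = np.linalg.svd(K, full_matrices=False)
          s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
          return Vt.T @ (s_inv * (U.T @ d))

      K = np.vander(np.linspace(0, 1, 50), 10, increasing=True)   # mildly ill-conditioned kernel
      m_true = np.ones(10)
      d = K @ m_true + 1e-3 * np.random.default_rng(1).normal(size=50)
      print(tsvd_solve(K, d, k=6))   # stable, regularised estimate of m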

  2. Sensitivity of passive microwave sea ice concentration algorithms to the selection of locally and seasonally adjusted tie points

    NASA Technical Reports Server (NTRS)

    Steffen, Konrad; Schweiger, Axel

    1989-01-01

    The sensitivity of passive microwave sea-ice concentration (SIC) algorithms to the selection of tie points was analyzed. SICs were derived with the NASA Team ice algorithm for global tie points and for locally and seasonally adjusted tie points. The SSM/I SIC was then compared to Landsat-MSS-derived SICs. Preliminary results show a mean difference of SSM/I- and Landsat-derived SICs for 50 x 50 km grid cells of 2.7 percent along the ice edge of the Beaufort Sea during fall with local tie points. The accuracy decreased to 9.7 percent when global tie points were used. During freeze-up in the Beaufort Sea, with grey ice and nilas as dominant ice cover, the mean difference was 4.3 percent for local tie points and 13.9 percent for global tie points. For the spring ice cover in the Bering Sea a mean difference of 4.4 percent for local tie points and 15.7 percent for global tie points was found. This large difference reveals some limitations of the NASA-Team algorithm under freeze-up and spring conditions (thin ice areas).

  3. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  4. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  5. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
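
    The core operation, choosing the combination coefficients by least squares, is sketched below with made-up flattened images (NumPy only; the per-subsection optimization and the ADI-specific frame selection of the paper are not shown).

      # Reference PSF as the least-squares optimal linear combination of references.
      import numpy as np

      rng = np.random.default_rng(0)
      npix, nref = 500, 8
      refs = rng.normal(size=(nref, npix))                   # flattened reference PSF images
      target = 0.4 * refs[0] + 0.3 * refs[3] + 0.01 * rng.normal(size=npix)  # science frame

      A = refs.T                                             # npix x nref design matrix
      coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)    # optimal combination coefficients
      residual = target - A @ coeffs                         # speckle-subtracted image
      print(np.std(residual))                                # residual noise after subtraction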

  6. Hybrid algorithm for common solution of monotone inclusion problem and fixed point problem and applications to variational inequalities.

    PubMed

    Zhang, Jingling; Jiang, Nan

    2016-01-01

    The aim of this paper is to investigate a hybrid algorithm for a common zero point of the sum of two monotone operators which is also a fixed point of a family of countable quasi-nonexpansive mappings. We point out two incorrect proofs in the paper (Hecai in Fixed Point Theory Appl 2013:11, 2013). Further, we modify and generalize the results of Hecai's paper, in which only a quasi-nonexpansive mapping was considered. In addition, two examples of families of countable quasi-nonexpansive mappings with uniform closeness are provided to demonstrate our results. Finally, the results are applied to variational inequalities.

  7. Current review and a simplified "five-point management algorithm" for keratoconus.

    PubMed

    Shetty, Rohit; Kaweri, Luci; Pahuja, Natasha; Nagaraja, Harsha; Wadia, Kareeshma; Jayadev, Chaitra; Nuijts, Rudy; Arora, Vishal

    2015-01-01

    Keratoconus is a slowly progressive, noninflammatory ectatic corneal disease characterized by changes in corneal collagen structure and organization. Though the etiology remains unknown, novel techniques are continuously emerging for the diagnosis and management of the disease. Demographical parameters are known to affect the rate of progression of the disease. Common methods of vision correction for keratoconus range from spectacles and rigid gas-permeable contact lenses to other specialized lenses such as piggyback, Rose-K or Boston scleral lenses. Corneal collagen cross-linking is effective in stabilizing the progression of the disease. Intra-corneal ring segments can improve vision by flattening the cornea in patients with mild to moderate keratoconus. Topography-guided custom ablation treatment betters the quality of vision by correcting the refractive error and improving the contact lens fit. In advanced keratoconus with corneal scarring, lamellar or full thickness penetrating keratoplasty will be the treatment of choice. With such a wide spectrum of alternatives available, it is necessary to choose the best possible treatment option for each patient. Based on a brief review of the literature and our own studies we have designed a five-point management algorithm for the treatment of keratoconus. PMID:25686063

  8. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    Flight data is presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs are following the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data shows the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives. During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.

  9. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named dynamic harmony search with polynomial mutation (DHSPM), to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710

  10. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than the existing MPPT algorithm. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties.
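
    The quadratic update itself is a one-liner. The sketch below (a stand-alone NumPy illustration, not the National Instruments real-time implementation) fits a parabola through the last three (duty cycle, power) samples and steps to its vertex.

      # Quadratic MPPT step: vertex of the parabola through three samples.
      import numpy as np

      def quadratic_mppt_step(duties, powers):
          a, b, _ = np.polyfit(duties, powers, 2)      # P ~ a*d^2 + b*d + c
          if a >= 0:                                   # not concave: fall back to the best sample
              return duties[int(np.argmax(powers))]
          return -b / (2 * a)                          # duty cycle at the estimated maximum

      print(quadratic_mppt_step([0.3, 0.4, 0.5], [55.0, 62.0, 60.0]))   # about 0.43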

  11. A WAVELET-GALERKIN ALGORITHM OF THE E/B DECOMPOSITION OF COSMIC MICROWAVE BACKGROUND POLARIZATION MAPS

    SciTech Connect

    Cao Liang; Fang Lizhi

    2009-12-01

    We develop an algorithm for separating the E and B modes of the cosmic microwave background (CMB) polarization from the noisy and discretized maps of Stokes parameters Q and U in a finite area. A key step of the algorithm is to take a wavelet-Galerkin discretization of the differential relation between the E, B and Q, U fields. This discretization allows the derivative operator to be represented by a matrix, which is exactly diagonal in scale space, and narrowly banded in spatial space. We show that the effect of the boundary can be eliminated by dropping a few discrete wavelet transform modes, located on or nearby the boundary. This method reveals that the derivative operators will cause large errors in the E and B power spectra on small scales if the Q and U maps contain Gaussian noise. It also reveals that if the Q and U maps are random, these fields lead to the mixing of E and B modes. Consequently, the B mode will be contaminated if the powers of E modes are much larger than that of B modes. Nevertheless, numerical tests show that the power spectra of both E and B on scales larger than the finest scale by a factor of 4 and higher can reasonably be recovered, even when the power ratio of E to B modes is as large as about 10^2, and the signal-to-noise ratio is equal to 10 and higher. This is because the Galerkin discretization is free of false correlations and keeps the contamination under control. As wavelet variables contain information of both spatial and scale spaces, the developed method is also effective to recover the spatial structures of the E and B mode fields.

  12. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis

    NASA Astrophysics Data System (ADS)

    Portes, Leonardo L.; Aguirre, Luis A.

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011), 10.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.

  13. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis.

    PubMed

    Portes, Leonardo L; Aguirre, Luis A

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011), 10.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA. PMID:27300889

  15. Building optimal regression tree by ant colony system-genetic algorithm: application to modeling of melting points.

    PubMed

    Hemmateenejad, Bahram; Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Akhond, Morteza

    2011-10-17

    The classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. A conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), which is a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling of melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation operators) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was done by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of approximately 4173 structures and their melting points were used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure.

  16. A Multi-core Shared Tree Algorithm Based on Network Coding for Multi-point Optical Multicast

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Yang, Yuming; Li, Yuan; Chen, Yong; Huang, Sheng

    2015-03-01

    With the growth of multi-point to multi-point multicast applications, optical network bandwidth consumption is increasing rapidly. This has attracted more and more researchers to improving the utilization of the limited wavelength bandwidth for multicast applications in wavelength division multiplexing (WDM) networks. In this paper, a multi-core shared multicast tree algorithm based on network coding is proposed to minimize the fiber link stress. The proposed algorithm includes three processes: searching the candidate core node set while excluding core-node loop paths, selecting the core nodes from the convergence matrix based on a heuristic algorithm, and constructing the shared trees of the multiple core nodes. The convergence matrix based on the heuristic method is constructed for selecting the core nodes from the candidate core node set. To improve the utilization of the limited wavelengths, we introduce network coding into the shared tree to compress the transmitted information. The simulation results show that the proposed algorithm performs better than existing algorithms in terms of link stress and balance degree.
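
    The gain from network coding on a shared link can be seen in a tiny butterfly-style example (plain Python, unrelated to the tree-construction heuristics of the paper): one XOR-coded packet lets two receivers each recover the packet they are missing.

      # XOR network coding of two packets on a shared link.
      a = bytes([0x12, 0x34, 0x56])                        # packet already held by receiver 1
      b = bytes([0xAB, 0xCD, 0xEF])                        # packet already held by receiver 2
      coded = bytes(x ^ y for x, y in zip(a, b))           # single coded transmission

      recovered_b = bytes(x ^ y for x, y in zip(coded, a)) # receiver 1 recovers b
      recovered_a = bytes(x ^ y for x, y in zip(coded, b)) # receiver 2 recovers a
      assert recovered_a == a and recovered_b == b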

  18. A New Control Points Based Geometric Correction Algorithm for Airborne Push Broom Scanner Images Without On-Board Data

    NASA Astrophysics Data System (ADS)

    Strakhov, P.; Badasen, E.; Shurygin, B.; Kondranin, T.

    2016-06-01

    Push broom scanners, such as video spectrometers (also called hyperspectral sensors), are widely used at present. Usage of scanned images requires accurate geometric correction, which becomes complicated when the imaging platform is airborne. This work contains a detailed description of a new algorithm developed for processing such images. The algorithm requires only user-provided control points and is able to correct distortions caused by yaw, flight speed and height changes. It was tested on two series of airborne images and yielded RMS error values on the order of 7 meters (3-6 source image pixels), as compared to 13 meters for polynomial-based correction.
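
    A minimal control-point correction of the kind the RMS figures refer to can be sketched as a least-squares fit of a first-order (affine) transform (NumPy only; the control-point coordinates below are made up, and the authors' model additionally handles yaw, speed and height variations along the track).

      # Affine correction estimated from control points by least squares.
      import numpy as np

      img_pts = np.array([[10, 12], [200, 15], [25, 180], [210, 190], [110, 95]], float)
      gnd_pts = np.array([[1001.5, 2000.2], [1190.7, 1998.0], [1015.0, 2168.3],
                          [1202.4, 2171.9], [1100.9, 2084.6]])   # hypothetical ground coordinates

      A = np.c_[img_pts, np.ones(len(img_pts))]            # [x, y, 1] design matrix
      coef, *_ = np.linalg.lstsq(A, gnd_pts, rcond=None)   # 3x2 affine coefficients

      rms = np.sqrt(np.mean(np.sum((A @ coef - gnd_pts)**2, axis=1)))
      print(rms)                                           # control-point RMS error (ground units)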

  19. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
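
    For readers unfamiliar with the PSO half of the hybrid, a bare-bones particle swarm loop is sketched below (generic NumPy code minimizing a toy function; it is not the PSO-Snake tracker and contains no image processing).

      # Minimal particle swarm optimisation of a 2-D test function.
      import numpy as np

      def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, size=(n_particles, dim))     # particle positions
          v = np.zeros_like(x)                                # velocities
          pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
          gbest = pbest[np.argmin(pbest_val)]
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = x + v
              val = np.apply_along_axis(f, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              gbest = pbest[np.argmin(pbest_val)]
          return gbest

      print(pso(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2))     # converges near (1, -2)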

  20. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  1. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    PubMed

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
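
    The essence of image-based PSF modelling inside an iterative reconstruction can be shown on a 1-D toy (NumPy MLEM with a Gaussian blur as the system model; this is not the GE Discovery STE model or the OS-MLEM implementation of the paper).

      # 1-D MLEM with a Gaussian PSF included in the forward/back projector.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      x_true = np.zeros(n); x_true[20] = 50.0; x_true[40] = 30.0   # two point sources

      idx = np.arange(n)
      A = np.exp(-(idx[:, None] - idx[None, :])**2 / (2 * 2.0**2)) # Gaussian PSF "system matrix"
      A /= A.sum(axis=0, keepdims=True)

      y = rng.poisson(A @ x_true)                                  # noisy, blurred measurement

      x = np.ones(n)
      sens = A.T @ np.ones(n)                                      # sensitivity image
      for _ in range(100):                                         # MLEM updates with PSF model
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
      print(np.round(x[18:23], 1))                                 # intensity re-concentrates near the sources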

  2. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET

    NASA Astrophysics Data System (ADS)

    Rapisarda, E.; Bettinardi, V.; Thielemans, K.; Gilardi, M. C.

    2010-07-01

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring 22Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the 22Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.

  3. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
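
    One of the kernels typically benchmarked in such studies is quasi-Monte Carlo option pricing. The sketch below is a plain NumPy/SciPy reference version with a Sobol sequence (the parameters are arbitrary, and the GPU and Xeon Phi ports whose energy use is measured in the paper are not shown).

      # Quasi-Monte Carlo pricing of a European call with a Sobol sequence.
      import numpy as np
      from scipy.stats import norm, qmc

      S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0     # arbitrary example parameters
      n = 2**16

      u = qmc.Sobol(d=1, scramble=True, seed=0).random(n)   # low-discrepancy uniforms
      z = norm.ppf(u[:, 0])                                 # map to standard normals
      ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
      price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
      print(price)                                          # about 6.7 for these parameters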

  4. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  5. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    SciTech Connect

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill; Chand, Kyle

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
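
    The single-pass incremental SVD ingredient mentioned above can be sketched as a rank-one update that keeps only the left singular vectors and singular values, so memory stays proportional to the state dimension times the current rank. This is a minimal generic sketch (assuming a nonzero first snapshot and no truncation), not the paper's adaptive snapshot-selection error control.

      import numpy as np

      def incremental_svd(snapshots, tol=1e-10):
          # snapshots: iterable of 1-D state vectors, visited once each (single pass).
          U, S = None, None
          for c in snapshots:
              c = np.asarray(c, dtype=float).reshape(-1, 1)
              if U is None:
                  s = np.linalg.norm(c)
                  U, S = c / s, np.array([s])
                  continue
              p = U.T @ c                         # coordinates in the current basis
              resid = c - U @ p
              rho = np.linalg.norm(resid)
              if rho > tol:                       # snapshot adds a new direction
                  K = np.block([[np.diag(S), p], [np.zeros((1, len(S))), [[rho]]]])
                  Uk, S, _ = np.linalg.svd(K)
                  U = np.hstack([U, resid / rho]) @ Uk
              else:                               # snapshot lies in the current subspace
                  K = np.hstack([np.diag(S), p])
                  Uk, S, _ = np.linalg.svd(K, full_matrices=False)
                  U = U @ Uk
          return U, S                             # POD basis and singular values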

  6. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a unique dataset which could influence the development with its particularities. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects, for example desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  7. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options because of its abundance and accessibility. Due to the intermittent nature of sunlight, there is high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. Firstly, a practical PV system model is studied, including the determination of the series and shunt resistances that are neglected in some research. In the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, with input impedance conversion deployed to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
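
    A compact sketch of one adaptive-step perturb-and-observe update is given below. The gain, step bounds and the sign convention (a boost converter in which increasing the duty ratio lowers the PV operating voltage) are illustrative assumptions, not the thesis design.

      def adaptive_po_step(v, p, v_prev, p_prev, duty, k=0.02, d_min=0.005, d_max=0.05):
          # One P&O iteration acting on the converter duty ratio; the perturbation size is
          # scaled by |dP/dV| so steps shrink near the maximum power point and grow when
          # the operating point is far from it (e.g. after a sharp insolation change).
          dp, dv = p - p_prev, v - v_prev
          step = min(max(k * abs(dp / dv), d_min), d_max) if dv != 0 else d_min
          if dp == 0:
              return duty                   # at (or oscillating around) the MPP: no change
          if (dp > 0) == (dv > 0):
              return duty - step            # left of the MPP: raise voltage (lower duty)
          return duty + step                # right of the MPP: lower voltage (raise duty)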

  8. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.

  9. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  10. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880
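
    The first-order behaviour referred to in the review can be written out explicitly; the observed rate constant k_obs below is our notation and no specific value is implied:

      -\frac{d[\mathrm{O_3}]}{dt} = k_{\mathrm{obs}}\,[\mathrm{O_3}], \qquad
      [\mathrm{O_3}](t) = [\mathrm{O_3}]_0\,e^{-k_{\mathrm{obs}}\,t}, \qquad
      t_{1/2} = \frac{\ln 2}{k_{\mathrm{obs}}}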

  11. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  12. A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscapes and manmade infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, the registration of point clouds from different epochs might converge to local minima due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs.

  13. An Automatic Algorithm for Minimizing Anomalies and Discrepancies in Point Clouds Acquired by Laser Scanning Technique

    NASA Astrophysics Data System (ADS)

    Bordin, Fabiane; Gonzaga, Luiz, Jr.; Galhardo Muller, Fabricio; Veronez, Mauricio Roberto; Scaioni, Marco

    2016-06-01

    The laser scanning technique, from both airborne and land platforms, has been largely used for collecting 3D data in large volumes in the field of geosciences. Furthermore, the laser pulse intensity has been widely exploited to analyze and classify rocks and biomass, and for carbon storage estimation. In general, a laser beam is emitted, collides with targets, and only a percentage of the emitted beam returns, according to the intrinsic properties of each target. Also, due to interference and partial collisions, the laser return intensity can be incorrect, introducing serious errors in classification and/or estimation processes. To address this problem and avoid misclassification and estimation errors, we have proposed a new algorithm to correct the return intensity for laser scanning sensors. Different case studies have been used to evaluate and validate the proposed approach.

  14. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points: the initial and final points. The exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
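
    The product form with per-factor exponents described above can be written schematically as below; the symbols (P for the response, here the divot weight, with reference value P_0; A_i for the i-th factor with initial and final values A_{i,0} and A_{i,f}; e_i for its exponent) are our shorthand for illustration and are not quoted from the paper:

      \frac{P}{P_0} \;=\; \prod_{i=1}^{n} \left( \frac{A_{i,f} - A_i}{A_{i,f} - A_{i,0}} \right)^{e_i}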

  15. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
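
    A minimal sketch of the Bayesian classification step is given below, assuming Gaussian class-conditional feature distributions (a naive Bayes simplification, not the exact ARM implementation). The feature columns would hold quantities such as radar reflectivity, Doppler velocity and the higher spectral moments discussed above, and the output is a posterior probability per phase class rather than only a hard label.

      import numpy as np

      def train_gaussian_bayes(X, y):
          # Per-class feature means, variances and priors; X is (n_samples, n_features).
          classes = np.unique(y)
          return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9, np.mean(y == c))
                  for c in classes}

      def phase_posterior(stats, x):
          # Posterior probability of each phase class for one observation vector x.
          logp = {c: np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
                  for c, (mu, var, prior) in stats.items()}
          m = max(logp.values())
          w = {c: np.exp(v - m) for c, v in logp.items()}
          z = sum(w.values())
          return {c: v / z for c, v in w.items()}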

  16. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  17. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  18. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data

    PubMed Central

    Banda, Jorge A.; Haydel, K. Farish; Davila, Tania; Desai, Manisha; Haskell, William L.; Matheson, Donna; Robinson, Thomas N.

    2016-01-01

    Objective To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). Methods 268 7–11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4–7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. Results WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). Conclusions The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy. PMID:26938240
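
    The epoch and cut-point bookkeeping examined above can be made concrete with a short sketch that aggregates 1-second counts to a chosen epoch length and applies counts-per-minute cut-points. The thresholds below are placeholders, and the rescaling to counts per minute is exactly the kind of conversion whose comparability the study questions.

      import numpy as np

      def classify_epochs(counts_1s, epoch_s=15, cut_points_cpm=(100, 2296, 4012)):
          # Sum 1-second counts into epochs, rescale to counts per minute and label intensity.
          counts_1s = np.asarray(counts_1s, dtype=float)
          n = (len(counts_1s) // epoch_s) * epoch_s
          per_epoch = counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)
          cpm = per_epoch * (60.0 / epoch_s)
          labels = np.digitize(cpm, bins=cut_points_cpm)        # 0=SB, 1=LPA, 2=MPA, 3=VPA
          minutes = {name: np.sum(labels == i) * epoch_s / 60.0
                     for i, name in enumerate(["SB", "LPA", "MPA", "VPA"])}
          return labels, minutes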

  19. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquake are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
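
    A stripped-down genetic-algorithm inversion is sketched below purely to illustrate the selection, crossover and mutation loop. The Green's-function matrix G mapping patch slip to surface displacement at the coral sites, the non-negativity bound and all parameter choices are assumptions of this illustration, not details of the published Genetic Algorithm Slip Estimator.

      import numpy as np

      def ga_slip_inversion(G, d_obs, n_patch, pop=200, gens=300, slip_max=10.0, seed=0):
          # Find non-negative patch slips s minimising ||G s - d_obs|| with a toy GA.
          rng = np.random.default_rng(seed)
          P = rng.uniform(0.0, slip_max, size=(pop, n_patch))
          for _ in range(gens):
              misfit = np.linalg.norm(P @ G.T - d_obs, axis=1)
              elite = P[np.argsort(misfit)[: pop // 4]]          # selection: keep the best quarter
              parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
              mask = rng.random((pop, n_patch)) < 0.5            # uniform crossover
              P = np.where(mask, parents[:, 0, :], parents[:, 1, :])
              mutate = rng.random((pop, n_patch)) < 0.05         # sparse Gaussian mutation
              P[mutate] += rng.normal(0.0, 0.5, size=mutate.sum())
              P = np.clip(P, 0.0, slip_max)
              P[0] = elite[0]                                    # elitism
          return P[np.argmin(np.linalg.norm(P @ G.T - d_obs, axis=1))]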

  1. Gauge-invariant decomposition of nucleon spin

    SciTech Connect

    Wakamatsu, M.

    2010-06-01

    We investigate the relation between the known decompositions of the nucleon spin into its constituents, thereby clarifying in what respect they are common and in what respect they are different essentially. The decomposition recently proposed by Chen et al. can be thought of as a nontrivial generalization of the gauge-variant Jaffe-Manohar decomposition so as to meet the gauge-invariance requirement of each term of the decomposition. We however point out that there is another gauge-invariant decomposition of the nucleon spin, which is closer to the Ji decomposition, while allowing the decomposition of the gluon total angular momentum into the spin and orbital parts. After clarifying the reason why the gauge-invariant decomposition of the nucleon spin is not unique, we discuss which decomposition is more preferable from an experimental viewpoint.

  2. [Comparative Study on the Three Algorithms of T-wave End Detection: Wavelet Method, Cumulative Points Area Method and Trapezium Area Method].

    PubMed

    Li, Chengtao; Zhang, Yongliang; He, Zijun; Ye, Jun; Hu, Fusong; Ma, Zuchang; Wang, Jingzhi

    2015-12-01

    In order to find the most suitable algorithm of T-wave end point detection for clinical use, we tested three methods that are not solely dependent on the threshold value of T-wave end point detection, i.e. the wavelet method, the cumulative points area method and the trapezium area method, in the PhysioNet QT database (20 records with 3 569 beats each). We analyzed and compared their detection performance. First, we used the wavelet method to locate the QRS complex and T-wave. Then we divided the T-wave into four morphologies, and we used the three algorithms mentioned above to detect the T-wave end point. Finally, we proposed an adaptive selection T-wave end point detection algorithm based on T-wave morphology and tested it with experiments. The results showed that this adaptive selection method had better detection performance than that of any single T-wave end point detection algorithm. The sensitivity, positive predictive value and the average time errors were 98.93%, 99.11% and (-2.33 ± 19.70) ms, respectively. Consequently, it can be concluded that the adaptive selection algorithm based on T-wave morphology improves the efficiency of T-wave end point detection. PMID:27079084

  3. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  4. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  5. Improved Prediction of Drug-Induced Torsades de Pointes Through Simulations of Dynamics and Machine Learning Algorithms.

    PubMed

    Lancaster, M Cummins; Sobie, E A

    2016-10-01

    The ventricular arrhythmia Torsades de Pointes (TdP) is a common form of drug-induced cardiotoxicity, but prediction of this arrhythmia remains an unresolved issue in drug development. Current assays to evaluate arrhythmia risk are limited by poor specificity and a lack of mechanistic insight. We addressed this important unresolved issue through a novel computational approach that combined simulations of drug effects on dynamics with statistical analysis and machine learning. Drugs that blocked multiple ion channels were simulated in ventricular myocyte models, and metrics computed from the action potential and intracellular Ca(2+) waveform were used to construct classifiers that distinguished between arrhythmogenic and nonarrhythmogenic drugs. We found that: (1) these classifiers provide superior risk prediction; (2) drug-induced changes to both the action potential and intracellular Ca(2+) influence risk; and (3) cardiac ion channels not typically assessed may significantly affect risk. Our algorithm demonstrates the value of systematic simulations in predicting pharmacological toxicity.

  6. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or different morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. To handle such complicated crown structures and morphological differences, and to allow tree crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a plane scattered data point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments cover 22 common tree species of Beijing, China. The results show that the correlation coefficient of the crown projection area between A(V), calculated by the new method, and the conventional method A4 reaches 0.964 (p<0.01); and the correlation coefficient of tree crown volume between V(VC), derived from the new method, and V(C), obtained by the formula of a regular body, is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) by 8.03%, and the average of A4 is larger than that of A(V) by 25.5%. Taking A(V) and V(VC) as true values, the observed deviations could be attributed to the irregularity of the crowns' silhouettes. Different morphological characteristics of tree crowns lead to measurement error in forest sample plot surveys. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
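
    The convex hull and slice-accumulation steps described above can be sketched with standard tools; this is a generic illustration (horizontal projection, axis-aligned slices, no handling of degenerate or collinear slices), not the authors' implementation.

      import numpy as np
      from scipy.spatial import ConvexHull

      def crown_projection_area(points_xyz):
          # Project the crown points onto the XY plane and take the 2-D convex hull area.
          xy = np.asarray(points_xyz, dtype=float)[:, :2]
          return ConvexHull(xy).volume          # for 2-D input, .volume is the enclosed area

      def crown_volume_by_slices(points_xyz, dz=0.2):
          # Slice the crown along height and accumulate slice hull area times slice thickness.
          pts = np.asarray(points_xyz, dtype=float)
          vol = 0.0
          for z in np.arange(pts[:, 2].min(), pts[:, 2].max(), dz):
              sl = pts[(pts[:, 2] >= z) & (pts[:, 2] < z + dz)]
              if len(sl) >= 3:
                  vol += ConvexHull(sl[:, :2]).volume * dz
          return vol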

  7. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    SciTech Connect

    2012-05-31

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
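
    A flavour of the dynamic programming that such tree-decomposition frameworks generalise is given by its simplest special case: maximum weighted independent set on a tree (treewidth 1). The sketch below is a generic textbook DP over a recursive depth-first traversal, not code from the INDDGO package.

      def mwis_on_tree(adj, weight, root=0):
          # adj: dict node -> list of neighbours (a tree); weight: dict node -> weight.
          best = {}                                   # node -> (value if excluded, value if included)

          def solve(u, parent):
              exc, inc = 0.0, weight[u]
              for v in adj[u]:
                  if v == parent:
                      continue
                  solve(v, u)
                  e, i = best[v]
                  exc += max(e, i)                    # child may be in or out if u is excluded
                  inc += e                            # children must be out if u is included
              best[u] = (exc, inc)

          solve(root, None)
          return max(best[root])

      # Example: path 0-1-2-3 with weights 1, 4, 5, 4; the optimum {1, 3} has value 8.
      print(mwis_on_tree({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, {0: 1, 1: 4, 2: 5, 3: 4}))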

  8. CD4 Count Outperforms World Health Organization Clinical Algorithm for Point-of Care HIV Diagnosis among Hospitalized HIV-exposed Malawian Infants

    PubMed Central

    Maliwichi, Madalitso; Rosenberg, Nora E.; Macfie, Rebekah; Olson, Dan; Hoffman, Irving; van der Horst, Charles M.; Kazembe, Peter N.; Hosseinipour, Mina C.; McCollum, Eric D.

    2014-01-01

    Objective To determine, for the WHO algorithm for point-of-care diagnosis of HIV infection, the agreement levels between pediatricians and non-physician clinicians, and to compare sensitivity and specificity profiles of the WHO algorithm and different CD4 thresholds against HIV PCR testing in hospitalized Malawian infants. Methods In 2011, hospitalized HIV-exposed infants <12 months in Lilongwe, Malawi were evaluated independently with the WHO algorithm by both a pediatrician and clinical officer. Blood was collected for CD4 and molecular HIV testing (DNA or RNA PCR). Using molecular testing as the reference, sensitivity, specificity, and positive predictive value (PPV) were determined for the WHO algorithm and CD4 count thresholds of 1500 and 2000 cells/mm3 by pediatricians and clinical officers. Results We enrolled 166 infants (50% female, 34% <2 months, 37% HIV-infected). Sensitivity was higher using CD4 thresholds (<1500, 80%; <2000, 95%) than with the algorithm (physicians, 57%; clinical officers, 71%). Specificity was comparable for CD4 thresholds (<1500, 68%, <2000, 50%) and the algorithm (pediatricians, 55%, clinical officers, 50%). The positive predictive values were slightly better using CD4 thresholds (<1500, 59%, <2000, 52%) than the algorithm (pediatricians, 43%, clinical officers 45%) at this prevalence. Conclusion Performance by the WHO algorithm and CD4 thresholds resulted in many misclassifications. Point-of-care CD4 thresholds of <1500 cells/mm3 or <2000 cells/mm3 could identify more HIV-infected infants with fewer false positives than the algorithm. However, a point-of-care option with better performance characteristics is needed for accurate, timely HIV diagnosis. PMID:24754543

  9. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the times elapsed by the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than on 16 CPUs, with any of the tested GPUs and CPUs. We have found that one GPU intended for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used. PMID:25345784

  10. Bridging Proper Orthogonal Decomposition methods and augmented Newton-Krylov algorithms: an adaptive model order reduction for highly nonlinear mechanical problems

    PubMed Central

    Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.

    2013-01-01

    This article describes a bridge between POD-based model order reduction techniques and the classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688

  11. Fast HPLC-DAD quantification of nine polyphenols in honey by using second-order calibration method based on trilinear decomposition algorithm.

    PubMed

    Zhang, Xiao-Hua; Wu, Hai-Long; Wang, Jian-Yao; Tu, De-Zhu; Kang, Chao; Zhao, Juan; Chen, Yao; Miu, Xiao-Xia; Yu, Ru-Qin

    2013-05-01

    This paper describes the use of second-order calibration for the development of an HPLC-DAD method to quantify nine polyphenols in five kinds of honey samples. The sample treatment procedure was simplified effectively relative to the traditional ways. Baseline drift was also overcome by regarding the drift as additional factor(s), alongside the analytes of interest, in the mathematical model. The contents of polyphenols obtained by the alternating trilinear decomposition (ATLD) method have been successfully used to distinguish different types of honey. The method shows good linearity (r>0.99), rapidity (t<7.60 min) and accuracy, and may be extremely promising as a routine strategy for the identification and quantification of polyphenols in complex matrices.

  12. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P.; Tits, Andre

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  13. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION

    EPA Science Inventory

    The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...

  14. Revisiting the layout decomposition problem for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Kahng, Andrew B.; Park, Chul-Hong; Xu, Xu; Yao, Hailong

    2008-10-01

    In double patterning lithography (DPL) layout decomposition for 45nm and below process nodes, two features must be assigned opposite colors (corresponding to different exposures) if their spacing is less than the minimum coloring spacing [5, 11, 14]. However, there exist pattern configurations for which pattern features separated by less than the minimum coloring spacing cannot be assigned different colors. In such cases, DPL requires that a layout feature be split into two parts. We address this problem using a layout decomposition algorithm that incorporates integer linear programming (ILP), phase conflict detection (PCD), and node-deletion bipartization (NDB) methods. We evaluate our approach on both real-world and artificially generated testcases in 45nm technology. Experimental results show that our proposed layout decomposition method effectively decomposes given layouts to satisfy the key goals of minimized line-ends and maximized overlap margin. There are no design rule violations in the final decomposed layout. While we have previously reported other facets of our research on DPL pattern decomposition [6], the present paper differs from that work in the following key respects: (1) instead of detecting conflict cycles and splitting nodes in conflict cycles to achieve graph bipartization [6], we split all nodes of the conflict graph at all feasible dividing points and then formulate a problem of bipartization by ILP, PCD [8] and NDB [9] methods; and (2) instead of reporting unresolvable conflict cycles, we report the number of deleted conflict edges to more accurately capture the needed design changes in the experimental results.
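
    The conflict-detection step can be illustrated with a plain breadth-first 2-coloring of the conflict graph, which reports adjacencies that cannot be legally colored, i.e. evidence of odd cycles that force feature splitting; the ILP/PCD/NDB bipartization itself is beyond this sketch.

      from collections import deque

      def two_color_conflict_graph(nodes, conflict_edges):
          # nodes: layout features; conflict_edges: pairs closer than the coloring spacing.
          adj = {n: [] for n in nodes}
          for a, b in conflict_edges:
              adj[a].append(b)
              adj[b].append(a)
          color, conflicts = {}, set()
          for start in nodes:
              if start in color:
                  continue
              color[start] = 0
              queue = deque([start])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if v not in color:
                          color[v] = 1 - color[u]
                          queue.append(v)
                      elif color[v] == color[u]:
                          conflicts.add(frozenset((u, v)))   # same-color neighbours: odd cycle
          return color, conflicts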

  15. Convergence Analysis of a Domain Decomposition Paradigm

    SciTech Connect

    Bank, R E; Vassilevski, P S

    2006-06-12

    We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.

  16. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources to capture actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can heavily enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances of corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. Therefore, an agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors at two dates (once covered with sparse vegetation and once bare soil). Due to the different perspectives, the two data sets show diverse consistency in terms of shadowed areas and thus gaps, so that data merging would provide a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset towards the TLS point cloud. In order to achieve the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that final success of registration and therefore
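
    For orientation, a bare point-to-point ICP iteration (nearest-neighbour matching plus an SVD/Kabsch rigid-transform solve) is sketched below; the variants discussed above layer point selection, weighting, rejection and alternative error metrics on top of this baseline.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_point_to_point(src, dst, iters=50, tol=1e-6):
          # Align src (n x 3) to dst (m x 3); returns rotation, translation and mean NN distance.
          src, dst = np.asarray(src, float), np.asarray(dst, float)
          tree = cKDTree(dst)
          R_tot, t_tot, cur, prev_err = np.eye(3), np.zeros(3), src.copy(), np.inf
          for _ in range(iters):
              dist, idx = tree.query(cur)                   # point matching: nearest neighbours
              matched = dst[idx]
              mu_s, mu_d = cur.mean(0), matched.mean(0)
              H = (cur - mu_s).T @ (matched - mu_d)
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
              t = mu_d - R @ mu_s
              cur = cur @ R.T + t
              R_tot, t_tot = R @ R_tot, R @ t_tot + t
              err = dist.mean()                             # error metric: mean NN distance
              if abs(prev_err - err) < tol:
                  break
              prev_err = err
          return R_tot, t_tot, err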

  17. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because of the effect of the environment stiffness on the coupled system of robot end-effector and environment, an adaptive generalized predictive control algorithm based on quantitative feedback theory is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to control the uncertainty of the system. An adaptive GPC algorithm is used to design the external loop controller to improve the performance and the robustness of the system. The two closed loops used in the design approach realize the system's performance and improve the robustness. The simulation results show that the algorithm for the robot end-effector contact force control system is effective. PMID:24973336

  18. A domain decomposition approach to finite volume solutions of the Euler equations on unstructured triangular meshes

    NASA Astrophysics Data System (ADS)

    Dolean, Victoria; Lanteri, Stéphane

    2001-11-01

    We report on our recent efforts on the formulation and the evaluation of a domain decomposition algorithm for the parallel solution of two-dimensional compressible inviscid flows. The starting point is a flow solver for the Euler equations, which is based on a mixed finite element/finite volume formulation on unstructured triangular meshes. Time integration of the resulting semi-discrete equations is obtained using a linearized backward Euler implicit scheme. As a result, each pseudo-time step requires the solution of a sparse linear system for the flow variables. In this study, a non-overlapping domain decomposition algorithm is used for advancing the solution at each implicit time step. First, we formulate an additive Schwarz algorithm using appropriate matching conditions at the subdomain interfaces. In accordance with the hyperbolic nature of the Euler equations, these transmission conditions are Dirichlet conditions for the characteristic variables corresponding to incoming waves. Then, we introduce interface operators that allow us to express the domain decomposition algorithm as a Richardson-type iteration on the interface unknowns. Algebraically speaking, the Schwarz algorithm is equivalent to a Jacobi iteration applied to a linear system whose matrix has a block structure. A substructuring technique can be applied to this matrix in order to obtain a fully implicit scheme in terms of interface unknowns. In our approach, the interface unknowns are numerical (normal) fluxes.

  19. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

    A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first order kinetics. Recommended values for the Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases and for the decomposition of RDX in solution in TNT are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
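
    For reference, the first-order rate law and the Arrhenius form of its rate constant referred to above are, in our notation (A the frequency factor, E_a the activation energy, R the gas constant, [C] the nitramine concentration; no specific parameter values are implied):

      k(T) = A\,\exp\!\left(-\frac{E_a}{R\,T}\right), \qquad -\frac{d[\mathrm{C}]}{dt} = k(T)\,[\mathrm{C}]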

  1. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
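
    The Newton iteration at the heart of the method can be written in a few lines. This sketch assumes a square nonsingular real input and omits the complete orthogonal decomposition preprocessing, the acceleration scaling and the hybrid switch to the multiplication-rich iteration.

      import numpy as np

      def polar_newton(A, tol=1e-12, max_iter=100):
          # Polar decomposition A = U H via the Newton iteration X_{k+1} = (X_k + X_k^{-T}) / 2.
          X = np.asarray(A, dtype=float).copy()
          for _ in range(max_iter):
              X_new = 0.5 * (X + np.linalg.inv(X).T)
              done = np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro')
              X = X_new
              if done:
                  break
          U = X                                  # orthogonal factor
          H = U.T @ A
          return U, 0.5 * (H + H.T)              # symmetrise H to clean up rounding

      A = np.random.default_rng(1).normal(size=(4, 4))
      U, H = polar_newton(A)
      print(np.allclose(U @ H, A), np.allclose(U.T @ U, np.eye(4)))   # True True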

  2. A new damping factor algorithm based on line search of the local minimum point for inverse approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping

    2013-05-01

    The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and calculation efficiency, is proposed; the computer program is then implemented and tested on Siemens PLM NX | One-Step. The results are compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.

  3. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  4. Domain decomposition for the SPN solver MINOS

    SciTech Connect

    Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques

    2012-07-01

    In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nedelec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3 (R) code. (authors)

  5. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.

  6. FAST TRACK PAPER: Receiver function decomposition of OBC data: theory

    NASA Astrophysics Data System (ADS)

    Edme, Pascal; Singh, Satish C.

    2009-06-01

    This paper deals with theoretical aspects of wavefield decomposition of Ocean Bottom Cable (OBC) data in the τ-p domain, considering a horizontally layered medium. We present both the acoustic decomposition and elastic decomposition procedures in a simple and compatible way. Acoustic decomposition aims at estimating the primary upgoing P wavefield just above the ocean-bottom, whereas elastic decomposition aims at estimating the primary upgoing P and S wavefields just below the ocean-bottom. Specific issues due to the interference phenomena at the receiver level are considered. Our motivation is to introduce the two-step decomposition scheme called `receiver function' (RF) decomposition that aims at determining the primary upgoing P and S wavefields (RFP and RFS, free of any water layer multiples). We show that elastic decomposition is a necessary step (acting as pre-conditioning) before applying the multiple removal step by predictive deconvolution. We show the applicability of our algorithm on a synthetic data example.

  7. Adaptive neuro-fuzzy inference system multi-objective optimization using the genetic algorithm/singular value decomposition method for modelling the discharge coefficient in rectangular sharp-crested side weirs

    NASA Astrophysics Data System (ADS)

    Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed

    2016-06-01

    In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS results section (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined in five different models to conduct sensitivity analysis by applying the above-mentioned dimensionless parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs.

  8. Quantitative analysis of triazine herbicides in environmental samples by using high performance liquid chromatography and diode array detection combined with second-order calibration based on an alternating penalty trilinear decomposition algorithm.

    PubMed

    Li, Yuan-Na; Wu, Hai-Long; Qing, Xiang-Dong; Li, Quan; Li, Shu-Fang; Fu, Hai-Yan; Yu, Yong-Jie; Yu, Ru-Qin

    2010-09-23

    A novel application of second-order calibration method based on an alternating penalty trilinear decomposition (APTLD) algorithm is presented to treat the data from high performance liquid chromatography-diode array detection (HPLC-DAD). The method makes it possible to accurately and reliably analyze atrazine (ATR), ametryn (AME) and prometryne (PRO) contents in soil, river sediment and wastewater samples. Satisfactory results are obtained although the elution and spectral profiles of the analytes are heavily overlapped with the background in environmental samples. The obtained average recoveries for ATR, AME and PRO are 99.7±1.5, 98.4±4.7 and 97.0±4.4% in soil samples, 100.1±3.2, 100.7±3.4 and 96.4±3.8% in river sediment samples, and 100.1±3.5, 101.8±4.2 and 101.4±3.6% in wastewater samples, respectively. Furthermore, the accuracy and precision of the proposed method are evaluated with the elliptical joint confidence region (EJCR) test. It opens a new avenue for the quantitative determination of herbicides in environmental samples with a simple pretreatment procedure and provides the scientific basis for an improved environment management through a better understanding of the wastewater-soil-river sediment system as a whole.
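
    As a point of reference, the sketch below implements plain alternating least squares for a trilinear (PARAFAC-type) model; the alternating penalty terms that distinguish APTLD are not reproduced, and the function name, array sizes, and synthetic factors are made up for illustration.

```python
import numpy as np

def trilinear_als(X, rank, n_iter=200, seed=0):
    """Plain ALS for the trilinear model X[i,j,k] ~= sum_r A[i,r] B[j,r] C[k,r]."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((dim, rank)) for dim in (I, J, K))
    for _ in range(n_iter):
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic three-way data (e.g. elution time x wavelength x sample), true rank 3.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((60, 3))
B0 = rng.standard_normal((40, 3))
C0 = rng.standard_normal((10, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 1e-3 * rng.standard_normal((60, 40, 10))

A, B, C = trilinear_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))   # should drop to roughly the noise level
```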

  9. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
    • It provides easy management of connectivity levels in the resulting voxels.
    • It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate into any application.
    • One of the algorithms is implemented in C++ and C for platform independence and efficiency.
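
    For orientation, here is a minimal occupancy-grid voxelization of a point cloud in Python; it is not the topological voxelization of the paper (no connectivity-level control), and the function name, voxel size, and test data are illustrative.

```python
import numpy as np

def voxelize_points(points, voxel_size, origin=None):
    """Return the integer indices of voxels occupied by a point cloud (occupancy only)."""
    points = np.asarray(points, dtype=float)
    if origin is None:
        origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    occupied = np.unique(idx, axis=0)        # one row per occupied voxel
    return occupied, origin

# Example: random points in a 10 m x 10 m x 3 m block, voxelized at 0.5 m resolution.
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [10.0, 10.0, 3.0], size=(10_000, 3))
voxels, origin = voxelize_points(pts, voxel_size=0.5)
print(voxels.shape[0], "occupied voxels")
```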

  10. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
    • It provides easy management of connectivity levels in the resulting voxels.
    • It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate into any application.
    • One of the algorithms is implemented in C++ and C for platform independence and efficiency. PMID:27408832

  11. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  12. Proper orthogonal decomposition of flow-field in non-stationary geometry

    NASA Astrophysics Data System (ADS)

    Troshin, Victor; Seifert, Avi; Sidilkover, David; Tadmor, Gilead

    2016-04-01

    The current paper outlines a proper orthogonal decomposition (POD) methodology for a flow field in a domain with moving boundaries. In the standard POD approach the properties of the region of the domain that is alternately occupied by fluid and solid are not defined. Here, prior to the decomposition, the domain with moving or deforming boundaries is mapped to a stationary domain using a volume-preserving mapping. This mapping is created by combining a transfinite interpolation with a volume adjustment algorithm. The algorithm is based on an iterative solution of the Laplace equation with respect to the displacement potential of the grid points. Finally, the method is demonstrated on CFD results for a pitching and plunging ellipse in still fluid.
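
    A brief sketch of the snapshot POD step itself, assuming the snapshots have already been mapped onto a common stationary grid (the volume-preserving mapping, which is the paper's contribution, is not shown); the toy data and mode count are illustrative.

```python
import numpy as np

def snapshot_pod(snapshots, n_modes):
    """Snapshot POD via the thin SVD; snapshots has shape (n_points, n_snapshots)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                      # subtract the mean field
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = U[:, :n_modes]                        # spatial POD modes
    coeffs = s[:n_modes, None] * Vt[:n_modes]     # temporal coefficients
    energy = s**2 / np.sum(s**2)                  # relative modal energy
    return mean, modes, coeffs, energy

# Toy snapshots: two travelling waves plus noise on a 1D "grid".
x = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
t = np.linspace(0.0, 10.0, 80)[None, :]
snaps = (np.sin(x - t) + 0.3 * np.sin(3 * x + 2 * t)
         + 0.01 * np.random.default_rng(0).standard_normal((200, 80)))

mean, modes, coeffs, energy = snapshot_pod(snaps, n_modes=4)
print(np.round(energy[:4], 3))    # the leading modes carry almost all of the energy
```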

  13. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  14. Algorithms for Collision Detection Between a Point and a Moving Polygon, with Applications to Aircraft Weather Avoidance

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Hagen, George

    2016-01-01

    This paper proposes mathematical definitions of functions that can be used to detect future collisions between a point and a moving polygon. The intended application is weather avoidance, where the given point represents an aircraft and bounding polygons are chosen to model regions with bad weather. Other applications could possibly include avoiding other moving obstacles. The motivation for the functions presented here is safety, and therefore they have been proved to be mathematically correct. The functions are being developed for inclusion in NASA's Stratway software tool, which allows low-fidelity air traffic management concepts to be easily prototyped and quickly tested.
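
    The functions below are not NASA's Stratway definitions; they are a hedged sketch of the basic idea using a sampled-time ray-casting containment test against a polygon translating with constant velocity. The names, time step, and weather-cell geometry are invented for illustration.

```python
import numpy as np

def point_in_polygon(p, poly):
    """Even-odd (ray casting) containment test; poly is an (n, 2) array of vertices."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def first_collision_time(point, poly0, velocity, t_max, dt=1.0):
    """First sampled time at which the translating polygon contains the fixed point."""
    velocity = np.asarray(velocity, dtype=float)
    for t in np.arange(0.0, t_max + dt, dt):
        if point_in_polygon(point, poly0 + t * velocity):
            return t
    return None

# Aircraft at the origin, a square weather cell drifting towards it at 0.1 units/min.
cell = np.array([[5.0, -1.0], [7.0, -1.0], [7.0, 1.0], [5.0, 1.0]])
print(first_collision_time((0.0, 0.0), cell, velocity=(-0.1, 0.0), t_max=120.0))
```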

  15. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    SciTech Connect

    Inoue, Minoru; Yoshimura, Michio; Sato, Sayaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Hirata, Kimiko; Ogura, Masakazu; Hiraoka, Masahiro; Sasaki, Makoto; Fujimoto, Takahiro

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, the manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.
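
    The abstract does not spell out the PRBR implementation, so the sketch below shows the standard least-squares rigid alignment (Kabsch-type, via the SVD) between matched fiducial positions, which is the usual building block for point-based rigid-body registration; the clip coordinates and transform are invented.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rotation R and translation t such that R @ s_i + t ~= q_i."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Three planned clip positions (mm) and their positions measured on a daily image.
planned = np.array([[0.0, 0.0, 0.0], [20.0, 5.0, 0.0], [5.0, 25.0, 10.0]])
angle = np.deg2rad(3.0)                       # a 3 degree rotation about one axis
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
daily = planned @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_register(planned, daily)
print(np.allclose(planned @ R.T + t, daily))  # True: transform recovered exactly
```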

  16. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  17. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
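
    The Poisson-process reformulation is the heart of the paper and is not reproduced here; the sketch below only illustrates the underlying Sobol (first-order) variance decomposition with a plain Monte Carlo pick-freeze estimator applied to a made-up surrogate model.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol sensitivity indices."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # re-sample only input i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Made-up surrogate: the output depends strongly on x0, weakly on x2.
def surrogate(x):
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

print(np.round(first_order_sobol(surrogate, n_inputs=3), 3))
```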

  18. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  19. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  20. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated by the synthetic ECG signal using an ECG model and also real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results of the proposed method show better performance in denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  1. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error in the real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation were estimated together with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real

  2. A new eddy-covariance method using empirical mode decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We introduce a new eddy-covariance method that uses a spectral decomposition algorithm called empirical mode decomposition. The technique is able to calculate contributions to near-surface fluxes from different periodic components. Unlike traditional Fourier methods, this method allows for non-ortho...

  3. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some showing a normal shape and some abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  4. Autonomous Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-04-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
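
    AGD's derivative-spectroscopy and machine-learning initialization is not reproduced here; the sketch below shows only the downstream step of fitting a sum of Gaussians to a spectrum once initial guesses are available, using simple peak finding for the guesses. The synthetic spectrum, thresholds, and initial widths are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def gaussian_sum(x, *params):
    """Sum of Gaussians; params = (amp1, centre1, width1, amp2, centre2, width2, ...)."""
    y = np.zeros_like(x)
    for amp, cen, wid in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)
    return y

# Synthetic two-component "spectrum" with noise.
x = np.linspace(-50.0, 50.0, 500)
truth = [1.0, -15.0, 4.0, 0.6, 10.0, 6.0]
y = gaussian_sum(x, *truth) + 0.02 * np.random.default_rng(0).standard_normal(x.size)

# Crude initial guesses from peak finding (AGD would supply these automatically).
peaks, _ = find_peaks(y, prominence=0.2)
p0 = []
for p in peaks:
    p0 += [y[p], x[p], 5.0]                  # amplitude, centre, guessed width

popt, _ = curve_fit(gaussian_sum, x, y, p0=p0)
print(np.round(popt, 2))                     # should be close to the true parameters
```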

  5. AUTONOMOUS GAUSSIAN DECOMPOSITION

    SciTech Connect

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Dickey, John

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  6. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.

  7. Image encryption using P-Fibonacci transform and decomposition

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Agaian, Sos; Chen, C. L. Philip

    2012-03-01

    Image encryption is an effective method to protect images or videos by transforming them into unrecognizable formats for different security purposes. To improve the security level of bit-plane decomposition based encryption approaches, this paper introduces a new image encryption algorithm by using a combination of parametric bit-plane decomposition along with bit-plane shuffling and resizing, pixel scrambling and data mapping. The algorithm utilizes the Fibonacci P-code for image bit-plane decomposition and the 2D P-Fibonacci transform for image encryption because they are parameter dependent. Any new or existing method can be used for shuffling the order of the bit-planes. Simulation analysis and comparisons are provided to demonstrate the algorithm's performance for image encryption. Security analysis shows the algorithm's ability against several common attacks. The algorithm can be used to encrypt images, biometrics and videos.
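
    For context, the snippet below shows ordinary binary bit-plane decomposition and lossless reassembly of an 8-bit image; the Fibonacci P-code planes and the 2D P-Fibonacci transform used in the paper are not reproduced, and the toy plane-order reversal at the end is only a stand-in for the paper's scrambling steps.

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into its 8 binary bit planes (LSB first)."""
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> b) & 1).astype(np.uint8) for b in range(8)]

def from_bit_planes(planes):
    """Reassemble an 8-bit image from a list of binary bit planes (LSB first)."""
    img = np.zeros(planes[0].shape, dtype=np.uint8)
    for b, plane in enumerate(planes):
        img |= plane.astype(np.uint8) << b
    return img

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = bit_planes(image)
print(np.array_equal(from_bit_planes(planes), image))   # True: lossless decomposition

# Reversing the plane order before reassembly is a toy stand-in for bit-plane
# shuffling; a real scheme would also scramble pixels within each plane.
shuffled = from_bit_planes(planes[::-1])
```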

  8. Decomposition of Sodium Tetraphenylborate

    SciTech Connect

    Barnes, M.J.

    1998-11-20

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.

  9. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  10. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  11. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download. PMID:26465549

  12. Turning Tangent Empirical Mode Decomposition: A Framework for Mono- and Multivariate Signals

    PubMed Central

    Fleureau, Julien; Nunes, Jean-Claude; Kachenoura, Amar; Albera, Laurent; Senhadji, Lotfi

    2011-01-01

    A novel Empirical Mode Decomposition (EMD) algorithm, called 2T-EMD, for both mono- and multivariate signals is proposed in this paper. It differs from the other approaches by its computational lightness and its algorithmic simplicity. The method is essentially based on a redefinition of the signal mean envelope, computed thanks to new characteristic points, which offers the possibility to decompose multivariate signals without any projection. The scope of application of the novel algorithm is specified, and a comparison of the 2T-EMD technique with classical methods is performed on various simulated mono- and multivariate signals. The monovariate behaviour of the proposed method on noisy signals is then validated by decomposing a fractional Gaussian noise and an application to real life EEG data is finally presented. PMID:22003273

  13. Analysis and Application of LIDAR Waveform Data Using a Progressive Waveform Decomposition Method

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Zhang, Z.; Hu, X.; Li, Z.

    2011-09-01

    Due to the rich information contained in the full waveform of airborne LiDAR (light detection and ranging) data, the analysis of full waveforms has been an active area of LiDAR applications. It is possible to digitally sample and store the entire reflected waveform of small-footprint systems instead of only discrete point clouds. Decomposition of waveform data, a key step in waveform data analysis, can be categorized into two typical approaches: 1) Gaussian modelling methods, such as the non-linear least-squares (NLS) algorithm and maximum likelihood estimation using the Expectation Maximization (EM) algorithm; 2) pulse detection methods, such as the Average Square Difference Function (ASDF). However, the Gaussian modelling methods rely strongly on initial parameters, whereas the ASDF omits the parameter information of the waveform. In this paper, we propose a fast algorithm, the Progressive Waveform Decomposition (PWD) method, to extract local maxima, fit each echo with a Gaussian function, and calculate other parameters from the raw waveform data. On the one hand, experiments are carried out to evaluate the PWD method and the results demonstrate its robustness and efficiency. On the other hand, with the PWD parametric analysis of the full waveform instead of a 3D point cloud, some special applications are investigated afterward.
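
    The PWD parameter-estimation rules themselves are not reproduced here; this hedged sketch shows the same general idea on a synthetic waveform: detect local maxima and derive amplitude, centre, and width estimates (width from the half-maximum level) that could seed a subsequent Gaussian fit. The echo positions, widths, and thresholds are made up.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

# Synthetic small-footprint return: a canopy echo, a ground echo, and noise.
t = np.arange(0.0, 200.0, 1.0)                     # time bins (ns)
wave = (0.8 * np.exp(-0.5 * ((t - 60.0) / 6.0) ** 2)
        + 1.0 * np.exp(-0.5 * ((t - 140.0) / 4.0) ** 2)
        + 0.01 * np.random.default_rng(0).standard_normal(t.size))

peaks, _ = find_peaks(wave, prominence=0.2)
widths, _, _, _ = peak_widths(wave, peaks, rel_height=0.5)     # FWHM in bins

for p, w in zip(peaks, widths):
    sigma = w / (2.0 * np.sqrt(2.0 * np.log(2.0)))             # FWHM -> Gaussian sigma
    print(f"echo at bin {p}: amplitude {wave[p]:.2f}, sigma {sigma:.2f} bins")
```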

  14. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  15. Orthogonal tensor decompositions

    SciTech Connect

    Tamara G. Kolda

    2000-03-01

    The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].

  16. A parallel Householder tridiagonalization stratagem using scattered square decomposition

    NASA Technical Reports Server (NTRS)

    Chang, H. Y.; Utku, S.; Salama, M.; Drapp, D.

    1988-01-01

    The parallel stratagem in this paper uses scattered square decomposition, introduced by Fox (1985), for its data assignment and then exploits parallelism in the solution steps of the sequential Householder tridiagonalization algorithm. One may condense a real symmetric full matrix A of order n into a tridiagonal form by the stratagem in concurrent machines where N (= D^2) processors are used. Expressions for efficiency and speedup are given for the evaluation of the stratagem. An alternative stratagem which requires less data transmission but more computations is also discussed. The results show that the Householder method of tridiagonalization may be implemented on a concurrent machine efficiently by scattered square decomposition provided that the number of matrix elements contained in each processor is much larger than the number of processors of the concurrent machine, and the ratio of the time to transmit one data item from one processor to any other processor to the time to perform a floating-point arithmetic operation is small enough.
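
    For reference, a sequential numpy sketch of the Householder tridiagonalization that the stratagem parallelizes; the scattered square data distribution and inter-processor communication are not shown, and the test matrix is arbitrary.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form via Householder reflectors."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        v = x.copy()
        v[0] -= alpha
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue                              # column already in the desired form
        v /= norm_v
        # Apply H = I - 2 v v^T from the left (rows k+1..) and the right (cols k+1..).
        A[k + 1:, k:] -= 2.0 * np.outer(v, v @ A[k + 1:, k:])
        A[:, k + 1:] -= 2.0 * np.outer(A[:, k + 1:] @ v, v)
    return A

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
S = M + M.T                                       # symmetric test matrix
T = householder_tridiagonalize(S)
# The orthogonal similarity transform preserves the spectrum.
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S)))
```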

  17. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  18. Analyzing algorithms for nonlinear and spatially nonuniform phase shifts in the liquid crystal point diffraction interferometer. 1998 summer research program for high school juniors at the University of Rochester`s Laboratory for Laser Energetics: Student research reports

    SciTech Connect

    Jain, N.

    1999-03-01

    Phase-shifting interferometry has many advantages, and the phase shifting nature of the Liquid Crystal Point Diffraction Interferometer (LCPDI) promises to provide significant improvement over other current OMEGA wavefront sensors. However, while phase-shifting capabilities improve its accuracy as an interferometer, phase-shifting itself introduces errors. Phase-shifting algorithms are designed to eliminate certain types of phase-shift errors, and it is important to choose an algorithm that is best suited for use with the LCPDI. Using polarization microscopy, the authors have observed a correlation between LC alignment around the microsphere and fringe behavior. After designing a procedure to compare phase-shifting algorithms, they were able to predict the accuracy of two particular algorithms through computer modeling of device-specific phase-shift errors.
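
    As a concrete baseline for such comparisons, the snippet below applies the standard four-step phase-shifting algorithm (nominal pi/2 steps) to simulated fringes with a small phase-shift nonlinearity; it is not the LCPDI-specific algorithm studied in the report, and the wavefront and error magnitude are illustrative.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Standard four-step algorithm for nominal pi/2 phase steps."""
    return np.arctan2(I4 - I2, I1 - I3)

# Simulated interferograms with a known wavefront phi and a small phase-shift error.
x, y = np.meshgrid(np.linspace(-1.0, 1.0, 256), np.linspace(-1.0, 1.0, 256))
phi = 2.0 * np.pi * (0.5 * x**2 + 0.3 * y)              # "true" wavefront (rad)
shift_error = 0.05                                       # 5% nonlinearity, illustrative
frames = [1.0 + 0.8 * np.cos(phi + k * (np.pi / 2.0) * (1.0 + shift_error))
          for k in range(4)]

phi_est = four_step_phase(*frames)
residual = np.angle(np.exp(1j * (phi_est - phi)))        # wrapped phase error (rad)
print(float(np.max(np.abs(residual))))                   # grows with the shift error
```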

  19. Mueller matrix differential decomposition.

    PubMed

    Ortega-Quijano, Noé; Arce-Diego, José Luis

    2011-05-15

    We present a Mueller matrix decomposition based on the differential formulation of the Mueller calculus. The differential Mueller matrix is obtained from the macroscopic matrix through an eigenanalysis. It is subsequently resolved into the complete set of 16 differential matrices that correspond to the basic types of optical behavior for depolarizing anisotropic media. The method is successfully applied to the polarimetric analysis of several samples. The differential parameters enable one to perform an exhaustive characterization of anisotropy and depolarization. This decomposition is particularly appropriate for studying media in which several polarization effects take place simultaneously. PMID:21593943
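
    A hedged sketch of the differential (logarithmic) Mueller decomposition as it is commonly formulated: take the matrix logarithm of the Mueller matrix and split it into G-antisymmetric and G-symmetric parts using the Minkowski-like metric G = diag(1, -1, -1, -1); the example retarder matrix is illustrative and not taken from the paper.

```python
import numpy as np
from scipy.linalg import logm

G = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski-like metric

def differential_decomposition(M):
    """Split the differential Mueller matrix into polarizing and depolarizing parts."""
    m = logm(M).real
    m_polar = 0.5 * (m - G @ m.T @ G)       # G-antisymmetric (non-depolarizing) part
    m_depol = 0.5 * (m + G @ m.T @ G)       # G-symmetric (depolarizing) part
    return m_polar, m_depol

# Example: a non-depolarizing linear retarder with retardance pi/8 (illustrative).
delta = np.pi / 8.0
M_ret = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, np.cos(delta),  np.sin(delta)],
                  [0.0, 0.0, -np.sin(delta), np.cos(delta)]])

m_polar, m_depol = differential_decomposition(M_ret)
print(np.round(m_polar, 3))
print(np.round(m_depol, 3))                 # ~zero: no depolarization in this example
```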

  20. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  1. Hydrazine decomposition and other reactions

    NASA Technical Reports Server (NTRS)

    Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)

    1978-01-01

    This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.

  2. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  3. Hierarchical decomposition model for reconfigurable architecture

    NASA Astrophysics Data System (ADS)

    Erdogan, Simsek; Wahab, Abdul

    1996-10-01

    This paper introduces a systematic approach for abstract modeling of VLSI digital systems using a hierarchical decomposition process and HDL. In particular, the modeling of the back propagation neural network on massively parallel reconfigurable hardware is used to illustrate the design process rather than toy examples. Based on the design specification of the algorithm, a functional model is developed through successive refinement and decomposition for execution on the reconfiguration machine. First, a top-level block diagram of the system is derived. Then, a schematic sheet of the corresponding structural model is developed to show the interconnections of the main functional building blocks. Next, the functional blocks are decomposed iteratively as required. Finally, the blocks are modeled using HDL and verified against the block specifications.

  4. Tensor decomposition in potential energy surface representations.

    PubMed

    Ostrowski, Lukas; Ziegler, Benjamin; Rauhut, Guntram

    2016-09-14

    In order to reduce the operation count in vibration correlation methods, e.g., vibrational configuration interaction (VCI) theory, a tensor decomposition approach has been applied to the analytical representations of multidimensional potential energy surfaces (PESs). It is shown that a decomposition of the coefficients within the individual n-mode coupling terms in a multimode expansion of the PES is feasible and allows for convenient contractions of one-dimensional integrals with these newly determined factor matrices. Deviations in the final VCI frequencies of a set of small molecules were found to be negligible once the rank of the factor matrices is chosen appropriately. Recommendations for meaningful ranks are provided and different algorithms are discussed. PMID:27634247

  5. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    SciTech Connect

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces matched with appropriate meshing algorithms, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is a part of the feature based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination to extract decomposition features, (2) Cutting Surfaces Generation to form the "tailored" cutting surfaces, (3) Body Decomposition to get the imprinted volumes; and (4) Meshing Algorithm Assignment to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm that is extended to be more general. Results are demonstrated over several parts with complicated topology and geometry.

  6. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. This is the first time Schur decomposition has been used to embed a color image watermark in a color host image, as distinct from using a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find that there is a strong correlation between the second row, first column element and the third row, first column element. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step of the SVD, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling and sharpening, among others. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.

  7. Minimax eigenvector decomposition for data hiding

    NASA Astrophysics Data System (ADS)

    Davidson, Jennifer

    2005-09-01

    Steganography is the study of hiding information within a covert channel in order to transmit a secret message. Any public media such as image data, audio data, or even file packets, can be used as a covert channel. This paper presents an embedding algorithm that hides a message in an image using a technique based on a nonlinear matrix transform called the minimax eigenvector decomposition (MED). The MED is a minimax algebra version of the well-known singular value decomposition (SVD). Minimax algebra is a matrix algebra based on the algebraic operations of maximum and addition, developed initially for use in operations research and extended later to represent a class of nonlinear image processing operations. The discrete mathematical morphology operations of dilation and erosion, for example, are contained within minimax algebra. The MED is much quicker to compute than the SVD and avoids the numerical computational issues of the SVD because the operations involve only integer addition, subtraction, and comparison. We present the algorithm to embed data using the MED, show examples applied to image data, and discuss limitations and advantages as compared with another similar algorithm.

  8. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  9. Lidar signal de-noising by singular value decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Huanxue; Liu, Jianguo; Zhang, Tianshu

    2014-11-01

    Signal de-noising remains an important problem in lidar signal processing. This paper presents a de-noising method based on singular value decomposition. Experimental results on lidar simulated signal and real signal show that the proposed algorithm not only improves the signal-to-noise ratio effectively, but also preserves more detail information.

  10. Spectral decomposition of a matrix using the generalized sign matrix

    NASA Technical Reports Server (NTRS)

    Denman, E. D.; Leyva-Ramos, J.

    1981-01-01

    An algorithm for spectral decomposition is presented which does not require knowledge of eigenvalues and eigenvectors. A set of eigenprojectors is defined that covers the entire spectrum of a matrix, and special attention is given to the projector associated with the zero eigenvalue. Some useful applications are discussed in the paper.
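
    One common way to realize this idea, sketched under assumptions rather than following the paper's generalized sign matrix (in particular, the zero-eigenvalue projector is not handled): compute the matrix sign function by a Newton iteration and form spectral projectors onto the eigenspaces with positive and negative real-part eigenvalues. The test matrix is constructed with known eigenvalues purely for illustration.

```python
import numpy as np

def matrix_sign(A, n_iter=50):
    """Newton iteration S <- (S + S^{-1}) / 2 for the matrix sign function."""
    S = np.array(A, dtype=float)
    for _ in range(n_iter):
        S = 0.5 * (S + np.linalg.inv(S))
    return S

# Test matrix with known eigenvalues (none on the imaginary axis).
rng = np.random.default_rng(0)
V = rng.standard_normal((5, 5))
A = V @ np.diag([-3.0, -1.0, 2.0, 4.0, 5.0]) @ np.linalg.inv(V)

S = matrix_sign(A)
P_plus = 0.5 * (np.eye(5) + S)     # projector onto the right-half-plane eigenspace
P_minus = 0.5 * (np.eye(5) - S)    # projector onto the left-half-plane eigenspace

print(np.allclose(P_plus @ P_plus, P_plus))       # idempotent, as a projector should be
print(round(float(np.trace(P_plus)), 6))          # 3.0: three eigenvalues with positive real part
```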

  11. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the
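
    The iterative, variance-weighted method of the paper is not reproduced here; the sketch below shows only the direct image-domain decomposition it improves upon, i.e., per-pixel inversion of a 2x2 system mapping two basis-material images to the low- and high-kVp images. The mixing coefficients and phantom are made-up stand-ins for calibrated values.

```python
import numpy as np

# Illustrative (uncalibrated) 2x2 mixing matrix: signal of pure "water" and "bone"
# basis materials in the low- and high-kVp images. Real values come from calibration.
A = np.array([[1.00, 1.55],     # low-kVp  response to (water, bone)
              [1.00, 1.20]])    # high-kVp response to (water, bone)

def direct_decompose(low_kvp, high_kvp):
    """Per-pixel direct material decomposition by inverting the 2x2 mixing system."""
    meas = np.stack([low_kvp.ravel(), high_kvp.ravel()])    # shape (2, n_pixels)
    basis = np.linalg.solve(A, meas)                        # shape (2, n_pixels)
    return basis[0].reshape(low_kvp.shape), basis[1].reshape(low_kvp.shape)

# Tiny synthetic phantom: water background with a square "bone" insert, plus noise.
water = np.ones((64, 64))
bone = np.zeros((64, 64))
bone[24:40, 24:40] = 0.5
rng = np.random.default_rng(0)
low = A[0, 0] * water + A[0, 1] * bone + 0.01 * rng.standard_normal(water.shape)
high = A[1, 0] * water + A[1, 1] * bone + 0.01 * rng.standard_normal(water.shape)

water_img, bone_img = direct_decompose(low, high)
# Note how the inversion amplifies noise -- the motivation for the iterative method.
print(round(float(bone_img[32, 32]), 2), round(float(bone_img[5, 5]), 2))   # ~0.5 inside, ~0 outside
```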

  12. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  13. A fast, space-efficient average-case algorithm for the 'Greedy' Triangulation of a point set, and a proof that the Greedy Triangulation is not approximately optimal

    NASA Technical Reports Server (NTRS)

    Manacher, G. K.; Zobrist, A. L.

    1979-01-01

    The paper addresses the problem of how to find the Greedy Triangulation (GT) efficiently in the average case. It is noted that the problem is open whether there exists an efficient approximation algorithm to the Optimum Triangulation. It is first shown how in the worst case, the GT may be obtained in time O(n^3) and space O(n). Attention is then given to how the algorithm may be slightly modified to produce a time O(n^2), space O(n) solution in the average case. Finally, it is mentioned that Gilbert has found a worst case solution using totally different techniques that require space O(n^2) and time O(n^2 log n).
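
    For reference, the Greedy Triangulation named above is itself simple to state: repeatedly accept the shortest remaining edge that does not properly cross an already accepted edge. The brute-force sketch below illustrates that definition only and ignores the average-case speedups that are the paper's contribution; function names are illustrative.

```python
# Brute-force Greedy Triangulation sketch: accept candidate edges in order of
# increasing length whenever they do not properly cross an accepted edge.
from itertools import combinations
import math

def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _properly_cross(p1, p2, q1, q2):
    # True if the open segments p1-p2 and q1-q2 cross; shared endpoints allowed.
    if len({p1, p2, q1, q2}) < 4:
        return False
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def greedy_triangulation(points):
    """points: list of (x, y) tuples in general position."""
    edges = []
    for a, b in sorted(combinations(points, 2), key=lambda e: math.dist(*e)):
        if not any(_properly_cross(a, b, c, d) for c, d in edges):
            edges.append((a, b))
    return edges

# Example: greedy_triangulation([(0, 0), (1, 0), (0, 1), (1, 1), (0.4, 0.6)])
```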

  14. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  15. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic, robust for the analysis of images, videos and high dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in the image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are only limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be

  16. Low complexity interference alignment algorithms for desired signal power maximization problem of MIMO channels

    NASA Astrophysics Data System (ADS)

    Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang

    2012-12-01

    In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. Proper users' precoders and decoders are designed through a desired signal power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power together as the new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has closed-form solution with an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm which consists of two parts. As the first part, the algorithm iterates with Householder transformation to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one dimensional optimization subproblems. From any initial point, this algorithm obtains precoders and decoders with low leakage interference in short time. In the second part, to exploit the advantage of MAMA, it continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that in one iteration generally both proposed algorithms have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve similar performances as the MSP algorithm with less executing time, and show better performances than the existing alternating minimization algorithm in terms of sum rate. Besides, from the view of convergence rate, simulation results show that MAMA enjoys the fastest speed with respect to a certain sum rate value, while the hybrid algorithm converges fastest to eliminate interference.

  17. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  18. 3D building reconstruction from ALS data using unambiguous decomposition into elementary structures

    NASA Astrophysics Data System (ADS)

    Jarząbek-Rychard, M.; Borkowski, A.

    2016-08-01

    The objective of the paper is to develop an automated method that enables the recognition and semantic interpretation of topological building structures. The novelty of the proposed modeling approach is an unambiguous decomposition of complex objects into predefined simple parametric structures, resulting in the reconstruction of one topological unit without independent overlapping elements. The aim of the data processing chain is to generate complete polyhedral models at LOD2 with an explicit topological structure and semantic information. The algorithms are performed on 3D point clouds acquired by airborne laser scanning. The presented methodology combines data-based information reflected in an attributed roof topology graph with common knowledge about buildings stored in a library of elementary structures. In order to achieve an appropriate balance between reconstruction precision and visualization aspects, the implemented library contains a set of structure-dependent soft modeling rules instead of strictly defined geometric primitives. The proposed modeling algorithm starts with roof plane extraction performed by the segmentation of building point clouds, followed by topology identification and recognition of predefined structures. We evaluate the performance of the novel procedure by the analysis of the modeling accuracy and the degree of modeling detail. The assessment according to the validation methods standardized by the International Society for Photogrammetry and Remote Sensing shows that the completeness of the algorithm is above 80%, whereas the correctness exceeds 98%.

  19. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H2, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I2. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I2 phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I2 phase is finally distilled to separate the HI and I2. The HI is recycled to the reactor; the I2 is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  20. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  1. Overlapping Community Detection based on Network Decomposition.

    PubMed

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-01-01

    Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering methods and the more recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition helps reduce the computation time, and the elimination of noise links improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms. PMID:27066904

  2. Overlapping Community Detection based on Network Decomposition

    NASA Astrophysics Data System (ADS)

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-04-01

    Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering methods and the more recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition helps reduce the computation time, and the elimination of noise links improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.

  3. Overlapping Community Detection based on Network Decomposition

    PubMed Central

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-01-01

    Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering methods and the more recently proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition helps reduce the computation time, and the elimination of noise links improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms. PMID:27066904

  4. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  5. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    SciTech Connect

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
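
    To make the "exponential in the width" remark concrete, here is a compact sketch (not the INDDGO implementation) of the textbook dynamic program for maximum weighted independent set over a given rooted tree decomposition; bags, children, adj, and w are illustrative data structures assumed by this sketch.

```python
# Sketch of DP over a rooted tree decomposition for max weighted independent set.
# bags: node -> frozenset of graph vertices; children: node -> list of child nodes;
# adj: vertex -> set of neighbors; w: vertex -> weight. Validity of the
# decomposition (every vertex and edge covered, connectivity condition) is assumed.
from itertools import combinations

def _independent_subsets(bag, adj):
    verts = list(bag)
    for r in range(len(verts) + 1):
        for combo in combinations(verts, r):
            s = frozenset(combo)
            if all(v not in adj[u] for u, v in combinations(s, 2)):
                yield s

def mwis_tree_decomposition(root, bags, children, adj, w):
    def weight(s):
        return sum(w[v] for v in s)

    def solve(t):
        # table[S] = best weight of an independent set I in the subtree rooted
        # at t whose intersection with bags[t] is exactly S.
        child_tables = [(c, solve(c)) for c in children.get(t, [])]
        table = {}
        for s in _independent_subsets(bags[t], adj):
            total = weight(s)
            for c, ctab in child_tables:
                shared = bags[t] & bags[c]
                # Best child extension that agrees with s on the shared vertices;
                # subtract their weight to avoid double counting.
                total += max(val - weight(sc & shared)
                             for sc, val in ctab.items()
                             if sc & shared == s & shared)
            table[s] = total
        return table

    return max(solve(root).values())
```

    Each bag of width b contributes up to 2^b table entries, which is exactly the exponential dependence on decomposition width noted in the abstract.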

  6. Erbium hydride decomposition kinetics.

    SciTech Connect

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E{sub A} {approx} 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
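
    For readers unfamiliar with Redhead's method, the sketch below shows the standard first-order peak-maximum relation it is based on; the peak temperature, heating rate, and pre-exponential factor used here are made-up placeholders, not values from this report.

```python
# Sketch of Redhead's first-order peak-maximum analysis for TDS data.
# T_peak, beta (heating rate), and nu (attempt frequency) below are
# placeholders, not values taken from this report.
import math

R_GAS = 1.987e-3  # kcal/(mol*K)

def redhead_activation_energy(T_peak, beta, nu=1e13):
    """Redhead approximation, reasonable for nu/beta in roughly 1e8-1e13 K^-1."""
    return R_GAS * T_peak * (math.log(nu * T_peak / beta) - 3.64)

# With these made-up inputs the estimate lands in the tens-of-kcal/mol range
# discussed above; the report's actual peak temperatures are not reproduced here.
print(f"E_A ~ {redhead_activation_energy(T_peak=825.0, beta=1.0):.1f} kcal/mol")
```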

  7. Resolving the sign ambiguity in the singular value decomposition.

    SciTech Connect

    Bro, Rasmus; Acar, Evrim; Kolda, Tamara Gibson

    2007-10-01

    Many modern data analysis methods involve computing a matrix singular value decomposition (SVD) or eigenvalue decomposition (EVD). Principal components analysis is the time-honored example, but more recent applications include latent semantic indexing, hypertext induced topic selection (HITS), clustering, classification, etc. Though the SVD and EVD are well-established and can be computed via state-of-the-art algorithms, it is not commonly mentioned that there is an intrinsic sign indeterminacy that can significantly impact the conclusions and interpretations drawn from their results. Here we provide a solution to the sign ambiguity problem and show how it leads to more sensible solutions.
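
    A simplified sketch of the kind of sign convention the paper argues for is shown below: each left/right singular-vector pair is flipped jointly so that it points toward the bulk of the data it describes. This is a paraphrase of the general idea under simplifying assumptions, not the authors' exact procedure.

```python
# Simplified sign-correction sketch: orient each singular-vector pair toward the
# dominant direction of the data columns; flipping the pair jointly leaves the
# factorization unchanged.
import numpy as np

def sign_corrected_svd(X):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    for k in range(len(s)):
        proj = U[:, k] @ X                       # projections of data columns onto u_k
        score = np.sum(np.sign(proj) * proj**2)  # signed, squared projections
        if score < 0:
            U[:, k] *= -1
            Vt[k, :] *= -1                       # flip the pair jointly
    return U, s, Vt

# Sanity check: the reconstruction is unaffected by the sign choice.
X = np.random.default_rng(0).normal(size=(6, 4))
U, s, Vt = sign_corrected_svd(X)
assert np.allclose(U * s @ Vt, X)
```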

  8. Direct Sum Decomposition of Groups

    ERIC Educational Resources Information Center

    Thaheem, A. B.

    2005-01-01

    Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…

  9. Robust Face Clustering Via Tensor Decomposition.

    PubMed

    Cao, Xiaochun; Wei, Xingxing; Han, Yahong; Lin, Dongdai

    2015-11-01

    Face clustering is a key component either in image managements or video analysis. Wild human faces vary with the poses, expressions, and illumination changes. All kinds of noises, like block occlusions, random pixel corruptions, and various disguises may also destroy the consistency of faces referring to the same person. This motivates us to develop a robust face clustering algorithm that is less sensitive to these noises. To retain the underlying structured information within facial images, we use tensors to represent faces, and then accomplish the clustering task based on the tensor data. The proposed algorithm is called robust tensor clustering (RTC), which firstly finds a lower-rank approximation of the original tensor data using a L1 norm optimization function. Because L1 norm does not exaggerate the effect of noises compared with L2 norm, the minimization of the L1 norm approximation function makes RTC robust. Then, we compute high-order singular value decomposition of this approximate tensor to obtain the final clustering results. Different from traditional algorithms solving the approximation function with a greedy strategy, we utilize a nongreedy strategy to obtain a better solution. Experiments conducted on the benchmark facial datasets and gait sequences demonstrate that RTC has better performance than the state-of-the-art clustering algorithms and is more robust to noises. PMID:25546869

  10. An Automated Three-Dimensional Detection and Segmentation Method for Touching Cells by Integrating Concave Points Clustering and Random Walker Algorithm

    PubMed Central

    Gong, Hui; Chen, Shangbin; Zhang, Bin; Ding, Wenxiang; Luo, Qingming; Li, Anan

    2014-01-01

    Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: 1) concave points clustering to determine the seed points of touching cells; and 2) random walker segmentation to obtain cell contours. Also, we have evaluated the performance of our proposed method with several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system, and the datasets include closely touching cells. Comparing with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness. PMID:25111442

  11. Toluene and benzyl decomposition mechanisms: elementary reactions and kinetic simulations.

    PubMed

    Derudi, Marco; Polino, Daniela; Cavallotti, Carlo

    2011-12-28

    The high temperature decomposition kinetics of toluene and benzyl were investigated by combining a kinetic analysis with the ab initio/master equation study of new reaction channels. It was found that similarly to toluene, which decomposes to benzyl and phenyl losing atomic hydrogen and methyl, also benzyl decomposition proceeds through two channels with similar products. The first leads to the formation of fulvenallene and hydrogen and has already been investigated in detail in recent publications. In this work it is proposed that benzyl can decompose also through a second decomposition channel to form benzyne and methyl. The channel specific kinetic constants of benzyl decomposition were determined by integrating the RRKM/master equation over the C(7)H(7) potential energy surface. The energies of wells and saddle points were determined at the CCSD(T) level on B3LYP/6-31+G(d,p) structures. A kinetic mechanism was then formulated, which comprises the benzyl and toluene decomposition reactions together with a recently proposed fulvenallene decomposition mechanism, the decomposition kinetics of the fulvenallenyl radical, and some reactions describing the secondary chemistry originated by the decomposition products. The kinetic mechanism so obtained was used to simulate the production of H atoms measured in a wide pressure and temperature range using different experimental setups. The calculated and experimental data are in good agreement. Kinetic constants of the new reaction channels here examined are reported as a function of temperature at different pressures. The mechanism here proposed is not compatible with the assumption often used in literature kinetic mechanisms that benzyl decomposition can be effectively described through a lumped reaction whose products are the cyclopentadienyl radical and acetylene.

  12. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    A general computer algorithm was developed for the construction of contour plots. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme: the points in the plane are connected by straight-line segments to form a set of triangles. The program is written in FORTRAN IV.
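
    A minimal sketch of the interpolation step such a contouring scheme performs is given below: within each triangle, the crossing of a contour level is located by linear interpolation along the edges it straddles. The function and argument names are illustrative; this is not the FORTRAN IV program itself.

```python
# Trace one contour level across a triangulation of irregular points by linear
# interpolation along triangle edges that straddle the level.
def contour_segments(points, values, triangles, level):
    """points: list of (x, y); values: list of scalars; triangles: index triples.
    Returns line segments ((x1, y1), (x2, y2)) approximating the contour."""
    segments = []
    for tri in triangles:
        crossings = []
        for i, j in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            vi, vj = values[i], values[j]
            if (vi - level) * (vj - level) < 0:       # edge straddles the level
                t = (level - vi) / (vj - vi)          # linear interpolation factor
                xi, yi = points[i]
                xj, yj = points[j]
                crossings.append((xi + t * (xj - xi), yi + t * (yj - yi)))
        if len(crossings) == 2:
            segments.append(tuple(crossings))
    return segments
```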

  13. Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.

    2001-01-01

    An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low values of source frequencies, but at higher values of source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than that required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring for domain decomposition formulation to achieve parallel computation, where different substructures are handled by different parallel processors.

  14. Decomposition in northern Minnesota peatlands

    SciTech Connect

    Farrish, K.W.

    1985-01-01

    Decomposition in peatlands was investigated in northern Minnesota. Four sites, an ombrotrophic raised bog, an ombrotrophic perched bog and two groundwater minerotrophic fens, were studied. Decomposition rates of peat and paper were estimated using mass-loss techniques. Environmental and substrate factors that were most likely to be responsible for limiting decomposition were monitored. Laboratory incubation experiments complemented the field work. Mass-loss over one year in one of the bogs ranged from 11 percent in the upper 10 cm of hummocks to 1 percent at 60 to 100 cm depth in hollows. Regression analysis of the data for that bog predicted no mass-loss below 87 cm. Decomposition estimates on an area basis were 2720 and 6460 kg/ha yr for the two bogs; 17,000 and 5900 kg/ha yr for the two fens. Environmental factors found to limit decomposition in these peatlands were reducing/anaerobic conditions below the water table and cool peat temperatures. Substrate factors found to limit decomposition were low pH, high content of resistant organics such as lignin, and shortages of available N and K. Greater groundwater influence was found to favor decomposition through raising the pH and perhaps by introducing limited amounts of dissolved oxygen.

  15. A decomposition strategy for thermoeconomic optimization

    SciTech Connect

    El-Sayed, Y.M. )

    1989-09-01

    An optimal thermal design of a considered system configuration is conveniently decided when the system is modeled as made up of one thermodynamic subsystem and of the essential number of design subsystems. The thermodynamic subsystem decides the performance of the components and the design subsystems decide their best matching geometry and costs. An optimizer directs all decisions to an extremum of a given objective function. This decomposition strategy is illustrated by investigating the optimal values of seven decision design variables for a regenerative gas turbine power cycle when a cost-objective function is minimized. The results seen from the point of view of second law analysis and costing are discussed.

  16. Soil Moisture Estimation under Vegetation Applying Polarimetric Decomposition Techniques

    NASA Astrophysics Data System (ADS)

    Jagdhuber, T.; Schön, H.; Hajnsek, I.; Papathanassiou, K. P.

    2009-04-01

    Polarimetric decomposition techniques and inversion algorithms are developed and applied on the OPAQUE data set acquired in spring 2007 to investigate their potential and limitations for soil moisture estimation. A three component model-based decomposition is used together with an eigenvalue decomposition in a combined approach to invert for soil moisture over bare and vegetated soils at L-band. The applied approach indicates a feasible capability to invert soil moisture after decomposing volume and ground scattering components over agricultural land surfaces. But there are still deficiencies in modeling the volume disturbance. The results show a root mean square error below 8.5 vol.-% for the winter crop fields (winter wheat, winter triticale and winter barley) and below 11.5 vol.-% for the summer crop field (summer barley), whereas all fields have a distinct volume layer of 55-85 cm height.

  17. Domain decomposition methods for a parallel Monte Carlo transport code

    SciTech Connect

    Alme, H J; Rodrigue, G H; Zimmerman, G B

    1999-01-27

    Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.

  18. Monte Carlo simulations for spinodal decomposition

    SciTech Connect

    Sander, E.; Wanner, T.

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.

  19. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

  20. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increase with the number of the measured vectors and with the accuracy of their measurement.
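
    As a concrete illustration of the closed-form, polar-decomposition route described above, the sketch below builds the weighted profile matrix from measured/reference unit-vector pairs and extracts the closest orthogonal (proper rotation) matrix via an SVD; variable names are illustrative and this is not the paper's code.

```python
# Weighted least-squares attitude from vector observations via the SVD-based
# polar decomposition of the attitude profile matrix.
import numpy as np

def attitude_from_vectors(b_meas, r_ref, weights):
    """b_meas, r_ref: arrays of shape (n, 3) with unit vectors in the body and
    reference frames; weights: shape (n,) measurement weights."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_meas, r_ref))
    U, _, Vt = np.linalg.svd(B)
    # U @ Vt is the closest orthogonal matrix (the polar factor); forcing the
    # determinant to +1 restricts the answer to a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```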

  1. Catalyst for sodium chlorate decomposition

    NASA Technical Reports Server (NTRS)

    Wydeven, T.

    1972-01-01

    Production of oxygen by rapid decomposition of cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as catalyst to accelerate reaction. Temperature conditions and chemical processes involved are described.

  2. Lignocellulose decomposition by microbial secretions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Carbon storage in terrestrial ecosystems is contingent upon the natural resistance of plant cell wall polymers to rapid biological degradation. Nevertheless, certain microorganisms have evolved remarkable means to overcome this natural resistance. Lignocellulose decomposition by microorganisms com...

  3. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on Tucker decomposition which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e. multilinear rank) in an automatic way. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.

  4. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.

  5. Hierarchical decomposition of metabolic networks using k-modules.

    PubMed

    Reimers, Arne C

    2015-12-01

    The optimal solutions obtained by flux balance analysis (FBA) are typically not unique. Flux modules have recently been shown to be a very useful tool to simplify and decompose the space of FBA-optimal solutions. Since yield-maximization is sometimes not the primary objective encountered in vivo, we are also interested in understanding the space of sub-optimal solutions. Unfortunately, the flux modules are too restrictive and not suited for this task. We present a generalization, called k-module, which compensates the limited applicability of flux modules to the space of sub-optimal solutions. Intuitively, a k-module is a sub-network with low connectivity to the rest of the network. Recursive application of k-modules yields a hierarchical decomposition of the metabolic network, which is also known as branch decomposition in matroid theory. In particular, decompositions computed by existing methods, like the null-space-based approach, introduced by Poolman et al. [(2007) J. Theor. Biol. 249, 691-705] can be interpreted as branch decompositions. With k-modules we can now compare alternative decompositions of metabolic networks to the classical sub-systems of glycolysis, tricarboxylic acid (TCA) cycle, etc. They can be used to speed up algorithmic problems [theoretically shown for elementary flux modes (EFM) enumeration] and have the potential to present computational solutions in a more intuitive way independently from the classical sub-systems.

  6. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
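
    The buffering idea is easy to state in code. The sketch below (illustrative names, kernel normalization constants omitted) selects the points a subdomain needs, including those within one spatial or temporal bandwidth of its boundary, and evaluates a simple space-time kernel density at a query location.

```python
# Toy sketch of a buffered space-time subdomain plus an unnormalized space-time
# kernel density evaluation with Epanechnikov-type kernels. hs and ht are the
# spatial and temporal bandwidths.
import numpy as np

def buffered_mask(pts, box_min, box_max, hs, ht):
    """pts: (n, 3) array of (x, y, t). Returns a mask for points inside the
    subdomain box grown by hs in space and ht in time."""
    grow = np.array([hs, hs, ht])
    return np.all((pts >= box_min - grow) & (pts <= box_max + grow), axis=1)

def stkde(query, pts, hs, ht):
    """Unnormalized space-time kernel density at one (x, y, t) query point."""
    d2 = np.sum((pts[:, :2] - query[:2]) ** 2, axis=1) / hs ** 2
    dt2 = (pts[:, 2] - query[2]) ** 2 / ht ** 2
    ks = np.where(d2 < 1.0, 1.0 - d2, 0.0)   # spatial kernel
    kt = np.where(dt2 < 1.0, 1.0 - dt2, 0.0)  # temporal kernel
    return float(np.sum(ks * kt))
```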

  7. Decomposition of indwelling EMG signals

    PubMed Central

    Nawab, S. Hamid; Wotiz, Robert P.; De Luca, Carlo J.

    2008-01-01

    Decomposition of indwelling electromyographic (EMG) signals is challenging in view of the complex and often unpredictable behaviors and interactions of the action potential trains of different motor units that constitute the indwelling EMG signal. These phenomena create a myriad of problem situations that a decomposition technique needs to address to attain completeness and accuracy levels required for various scientific and clinical applications. Starting with the maximum a posteriori probability classifier adapted from the original precision decomposition system (PD I) of LeFever and De Luca (25, 26), an artificial intelligence approach has been used to develop a multiclassifier system (PD II) for addressing some of the experimentally identified problem situations. On a database of indwelling EMG signals reflecting such conditions, the fully automatic PD II system is found to achieve a decomposition accuracy of 86.0% despite the fact that its results include low-amplitude action potential trains that are not decomposable at all via systems such as PD I. Accuracy was established by comparing the decompositions of indwelling EMG signals obtained from two sensors. At the end of the automatic PD II decomposition procedure, the accuracy may be enhanced to nearly 100% via an interactive editor, a particularly significant fact for the previously indecomposable trains. PMID:18483170

  8. Thermal decomposition of ethylpentaborane in gas phase

    NASA Technical Reports Server (NTRS)

    Mcdonald, Glen E

    1956-01-01

    The thermal decomposition of ethylpentaborane at temperatures of 185 degrees to 244 degrees C is approximately a 1.5-order reaction. The products of the decomposition were hydrogen, methane, a nonvolatile boron hydride, and traces of decaborane. Measurements of the rate of decomposition of pentaborane showed that ethylpentaborane has a greater rate of decomposition than pentaborane.

  9. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  10. The path decomposition expansion and multidimensional tunneling

    NASA Astrophysics Data System (ADS)

    Auerbach, Assa; Kivelson, S.

    This paper consists of two main topics. (i) The path decomposition expansion: a new path integral technique which allows us to break configuration space into disjoint regions and express the dynamics of the full system in terms of its parts. (ii) The application of the PDX and semiclassical methods for solving quantum-mechanical tunneling problems in multidimensions. The result is a conceptually simple, computationally straightforward method for calculating tunneling effects in complicated multidimensional potentials, even in cases where the nature of the states in the classically allowed regions is nontrivial. Algorithms for computing tunneling effects in general classes of problems are obtained. In addition, we present the detailed solutions to three model problems of a tunneling coordinate coupled to a phonon. This enables us to define various well-controlled approximation schemes, which help to reduce the dimensions of complicated tunneling calculations in real physical systems.

  11. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
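
    A condensed sketch of this pipeline, under simplifying assumptions (the smoothing width, seeding heuristics, and thresholds below are made up), is shown next; scipy's curve_fit performs the Levenberg-Marquardt refinement in the unconstrained case.

```python
# Rough waveform-to-Gaussians sketch: smooth, seed Gaussians at inflection-point
# pairs, then refine the sum-of-Gaussians fit with Levenberg-Marquardt.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def sum_of_gaussians(t, *params):            # params = A1, mu1, s1, A2, mu2, s2, ...
    y = np.zeros_like(t, dtype=float)
    for A, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return y

def decompose_waveform(t, w, smooth_sigma=2.0):
    ws = gaussian_filter1d(w, smooth_sigma)
    curv = np.diff(np.sign(np.diff(ws, 2)))   # sign changes of the 2nd difference
    inflect = np.where(curv != 0)[0] + 1
    # Pair consecutive inflection points: midpoint -> position, spacing -> width.
    p0 = []
    for i, j in zip(inflect[:-1:2], inflect[1::2]):
        mu, s = 0.5 * (t[i] + t[j]), max(0.5 * (t[j] - t[i]), 1e-3)
        p0 += [ws[(i + j) // 2], mu, s]       # crude amplitude seed
    popt, _ = curve_fit(sum_of_gaussians, t, w, p0=p0, maxfev=10000)
    return popt.reshape(-1, 3)                # rows of (amplitude, center, width)
```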

  12. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from their known preferences. In a computation of the collaborative filtering, a singular value decomposition (SVD) is needed to reduce the size of a large scale matrix so that the burden for the next phase computation will be decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller sized matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute SVD toward a solution of an open competition called "Netflix Prize". The algorithm utilizes an iterative method so that the error of approximation improves in each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and it is shown to be efficient by experiment.
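
    For orientation, the iterative approximate SVD referenced above is commonly written as a stochastic gradient descent over the observed ratings only. The sketch below is a plain CPU version of that idea with illustrative hyperparameters; the paper's contribution is porting such an iteration to the GPU with CUDA.

```python
# Funk/Webb-style approximate SVD: learn low-rank user and item factors by SGD
# over the observed (user, item, rating) triples only.
import numpy as np

def funk_svd(ratings, n_users, n_items, k=20, lr=0.005, reg=0.02, epochs=30):
    """ratings: iterable of (user, item, value) triples with integer ids."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            # Simultaneous regularized updates (RHS uses the old factor values).
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q   # predicted rating for (u, i) is P[u] @ Q[i]
```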

  13. TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT

    SciTech Connect

    Niu, T; Dong, X; Petrongolo, M; Zhu, L

    2014-06-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative

  14. Ocean Models and Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Salas-de-Leon, D. A.

    2007-05-01

    Increasing computational capabilities and a better understanding of mathematical and physical systems have resulted in an increasing number of ocean models. Long ago, modelers were like a secret organization and recognized each other by using secret codes and languages that only a select group of people was able to recognize and understand. Access to computational systems was limited: on one hand, equipment and computer time were expensive and restricted, and on the other hand, they required advanced computational languages that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop), and/or have access to more sophisticated computational systems than those available for research in the early 80's. This resource availability has resulted in much broader access to all kinds of models. Today computer speed, time, and algorithms do not seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, in the same institution from one office to the next there are different models for the same phenomena, developed by different research members, and the results do not differ substantially since the equations are the same and the solving algorithms are similar. The algorithms and the grids, constructed with algorithms, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. Proper Orthogonal Decomposition is a technique that allows the number of variables to be reduced while keeping the model properties, so it can be a very useful tool for diminishing the computations that have to be performed on "small" computational systems, making sophisticated models available to a greater community.
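
    As a pointer for readers unfamiliar with the technique named in the title, the sketch below shows the snapshot-based form of Proper Orthogonal Decomposition: stack solution snapshots as columns, take an SVD, and keep the leading modes that capture a chosen fraction of the energy. Names and the energy threshold are illustrative.

```python
# Snapshot POD: the reduced basis is given by the leading left singular vectors
# of the (mean-subtracted) snapshot matrix.
import numpy as np

def pod_modes(snapshots, energy=0.99):
    """snapshots: array of shape (n_dof, n_snapshots)."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # modes needed to reach the target
    return U[:, :r], s[:r]                              # reduced basis and singular values
```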

  15. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
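
    The article's implementations are in the TI-92+ programming language; for orientation, here is a sketch of one of the listed routines (modified Gram-Schmidt orthonormalization) in ordinary Python, with illustrative names.

```python
# Modified Gram-Schmidt: orthonormalize the columns of A.
import numpy as np

def gram_schmidt(A):
    """A: matrix whose columns are assumed linearly independent."""
    Q = A.astype(float).copy()
    for j in range(Q.shape[1]):
        for i in range(j):
            Q[:, j] -= (Q[:, i] @ Q[:, j]) * Q[:, i]   # remove component along q_i
        Q[:, j] /= np.linalg.norm(Q[:, j])
    return Q
```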

  16. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time consuming process even if sparse matrix techniques and bypassing of nonlinear models calculation are used. A slight decrease in the time required for this task may be enabled on multi-core, multithread computers if the calculation of the mathematical models for the nonlinear elements as well as the stamp management of the sparse matrix entries are managed through concurrent processes. This numerical complexity can be further reduced via the circuit decomposition and parallel solution of blocks taking as a departure point the BBD matrix structure. This block-parallel approach may give a considerable profit though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents the easy-parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.

  17. Thermal decomposition of n-alkanes under supercritical conditions

    SciTech Connect

    Yu, J.; Eser, S.

    1996-10-01

    The future aircraft fuel system may be operating at temperatures above the critical points of fuels. Currently there is very limited information on the thermal stability of hydrocarbon fuels under supercritical conditions. In this work, thermal stressing experiments on n-decane, n-dodecane, n-tetradecane, their mixtures, and an n-paraffin mixture, Norpar-13, were carried out under supercritical conditions. The experimental results indicated that the thermal decomposition of n-alkanes can be represented well by first-order kinetics. Pressure has significant effects on the first-order rate constant and product distribution in the near-critical region. The major products are a series of n-alkanes and 1-alkenes. The relative yields of n-alkanes and 1-alkenes depend on the reaction conditions. The first-order rate constants for the thermal decomposition of individual compounds in a mixture are different from those obtained for the decomposition of pure compounds.
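
    The first-order statement above translates into a one-line fit: plotting -ln(1 - conversion) against residence time gives a straight line whose slope is the rate constant. The data in the sketch below are made up for illustration only.

```python
# First-order kinetics fit: -ln(1 - conversion) = k * t, so k is the slope.
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])            # residence time, arbitrary units
conversion = np.array([0.0, 0.18, 0.33, 0.45, 0.55])   # fraction decomposed (made up)
y = -np.log(1.0 - conversion)                           # linear in t for first-order kinetics
k = np.polyfit(t, y, 1)[0]
print(f"first-order rate constant k ~ {k:.4f} per time unit")
```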

  18. Thermal decomposition products of butyraldehyde.

    PubMed

    Hatten, Courtney D; Kaskey, Kevin R; Warner, Brian J; Wright, Emily M; McCunn, Laura R

    2013-12-01

    The thermal decomposition of gas-phase butyraldehyde, CH3CH2CH2CHO, was studied in the 1300-1600 K range with a hyperthermal nozzle. Products were identified via matrix-isolation Fourier transform infrared spectroscopy and photoionization mass spectrometry in separate experiments. There are at least six major initial reactions contributing to the decomposition of butyraldehyde: a radical decomposition channel leading to propyl radical + CO + H; molecular elimination to form H2 + ethylketene; a keto-enol tautomerism followed by elimination of H2O producing 1-butyne; an intramolecular hydrogen shift and elimination producing vinyl alcohol and ethylene, a β-C-C bond scission yielding ethyl and vinoxy radicals; and a γ-C-C bond scission yielding methyl and CH2CH2CHO radicals. The first three reactions are analogous to those observed in the thermal decomposition of acetaldehyde, but the latter three reactions are made possible by the longer alkyl chain structure of butyraldehyde. The products identified following thermal decomposition of butyraldehyde are CO, HCO, CH3CH2CH2, CH3CH2CH=C=O, H2O, CH3CH2C≡CH, CH2CH2, CH2=CHOH, CH2CHO, CH3, HC≡CH, CH2CCH, CH3C≡CH, CH3CH=CH2, H2C=C=O, CH3CH2CH3, CH2=CHCHO, C4H2, C4H4, and C4H8. The first ten products listed are direct products of the six reactions listed above. The remaining products can be attributed to further decomposition reactions or bimolecular reactions in the nozzle.

  19. On the sequences ri, si, ti ∈ ℤ related to extended Euclidean algorithm and continued fractions

    NASA Astrophysics Data System (ADS)

    Muhammad, Khairun Nisak; Kamarulhaili, Hailiza

    2016-06-01

    The extended Euclidean algorithm is a practical technique used in many cryptographic applications, where it computes the sequences ri, si, ti ∈ ℤ that always satisfy ri = si·a + ti·b. The integer ri is the remainder at the ith step. The sequences si and ti arising from the extended Euclidean algorithm are equal, up to sign, to the convergents of the continued fraction expansion of a/b. The values of (ri, si, ti) satisfy various properties which are used to solve the shortest vector problem in representing point multiplications in elliptic curve cryptography, namely the GLV (Gallant, Lambert & Vanstone) integer decomposition method and the ISD (integer sub-decomposition) method. This paper extends the proofs of the existing properties of (ri, si, ti). We also generate new properties which are relevant to the sequences ri, si, ti ∈ ℤ. The concepts of the Euclidean algorithm, the extended Euclidean algorithm and continued fractions are intertwined, and the properties related to these concepts are proved. These properties, together with the existing properties of the sequence (ri, si, ti), are regarded as part and parcel of the building blocks of a new generation of efficient cryptographic protocols.
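
    A minimal sketch of the extended Euclidean algorithm generating the sequences (ri, si, ti) described above, with the invariant ri = si·a + ti·b checked at every step.

      def extended_euclid_sequences(a, b):
          r0, r1 = a, b
          s0, s1 = 1, 0
          t0, t1 = 0, 1
          rows = [(r0, s0, t0), (r1, s1, t1)]
          while r1 != 0:
              q = r0 // r1
              r0, r1 = r1, r0 - q * r1
              s0, s1 = s1, s0 - q * s1
              t0, t1 = t1, t0 - q * t1
              rows.append((r1, s1, t1))
          return rows                       # the last nonzero r_i is gcd(a, b)

      for r, s, t in extended_euclid_sequences(240, 46):
          assert r == s * 240 + t * 46      # the invariant r_i = s_i*a + t_i*b
          print(r, s, t)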

  20. Secondary decomposition reactions in nitramines

    NASA Astrophysics Data System (ADS)

    Schweigert, Igor

    Thermal decomposition of nitramines is known to proceed via multiple, competing reaction branches, some of which are triggered by secondary reactions between initial decomposition products and unreacted nitramine molecules. Better mechanistic understanding of these secondary reactions is needed to enable extrapolations of measured rates to higher temperatures and pressures relevant to shock ignition. I will present density functional theory (DFT) based simulations of nitramines that aim to re-evaluate known elementary mechanisms and seek alternative pathways in the gas and condensed phases. This work was supported by the Office of Naval Research, both directly and through the Naval Research Laboratory.

  1. A Decomposition Method Based on a Model of Continuous Change

    PubMed Central

    HORIUCHI, SHIRO; WILMOTH, JOHN R.; PLETCHER, SCOTT D.

    2008-01-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period. PMID:19110897
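
    The following Python sketch illustrates the general idea of a continuous-change decomposition: covariates are assumed to move linearly between the two populations, and each covariate's contribution is accumulated over many small steps so that the contributions sum (up to discretization error) to the total difference. The function and covariate values are hypothetical illustrations, not the authors' exact procedure or data.

      import numpy as np

      def decompose(f, x1, x2, n_steps=1000):
          x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
          delta = (x2 - x1) / n_steps
          contrib = np.zeros_like(x1)
          for step in range(n_steps):
              midpoint = x1 + (step + 0.5) * delta
              for j in range(len(x1)):
                  hi = midpoint.copy(); hi[j] = x1[j] + (step + 1) * delta[j]
                  lo = midpoint.copy(); lo[j] = x1[j] + step * delta[j]
                  contrib[j] += f(hi) - f(lo)   # covariate j's share over this small step
          return contrib

      # Hypothetical nonadditive "demographic measure" of two covariates.
      f = lambda x: x[0] * x[1] ** 0.5
      x1, x2 = [1.0, 4.0], [1.5, 9.0]
      c = decompose(f, x1, x2)
      print(c, c.sum(), f(np.asarray(x2)) - f(np.asarray(x1)))   # contributions vs. total difference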

  2. Smooth PARAFAC Decomposition for Tensor Completion

    NASA Astrophysics Data System (ADS)

    Yokota, Tatsuya; Zhao, Qibin; Cichocki, Andrzej

    2016-10-01

    In recent years, low-rank based tensor completion, which is a higher-order extension of matrix completion, has received considerable attention. However, the low-rank assumption is not sufficient for the recovery of visual data, such as color and 3D images, where the ratio of missing data is extremely high. In this paper, we consider "smoothness" constraints as well as low-rank approximations, and propose an efficient algorithm for performing tensor completion that is particularly powerful regarding visual data. The proposed method admits significant advantages, owing to the integration of smooth PARAFAC decomposition for incomplete tensors and the efficient selection of models in order to minimize the tensor rank. Thus, our proposed method is termed as "smooth PARAFAC tensor completion (SPC)." In order to impose the smoothness constraints, we employ two strategies, total variation (SPC-TV) and quadratic variation (SPC-QV), and invoke the corresponding algorithms for model learning. Extensive experimental evaluations on both synthetic and real-world visual data illustrate the significant improvements of our method, in terms of both prediction performance and efficiency, compared with many state-of-the-art tensor completion methods.

  3. LP and NLP decomposition without a master problem

    SciTech Connect

    Fuller, D.; Lan, B.

    1994-12-31

    We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.

  4. How Is Morphological Decomposition Achieved?

    ERIC Educational Resources Information Center

    Libben, Gary

    1994-01-01

    Two experiments investigated morphological decomposition in ambiguous novel compounds such as "busheater," which can be parsed as either "bus-heater" or "bush-heater." It was found that subjects' parsing choices for such words are influenced by orthographic constraints but that these constraints do not operate prelexically. (33 references) (MDM)

  5. Cadaver decomposition in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Carter, David O.; Yellowlees, David; Tibbett, Mark

    2007-01-01

    A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.

  6. Microbial interactions during carrion decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This addresses the microbial ecology of carrion decomposition in the age of metagenomics. It describes what is known about the microbial communities on carrion, including a brief synopsis about the communities on other organic matter sources. It provides a description of studies using state-of-the...

  7. Robust Morse decompositions of piecewise constant vector fields.

    PubMed

    Szymczak, Andrzej; Zhang, Eugene

    2012-06-01

    In this paper, we introduce a new approach to computing a Morse decomposition of a vector field on a triangulated manifold surface. The basic idea is to convert the input vector field to a piecewise constant (PC) vector field, whose trajectories can be computed using simple geometric rules. To overcome the intrinsic difficulty in PC vector fields (in particular, discontinuity along mesh edges), we borrow results from the theory of differential inclusions. The input vector field and its PC variant have similar Morse decompositions. We introduce a robust and efficient algorithm to compute Morse decompositions of a PC vector field. Our approach provides subtriangle precision for Morse sets. In addition, we describe a Morse set classification framework which we use to color code the Morse sets in order to enhance the visualization. We demonstrate the benefits of our approach with three well-known simulation data sets, for which our method has produced Morse decompositions that are similar to or finer than those obtained using existing techniques, and is over an order of magnitude faster. PMID:21747131

  8. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency and that the proposed algorithm addresses efficiently depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634

  9. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE PAGES

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  10. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression of IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it is only used for the case of extrema spaced uniformly. In this paper, the EMD algorithm is analyzed from digital signal processing perspective for the case of extrema spaced nonuniformly. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed through modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for knots spaced nonuniformly is shown as two basic operations, time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression of envelopes of signals in both time and frequency domain is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model IMFs is derived by using a matrix formulation in time domain for the general case of extrema spaced nonuniformly.
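
    As a simplified illustration of the sifting operation analyzed in the paper, the sketch below performs one sifting pass using cubic-spline envelopes through the extrema; endpoint handling is deliberately naive (the signal endpoints are appended to both envelopes), and the time-varying multirate formulation itself is not reproduced here.

      import numpy as np
      from scipy.signal import argrelextrema
      from scipy.interpolate import CubicSpline

      def sift_once(t, x):
          maxima = argrelextrema(x, np.greater)[0]
          minima = argrelextrema(x, np.less)[0]
          # append the endpoints so both envelopes cover the whole record (a crude boundary fix)
          up_idx = np.concatenate(([0], maxima, [len(x) - 1]))
          lo_idx = np.concatenate(([0], minima, [len(x) - 1]))
          upper = CubicSpline(t[up_idx], x[up_idx])(t)
          lower = CubicSpline(t[lo_idx], x[lo_idx])(t)
          mean_env = 0.5 * (upper + lower)
          return x - mean_env                # one sifting step toward the first IMF

      t = np.linspace(0, 1, 1000)
      x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
      h = sift_once(t, x)                    # repeated sifting would extract the 40 Hz IMF
      print(h[:3])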

  11. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition case. The parallel performance of the proposed method is strongly dependent on load balancing separately the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.

  12. Spectral decomposition of the aerodynamic noise generated by rotating sources

    NASA Astrophysics Data System (ADS)

    Bongiovì, Alessandro; Cattanei, Andrea

    2011-01-01

    A method is proposed for separating the noise emitted by an aerodynamic source from propagation effects using spectral decomposition. This technique is applied to the power spectra of a fan measured at several rotational speeds. Although it has been conceived for rotating sources such as turbomachinery rotors, the method may be easily applied to low speed stationary sources such as jets and flows in stators and about isolated airfoils. Based on the similarity theory, a clear description of the structure of the power spectrum of the received noise is given and the effect of rotational speed variations is considered as a means to obtain a data set suitable to perform the spectral decomposition. The problem is analyzed in order to clarify the possibilities and limitations of the method and then an algorithm is presented which is based on the solution of the derived equations. Particular care is devoted to both the numerical details and the operative aspects. The validation of the algorithm is performed by means of numerically generated input data. Next, in order to verify the ability of the method in separating scattered from emitted sound, an automotive cooling fan has been tested in the DIMSET hemi-anechoic room in a free-field configuration and with a shielded microphone. These two apparently distinct spectra collapse to within less than 2 dB after the spectral decomposition has been performed. The tests prove the ability of the method despite the modest quantity of input data.

  13. Tremolite Decomposition and Water on Venus

    NASA Technical Reports Server (NTRS)

    Johnson, N. M.; Fegley, B., Jr.

    2000-01-01

    We present experimental data showing that the decomposition rate of tremolite, a hydrous mineral, is sufficiently slow that it can survive thermal decomposition on Venus over geologic timescales at current and higher surface temperatures.

  14. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
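
    A minimal, hedged sketch of the basic concepts introduced above (selection, crossover, mutation), maximizing a toy bit-counting fitness function; it is illustrative only and unrelated to the software tool described in the report.

      import random

      def fitness(bits):                     # toy objective: count of 1-bits ("OneMax")
          return sum(bits)

      def evolve(n_bits=32, pop_size=50, generations=100, p_mut=0.01):
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          for _ in range(generations):
              new_pop = []
              for _ in range(pop_size):
                  # tournament selection of two parents
                  p1 = max(random.sample(pop, 3), key=fitness)
                  p2 = max(random.sample(pop, 3), key=fitness)
                  cut = random.randrange(1, n_bits)          # single-point crossover
                  child = p1[:cut] + p2[cut:]
                  child = [b ^ 1 if random.random() < p_mut else b for b in child]  # mutation
                  new_pop.append(child)
              pop = new_pop
          return max(pop, key=fitness)

      best = evolve()
      print(fitness(best), "of 32 bits set")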

  15. Thermal decomposition of high-nitrogen energetic compounds: TAGzT and GUzT

    NASA Astrophysics Data System (ADS)

    Hayden, Heather F.

    The U.S. Navy is exploring high-nitrogen compounds as burning-rate additives to meet the growing demands of future high-performance gun systems. Two high-nitrogen compounds investigated as potential burning-rate additives are bis(triaminoguanidinium) 5,5-azobitetrazolate (TAGzT) and bis(guanidinium) 5,5'-azobitetrazolate (GUzT). Small-scale tests showed that formulations containing TAGzT exhibit significant increases in the burning rates of RDX-based gun propellants. However, when GUzT, a similarly structured molecule was incorporated into the formulation, there was essentially no effect on the burning rate of the propellant. Through the use of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and Fourier-Transform ion cyclotron resonance (FTICR) mass spectrometry methods, an investigation of the underlying chemical and physical processes that control the thermal decomposition behavior of TAGzT and GUzT alone and in the presence of RDX, was conducted. The objective was to determine why GUzT is not as good a burning-rate enhancer in RDX-based gun propellants as compared to TAGzT. The results show that TAGzT is an effective burning-rate modifier in the presence of RDX because the decomposition of TAGzT alters the initial stages of the decomposition of RDX. Hydrazine, formed in the decomposition of TAGzT, reacts faster with RDX than RDX can decompose itself. The reactions occur at temperatures below the melting point of RDX and thus the TAGzT decomposition products react with RDX in the gas phase. Although there is no hydrazine formed in the decomposition of GUzT, amines formed in the decomposition of GUzT react with aldehydes, formed in the decomposition of RDX, resulting in an increased reaction rate of RDX in the presence of GUzT. However, GUzT is not an effective burning-rate modifier because its decomposition does not alter the initial gas-phase decomposition of RDX. The decomposition of GUzT occurs at temperatures above the melting point

  16. A parallel householder tridiagonalization stratagem using scattered row decomposition

    NASA Technical Reports Server (NTRS)

    Chang, H. Y.; Utku, S.; Salama, M.; Rapp, D.

    1988-01-01

    Householder's method for tridiagonalizing a real symmetric matrix, a major step in evaluating eigenvalues of the matrix, is modified into a parallel algorithm for a concurrent machine of message passing type. Each processor of the concurrent machine has its own CPU, communications control and local memory. Messages are passed through connections between processors. Although the basic algorithm is inherently serial, the computations can be spread over all processors by scattering different rows of the matrix into processors, hence the term 'Scattered Row Decomposition'. The steps in the serial and the parallel algorithms are identified. Expressions for efficiency and speedup are given in terms of problem and machine parameters. For a concurrent machine of ring type interconnection, a selected representative problem of large order exhibits efficiency approaching 66 per cent.
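
    For reference, a serial Householder tridiagonalization of a small symmetric matrix is sketched below; the scattered-row parallel distribution that is the subject of the paper is not shown.

      import numpy as np

      def householder_tridiagonalize(A):
          A = np.array(A, dtype=float)                 # work on a copy
          n = A.shape[0]
          for k in range(n - 2):
              x = A[k + 1:, k]
              alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
              v = x.copy()
              v[0] -= alpha                            # v = x - alpha*e1 defines the reflector
              norm_v = np.linalg.norm(v)
              if norm_v < 1e-15:
                  continue
              v /= norm_v
              # apply H = I - 2 v v^T from the left and the right to the trailing block
              A[k + 1:, k:] -= 2.0 * np.outer(v, v @ A[k + 1:, k:])
              A[:, k + 1:] -= 2.0 * np.outer(A[:, k + 1:] @ v, v)
          return A                                     # tridiagonal up to round-off

      S = np.array([[4.0, 1.0, 2.0, 2.0],
                    [1.0, 3.0, 0.0, 1.0],
                    [2.0, 0.0, 5.0, 1.0],
                    [2.0, 1.0, 1.0, 6.0]])
      T = householder_tridiagonalize(S)
      print(np.round(T, 6))
      print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S)))   # eigenvalues preserved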

  17. Investigating hydrogel dosimeter decomposition by chemical methods

    NASA Astrophysics Data System (ADS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products.

  18. 16-point discrete Fourier transform based on the Radix-2 FFT algorithm implemented into cyclone FPGA as the UHECR trigger for horizontal air showers in the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Szadkowski, Z.

    2006-05-01

    The extremely rare flux of UHECR requires sophisticated detection techniques. Standard methods oriented toward typical events may not be sensitive enough to capture rare events, crucial for fixing a discrepancy in the current data or for confirming/rejecting some new hypothesis. The triggers currently used in the water Cherenkov tanks of the Pierre Auger surface detector, which select events above some amplitude thresholds or investigate the length of traces, are not optimized for the horizontal and very inclined showers that are interesting as potentially generated by neutrinos. Those showers could be triggered using their signatures: i.e. a curvature of the shower front, transformed into the rise time of traces, or a muon component giving an early peak for "old" showers. Currently available powerful and cost-effective FPGAs provide sufficient resources to implement new triggers not available in the past. The paper describes the proposed implementation of a 16-point discrete Fourier transform based on the Radix-2 FFT algorithm in an Altera Cyclone FPGA, used in the 3rd generation of the surface detector trigger. All complex coefficients are calculated online in heavily pipelined routines. The register performance of ~200 MHz and the relatively low resource occupancy of ~2000 logic elements/channel for 10-bit resolution provide a powerful tool to trigger on events whose traces are characteristic in the frequency domain. The FFT code has been successfully merged into the code of the 1st-level surface detector trigger of the Pierre Auger Observatory and is planned to be tested in the real pampas environment.
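
    A software sketch of the 16-point radix-2 decimation-in-time FFT is given below for reference; the FPGA implementation described in the paper computes the same transform with fixed-point, pipelined hardware routines rather than this floating-point recursion.

      import cmath

      def fft_radix2(x):
          n = len(x)                          # n must be a power of two
          if n == 1:
              return list(x)
          even = fft_radix2(x[0::2])
          odd = fft_radix2(x[1::2])
          out = [0j] * n
          for k in range(n // 2):
              tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
              out[k] = even[k] + tw
              out[k + n // 2] = even[k] - tw
          return out

      samples = [1.0, 0.0, -1.0, 0.0] * 4     # a 16-sample test trace (period-4 oscillation)
      spectrum = fft_radix2(samples)
      print([round(abs(c), 3) for c in spectrum])   # energy concentrated in two mirror bins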

  19. Easy quantitative assessment of genome editing by sequence trace decomposition.

    PubMed

    Brinkman, Eva K; Chen, Tao; Amendola, Mario; van Steensel, Bas

    2014-12-16

    The efficacy and the mutation spectrum of genome editing methods can vary substantially depending on the targeted sequence. A simple, quick assay to accurately characterize and quantify the induced mutations is therefore needed. Here we present TIDE, a method for this purpose that requires only a pair of PCR reactions and two standard capillary sequencing runs. The sequence traces are then analyzed by a specially developed decomposition algorithm that identifies the major induced mutations in the projected editing site and accurately determines their frequency in a cell population. This method is cost-effective and quick, and it provides much more detailed information than current enzyme-based assays. An interactive web tool for automated decomposition of the sequence traces is available. TIDE greatly facilitates the testing and rational design of genome editing strategies.

  20. The Effect of Clothing on the Rate of Decomposition and Diptera Colonization on Sus scrofa Carcasses.

    PubMed

    Card, Allison; Cross, Peter; Moffatt, Colin; Simmons, Tal

    2015-07-01

    Twenty Sus scrofa carcasses were used to study the effect that the presence of clothing had on decomposition rate and on the colonization locations of Diptera species; 10 unclothed control carcasses were compared to 10 clothed experimental carcasses over 58 days. Data collection occurred at regular accumulated degree day intervals; the level of decomposition as Total Body Score (TBSsurf), the pattern of decomposition, and the Diptera present were documented. Results indicated a statistically significant difference in the rate of decomposition (t(427) = 2.59, p = 0.010), with unclothed carcasses decomposing faster than clothed carcasses. However, the overall decomposition rates of the two carcass groups are too similar to separate when applying a 95% CI, which means that, although statistically significant, from a practical forensic point of view they are not sufficiently dissimilar as to warrant the application of different formulae to estimate the postmortem interval. Further results demonstrated that clothing provided blow flies with additional colonization locations.

  1. Revenge of the Semicoarsening Frequency Decomposition Multigrid Method

    NASA Technical Reports Server (NTRS)

    Dendy, J. E., Jr.

    1996-01-01

    The frequency decomposition multigrid method was previously considered and modified so as to obtain robustness for problems with discontinuous coefficients while retaining robustness for problems with anisotropic coefficients. The application of this modified method to a problem arising in global ocean modeling was also considered. For this problem it was shown that the discretization employed gives rise to an operator for which point relaxation is not robust. In fact, alternating line relaxation is required for robustness, negating the main advantage of the frequency decomposition method: robustness for anisotropic operators using only point relaxation. In this paper a semicoarsening variant, which requires line relaxation in one direction only is considered, and it is shown that this variant works well for the global ocean modeling problem.

  2. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
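
    A digital sketch of the threshold-decomposition principle is given below: a 3x3 median filter is realized as per-threshold binarization, a linear (mean) filter, a point-wise threshold comparison, and a recombining sum. The optical implementation of the patent is not modeled; this only illustrates the underlying decomposition, and it assumes an image with a small number of gray levels.

      import numpy as np
      from scipy.ndimage import uniform_filter, median_filter

      rng = np.random.default_rng(1)
      img = rng.integers(0, 8, size=(32, 32))             # small image with 8 gray levels

      result = np.zeros(img.shape, dtype=int)
      for t in range(1, 8):                               # one binary slice per threshold
          binary = (img >= t).astype(float)
          majority = uniform_filter(binary, size=3) > 0.5 # linear filter + threshold = binary median
          result += majority.astype(int)                  # recombine the threshold components

      print(np.array_equal(result, median_filter(img, size=3)))   # True: same as a direct median filter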

  3. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.

  4. Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation

    DOE PAGES

    Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir

    2016-05-01

    The recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithm on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.

  5. Improvement of the Koradi parallel algorithm for molecular dynamics and application to the economic organization and optimization of recycling costs of waste electrical and electronic equipment

    NASA Astrophysics Data System (ADS)

    Cabria, I.; Queiruga, D.

    2005-09-01

    A parallel algorithm for molecular dynamics (MD), the Koradi point-centered decomposition algorithm, especially designed for inhomogeneous systems, is improved and applied to the organization and optimization of the recycling costs of Waste Electrical and Electronic Equipment (WEEE), and also to systems of atoms. This organization requires the numbers and locations of storage centers and recycling plants for the WEEE that minimize the recycling cost. The Koradi algorithm finds these optimal numbers and locations, handling large amounts of data very quickly, in contrast with other methods. The changes to the original algorithm (different ways of generating the initial centers and, especially, the requirement of location convergence) improve its performance for this economic problem and also for MD simulations.

  6. Aflatoxin decomposition in various soils

    SciTech Connect

    Angle, J.S.

    1986-08-01

    The persistence of aflatoxin in the soil environment could potentially result in a number of adverse environmental consequences. To determine the persistence of aflatoxin in soil, ¹⁴C-labeled aflatoxin B1 was added to silt loam, sandy loam, and silty clay loam soils and the subsequent release of ¹⁴CO₂ was determined. After 120 days of incubation, 8.1% of the original aflatoxin added to the silt loam soil was released as CO₂. Aflatoxin decomposition in the sandy loam soil proceeded more quickly than in the other two soils for the first 20 days of incubation. After this time, the decomposition rate declined and by the end of the study, 4.9% of the aflatoxin was released as CO₂. Aflatoxin decomposition proceeded most slowly in the silty clay loam soil. Only 1.4% of the aflatoxin added to the soil was released as CO₂ after 120 days of incubation. To determine whether aflatoxin was bound to the silty clay loam soil, aflatoxin B1 was added to this soil and incubated for 20 days. The soil was periodically extracted and the aflatoxin species present were determined using thin layer chromatographic (TLC) procedures. After one day of incubation, the degradation products, aflatoxins B2 and G2, were observed. It was also found that much of the aflatoxin extracted from the soil was not mobile with the TLC solvent system used. This indicated that a conjugate may have formed and thus may be responsible for the lack of aflatoxin decomposition.

  7. Phlogopite Decomposition, Water, and Venus

    NASA Technical Reports Server (NTRS)

    Johnson, N. M.; Fegley, B., Jr.

    2005-01-01

    Venus is a hot and dry planet with a surface temperature of 660 to 740 K and 30 parts per million by volume (ppmv) water vapor in its lower atmosphere. In contrast Earth has an average surface temperature of 288 K and 1-4% water vapor in its troposphere. The hot and dry conditions on Venus led many to speculate that hydrous minerals on the surface of Venus would not be there today even though they might have formed in a potentially wetter past. Thermodynamic calculations predict that many hydrous minerals are unstable under current Venusian conditions. Thermodynamics predicts whether a particular mineral is stable or not, but we need experimental data on the decomposition rate of hydrous minerals to determine if they survive on Venus today. Previously, we determined the decomposition rate of the amphibole tremolite, and found that it could exist for billions of years at current surface conditions. Here, we present our initial results on the decomposition of phlogopite mica, another common hydrous mineral on Earth.

  8. Implementation of parallel matrix decomposition for NIKE3D on the KSR1 system

    SciTech Connect

    Su, Philip S.; Fulton, R.E.; Zacharia, T.

    1995-06-01

    New massively parallel computer architecture has revolutionized the design of computer algorithms and promises to have significant influence on algorithms for engineering computations. Realistic engineering problems using finite element analysis typically imply excessively large computational requirements. Parallel supercomputers that have the potential for significantly increasing calculation speeds can meet these computational requirements. This report explores the potential for the parallel Cholesky (UᵀDU) matrix decomposition algorithm on NIKE3D through actual computations. The examples of two- and three-dimensional nonlinear dynamic finite element problems are presented on the Kendall Square Research (KSR1) multiprocessor system, with 64 processors, at Oak Ridge National Laboratory. The numerical results indicate that the parallel Cholesky (UᵀDU) matrix decomposition algorithm is attractive for NIKE3D under multi-processor system environments.
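
    For reference, a serial sketch of the UᵀDU (equivalently L·D·Lᵀ) factorization of a small symmetric positive-definite matrix follows; the parallel KSR1 implementation studied in the report is not shown.

      import numpy as np

      def ldlt(A):
          A = np.asarray(A, dtype=float)
          n = A.shape[0]
          L = np.eye(n)
          d = np.zeros(n)
          for j in range(n):
              d[j] = A[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
              for i in range(j + 1, n):
                  L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
          return L, d

      K = np.array([[4.0, 2.0, 2.0],
                    [2.0, 5.0, 3.0],
                    [2.0, 3.0, 6.0]])
      L, d = ldlt(K)
      print(np.allclose(L @ np.diag(d) @ L.T, K))   # True: K = L D L^T, with U = L^T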

  9. Decomposition of frequency characteristics of acoustic emission signals for different types of partial discharges sources

    NASA Astrophysics Data System (ADS)

    Witos, F.; Gacek, Z.; Paduch, P.

    2006-11-01

    The problem addressed in this article is the decomposition of the frequency characteristics of AE signals into elementary three-parameter Gaussian functions. In the first stage, for modelled curves in the form of a sum of three-parameter Gaussian peaks, the agreement between the modelled curve and the curves resulting from solutions obtained with a dynamic-window method, the Levenberg-Marquardt algorithm, genetic algorithms and a differential evolution algorithm is discussed. It is found that analyses carried out by means of the differential evolution algorithm are effective, and a computer system for the analysis of AE signal frequency characteristics was constructed. Decompositions of the frequency characteristics of selected AE signals coming from modelled PD sources at different ends of the bushing, and from real PD sources in generator coil bars, are carried out.
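
    A hedged sketch of the core fitting task, decomposing a spectrum into three-parameter Gaussian peaks with a differential evolution optimizer (SciPy's implementation is used here); the peak count, bounds and synthetic data are illustrative assumptions, not the paper's measurement data.

      import numpy as np
      from scipy.optimize import differential_evolution

      def gaussians(params, f):
          """Sum of three-parameter (amplitude, center, width) Gaussian peaks."""
          total = np.zeros_like(f)
          for a, c, w in np.reshape(params, (-1, 3)):
              total += a * np.exp(-((f - c) ** 2) / (2.0 * w ** 2))
          return total

      f = np.linspace(0, 500, 1000)                     # frequency axis [kHz], illustrative
      true = gaussians([1.0, 120, 15, 0.6, 300, 40], f)
      spectrum = true + 0.02 * np.random.default_rng(0).normal(size=f.size)

      bounds = [(0, 2), (0, 500), (1, 100)] * 2         # two peaks assumed
      result = differential_evolution(
          lambda p: np.sum((gaussians(p, f) - spectrum) ** 2), bounds, seed=0)
      print(np.reshape(result.x, (-1, 3)))              # recovered (amplitude, center, width)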

  10. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
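
    As a concrete illustration of one level of the multi-resolution analysis mentioned above, the sketch below performs a single-level 2-D Haar wavelet decomposition into LL, LH, HL and HH subbands; the parallel implementations benchmarked in the paper are not shown.

      import numpy as np

      def haar_decompose_2d(image):
          """One decomposition level; image dimensions are assumed to be even."""
          x = image.astype(float)
          # transform along rows: low-pass (average) and high-pass (difference)
          lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
          hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)
          # transform along columns of each intermediate result
          LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)
          LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)
          HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)
          HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)
          return LL, LH, HL, HH

      img = np.arange(64, dtype=float).reshape(8, 8)    # stand-in for a remotely sensed image
      LL, LH, HL, HH = haar_decompose_2d(img)
      print(LL.shape)                                   # (4, 4): quarter-size approximation subband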

  11. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  12. Two decoupling methods for non-isothermal DSC results of AIBN decomposition.

    PubMed

    Zhang, Cai-Xing; Lu, Gui-Bin; Chen, Li-Ping; Chen, Wang-Hua; Peng, Min-Jun; Lv, Jia-Yu

    2015-03-21

    During the thermal decomposition of azobisisobutyronitrile (AIBN), the endothermic process of phase transition disturbed the exothermic decomposition, which distorted its thermal curves. Therefore, exact kinetic parameters of the decomposition could not be obtained with the existing kinetic analysis models, and accurate enthalpy data for the decomposition and the phase transition were not available. Two methods, i.e., a solvent method and a mathematical method, were introduced in this paper to resolve the coupling phenomenon. In the former method, AIBN was dissolved in aniline to eliminate the endothermic process and obtain curves of the liquid-state decomposition. In the latter method, MATLAB software was employed to obtain the "pure" exothermic decomposition curve, free of the influence of the phase transition, by fitting the coupled curves within the section after the transition point and extrapolating to the initial stage of decomposition. Moreover, the kinetic parameters of the "pure" exothermic decomposition of AIBN obtained by the mathematical fitting agreed with the results from the solvent method, verifying the accuracy of the decoupling. The research is of great significance for understanding the exact characteristics of the thermal behavior and safety parameters of AIBN. It also helps to determine safe operating temperatures and alarm temperatures for industrial processes.

  13. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  14. PrinCCes: Continuity-based geometric decomposition and systematic visualization of the void repertoire of proteins.

    PubMed

    Czirják, Gábor

    2015-11-01

    Grooves and pockets on the surface, channels through the protein, the chambers or cavities, and the tunnels connecting the internal points to each other or to the external fluid environment are fundamental determinants of a wide range of biological functions. PrinCCes (Protein internal Channel & Cavity estimation) is a computer program supporting the visualization of voids. It includes a novel algorithm for the decomposition of the entire void volume of the protein or protein complex to individual entities. The decomposition is based on continuity. An individual void is defined by uninterrupted extension in space: a spherical probe can freely move between any two internal locations of a continuous void. Continuous voids are detected irrespective of their topological complexity, they may contain any number of holes and bifurcations. The voids of a protein can be visualized one by one or in combinations as triangulated surfaces. The output is automatically exported to free VMD (Visual Molecular Dynamics) or Chimera software, allowing the 3D rotation of the surfaces and the production of publication quality images. PrinCCes with graphic user interface and command line versions are available for MS Windows and Linux. The source code and executable can be downloaded at any of the following links: http://scholar.semmelweis.hu/czirjakgabor/s/princces/#t1 https://github.com/CzirjakGabor/PrinCCes http://1drv.ms/1bP9iJ3.

  16. Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR

    PubMed Central

    Wang, Hanning; Zhou, Zhimin; Turnbull, John; Song, Qian; Qi, Feng

    2015-01-01

    In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for CTLR and DCP modes are established. The explicit expression of decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, for the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated by AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland. PMID:26393610

  17. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    SciTech Connect

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  18. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    from an improved cloud screening algorithm that utilizes an iterative rejection of cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly (56%) more and unbiased Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The V¬LOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g. 368-nm channel) were selected and the radiative transfer model (MODTRAN) used to calculate direct normal and diffuse horizontal ratios which were used to evaluate the stability of TOD in cloud-free points. The spectral dependence of atmospheric constituents' optical properties and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. The test of this method on the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-) MFRSR channels and potentially improved the accuracy by 2-4% at most channels and over 10% at 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g. soil water, live biomass), allowed maximum photodecay rate to vary with litter's initial lignin fraction in the model, and validated the optimized model with LIDET observation of remaining carbon and nitrogen at three semi-arid sites. I

  19. Optimized curvelet-based empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Wu, Renjie; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Recent years have seen immense improvement in the development of signal processing based on the Curvelet transform. The Curvelet transform provides a new multi-resolution representation. The frame elements of Curvelets exhibit higher directional sensitivity and anisotropy than Wavelets, multi-Wavelets, steerable pyramids, and so on. These features are based on the anisotropic notion of scaling. In practice, time-series signal processing problems are often encountered. To solve these problems, time-frequency analysis based methods are studied. However, time-frequency analysis cannot always be trusted, and many new methods have been proposed. The Empirical Mode Decomposition (EMD) is one of them, and is widely used. The EMD aims to decompose into their building blocks functions that are the superposition of a reasonably small number of components, well separated in the time-frequency plane, each of which can be viewed as locally approximately harmonic. However, it cannot solve the problem of directionality in high dimensions. A reallocated method of the Curvelet transform (optimized Curvelet-based EMD) is proposed in this paper. We introduce a definition for a class of functions that can be viewed as a superposition of a reasonably small number of approximately harmonic components by an optimized Curvelet family. We analyze this algorithm and demonstrate its results on data. The experimental results prove the effectiveness of our method.

  20. Singular Value Decomposition of Pinhole SPECT Systems.

    PubMed

    Palit, Robin; Kupinski, Matthew A; Barrett, Harrison H; Clarkson, Eric W; Aarsvold, John N; Volokh, Lana; Grobshtein, Yariv

    2009-03-12

    A single photon emission computed tomography (SPECT) imaging system can be modeled by a linear operator H that maps from object space to detector pixels in image space. The singular vectors and singular-value spectra of H provide useful tools for assessing system performance. The number of voxels used to discretize object space and the number of collection angles and pixels used to measure image space make the dimensions of H large. As a result, H must be stored sparsely, which renders several conventional singular value decomposition (SVD) methods impractical. We used an iterative power-method SVD algorithm (Lanczos), designed to operate on very large, sparsely stored matrices, to calculate the singular vectors and singular-value spectra for two small-animal pinhole SPECT imaging systems: FastSPECT II and M(3)R. The FastSPECT II system consisted of two rings of eight scintillation cameras each. The resulting dimensions of H were 68921 voxels by 97344 detector pixels. The M(3)R system is a four-camera system that was reconfigured to measure image space using a single scintillation camera. The resulting dimensions of H were 50864 voxels by 6241 detector pixels. In this paper we present results of the SVD of each system and discuss calculation of the measurement and null space for each system.
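
    A hedged sketch of the computational ingredient described above, a Lanczos-type truncated SVD of a large sparse matrix (SciPy's ARPACK-based svds), applied here to a small random sparse stand-in rather than an actual SPECT system matrix.

      import scipy.sparse as sp
      from scipy.sparse.linalg import svds

      # Random sparse stand-in with roughly the M(3)R pixel count on one axis (illustrative only).
      H = sp.random(6241, 5000, density=1e-3, random_state=0, format='csr')

      k = 20                                   # number of singular triplets to compute
      U, s, Vt = svds(H, k=k)                  # singular values are returned in ascending order
      s = s[::-1]                              # reorder largest-first for inspection
      print(s[:5])                             # leading singular values of H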

  1. The Path Decomposition Expansion and Multidimensional Tunneling

    NASA Astrophysics Data System (ADS)

    Auerbach, Assa

    The dissertation consists of two main topics. (a) The Path Decomposition Expansion (PDX): A new path integral technique which allows us to break configuration space into disjoint regions, and express the dynamics of the full system in terms of its parts. (b) The application of the PDX and semiclassical methods for solving quantum-mechanical problems in multidimensions. The result is a conceptually simple, computationally straightforward method for calculating tunneling effects in complicated multidimensional potentials, even in cases where the nature of the states in the classically allowed regions is nontrivial. Algorithms for computing tunneling effects in general classes of problems are obtained. The detailed solutions to several model problems are presented. These enable us to define various well-controlled approximation schemes, which help to reduce the dimensions of complicated tunneling calculations in real physical systems. The dramatic effects of transverse fluctuations on the asymptotic behavior of the ground-state tunnel splitting are also studied in potentials with non-quadratic minima where standard instanton techniques fail. The power of the PDX is demonstrated by a calculation of the optical absorption coefficient of trans-polyacetylene where large amplitude (non-perturbative) quantum fluctuations of the lattice play an important role in determining the sub-gap absorption tail. Good agreement with experimental data is found, and suggestions for further measurements in this regime are made.

  2. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.

  3. Global patterns in litter decomposition: a synthesis.

    NASA Astrophysics Data System (ADS)

    Auch, W. E.; Ross, D. S.

    2007-12-01

    Leaf and coarse woody debris (LCWD) decay catalyzes the biochemical mechanisms of the soil-aboveground interface and should be an important component of climate change models that address carbon and nitrogen. There is a clear need to identify the determinant climate or litter-chemistry parameters at the global scale. Local and global decay are commonly attributed to litter chemistry and climate, respectively. The objective of this synthesis was to illustrate LCWD decay across a global climate-chemistry continuum and to contrast the results with a previous assessment, via both standard first-order (|k|) decay kinetics and gradient exponent values arranged in order of influence from initial to later decay stages. Results suggest that greater initial LCWD cation concentrations yielded the fastest initial rates of decomposition, and most climatic indices appeared relevant at intermediate stages of decay. Elevation and refractory LCWD carbon (i.e., carbon, lignin, and tannins) were inversely correlated with decay, prolonging the process and possibly acting in concert as "end-point" determinants. Furthermore, the initial influence of nitrogen and phosphorus is universal across LCWD types as well as ecoregions. Climate acts in a transitional role between easily solubilized and late or aromatic substrate decay. Global and continental carbon cycling assumptions and models must acknowledge: i) the influence of LCWD cation and N concentration during initial fragmentation, leaching, and transformation; ii) climate, specifically seasonal temperature averages > evapotranspiration > precipitation, during the interim; and iii) the ever-present influence of seasonality and litter aromatic components. Key Words: Leaf and Coarse Woody Debris (LCWD) decomposition, |k|, first-order kinetics, Carbon Cycle, Global Climate Change (GCC), Actual Evapotranspiration (AET).
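    The |k| values discussed above come from single-pool first-order decay kinetics, m(t) = m0 * exp(-k*t). A minimal illustration of fitting |k| to litter mass-loss data is sketched below; the numbers are invented purely for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        # Fraction of initial litter mass remaining at each sampling time (synthetic data).
        t_years = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
        mass_frac = np.array([1.00, 0.82, 0.69, 0.48, 0.35, 0.18])

        # Single-pool first-order model: m(t)/m0 = exp(-k t).
        first_order = lambda t, k: np.exp(-k * t)
        (k_fit,), _ = curve_fit(first_order, t_years, mass_frac, p0=[0.3])
        print(f"|k| = {k_fit:.2f} per year")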

  4. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    SciTech Connect

    Weinstein, Marvin; Auerbach, Assa; Chandra, V.Ravi; /Technion

    2011-11-04

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. The lattice of size N is partitioned into two subclusters. At each iteration the Lanczos vector is projected into a set of n_svd smaller subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short-range Hamiltonians), we expect the truncation error to vanish as exp(-n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagome clusters of up to 36 sites, with no symmetries exploited, using less than 15GB of memory. Generalization to multiple partitioning is discussed.
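    A minimal numpy sketch of the core step described above: reshape a Lanczos vector over a two-subcluster partition and keep only the n_svd leading singular components. The subcluster dimensions and n_svd are arbitrary illustrative choices, far smaller than the clusters in the paper.

        import numpy as np

        dA, dB, n_svd = 64, 64, 8                 # subcluster Hilbert-space dimensions (toy sizes)
        psi = np.random.randn(dA * dB)            # stand-in for a Lanczos vector on the full lattice
        psi /= np.linalg.norm(psi)

        # View the vector as a dA x dB matrix over the bipartition and truncate via SVD.
        M = psi.reshape(dA, dB)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        M_trunc = (U[:, :n_svd] * s[:n_svd]) @ Vt[:n_svd, :]

        # The discarded weight is small when the entanglement across the cut is low.
        truncation_error = np.linalg.norm(M - M_trunc)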

  5. Calculating vibrational spectra of molecules using tensor train decomposition

    NASA Astrophysics Data System (ADS)

    Rakhuba, Maxim; Oseledets, Ivan

    2016-09-01

    We propose a new algorithm for the calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that the eigenfunctions lie on a low-parametric manifold of low-rank tensors, we suggest using well-known iterative methods that utilize matrix inversion (the locally optimal block preconditioned conjugate gradient method, inverse iteration) and solving the corresponding linear systems inexactly along this manifold. As an application, we accurately compute the vibrational spectrum (84 states) of the acetonitrile molecule CH3CN on a laptop in one hour, using only 100 MB of memory to represent all computed eigenfunctions.
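    The block eigensolver named above (LOBPCG) is available in SciPy; the sketch below applies it to a small sparse model Hamiltonian to extract the lowest eigenpairs. The tensor-train representation of the eigenvectors, which is the paper's key ingredient, is not reproduced here, and the matrix and sizes are illustrative assumptions.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import lobpcg

        # Sparse symmetric model "Hamiltonian": 1-D kinetic term plus a quadratic potential.
        n = 2000
        H = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) + diags(np.linspace(0.0, 1.0, n) ** 2)

        # Lowest k eigenpairs via LOBPCG, starting from a random block of vectors.
        k = 6
        X0 = np.random.randn(n, k)
        eigvals, eigvecs = lobpcg(H, X0, largest=False, tol=1e-8, maxiter=500)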

  6. Block diagonal decompositions for parallel computations of large power systems. Final report

    SciTech Connect

    Silijak, D.D.

    1995-05-01

    In this report we present the algorithm and C code for balanced bordered block diagonal (BBD) decompositions of large sparse matrices, as well as a variety of experimental results relating to the algorithm's performance. The software has been tested on a number of large matrices, including models of the West Coast power network (a 1,993 x 1,993 matrix). The algorithm was found to compare very well with symmetric minimal degree ordering in terms of sparsity preservation: in the test cases considered, the BBD decomposition produced only up to 15% more fill-in. This is more than satisfactory considering that BBD structures are far better suited for parallel computing than the scattered and unpredictable element patterns obtained by minimal degree ordering. For some denser matrices, the BBD decomposition was actually seen to produce lower fill-in than minimal degree. In applications to power systems, the execution time for the BBD decomposition was found to have a quadratic upper bound on its complexity, which is comparable to a number of other sparse matrix orderings. Simulation results indicate that the actual execution time is similar to the execution time of the symmetric minimal degree ordering in Matlab 4.0. The special structural advantages of balanced BBD decompositions have been utilized to parallelize the process of LU factorization. The speedups obtained with respect to solutions using symmetric minimal degree ordering on a single processor have confirmed the significant potential of BBD decomposition in parallel computing. For the 1,993-bus power system, a speedup of 11.2 times was obtained using 14 processors on PVM 2.4.

  7. Bio-empirical mode decomposition: visible and infrared fusion using biologically inspired empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Sissinto, Paterne; Ladeji-Osias, Jumoke

    2013-07-01

    Bio-EMD, a biologically inspired fusion of visible and infrared (IR) images based on empirical mode decomposition (EMD) and color opponent processing, is introduced. First, registered visible and IR captures of the same scene are decomposed into intrinsic mode functions (IMFs) through EMD. The fused image is then generated by intuitive opponent processing of the source IMFs. The resulting image is evaluated based on the amount of information transferred from the two input images, the clarity of details, the vividness of depictions, and the range of meaningful differences in lightness and chromaticity. We show that this opponent-processing-based technique outperformed other algorithms based on pixel intensity and multiscale techniques. Additionally, Bio-EMD transferred twice as much information to the fused image as other methods, providing a higher level of sharpness, more natural-looking colors, and similar contrast levels. These results were obtained prior to optimization of the color opponent processing filters. The Bio-EMD algorithm has potential applicability in multisensor fusion covering visible bands, forensics, medical imaging, remote sensing, natural resources management, etc.

  8. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of the data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, and (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.

  9. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of the data, (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, and (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.
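    A toy illustration of step (a) above: collapse a multi-channel stream into a univariate change score by comparing the dominant SVD subspaces of successive windows. This is a simplified stand-in for the approach described in the record, not the GAEDA code itself; the window size and synthetic signal are invented.

        import numpy as np

        def window_change_score(X, w=100):
            # X: (time x channels) array; returns one score per pair of adjacent windows.
            scores = []
            for i in range(0, X.shape[0] - 2 * w + 1, w):
                A = X[i:i + w] - X[i:i + w].mean(axis=0)
                B = X[i + w:i + 2 * w] - X[i + w:i + 2 * w].mean(axis=0)
                _, _, Va = np.linalg.svd(A, full_matrices=False)
                _, _, Vb = np.linalg.svd(B, full_matrices=False)
                # 1 - |cos angle| between leading right singular vectors of the two windows.
                scores.append(1.0 - abs(float(Va[0] @ Vb[0])))
            return np.array(scores)

        # Synthetic two-channel stream whose cross-channel structure changes halfway through.
        t = np.arange(4000)
        X = np.column_stack([np.sin(0.02 * t), np.sin(0.02 * t + 0.5)])
        X[2000:, 1] = np.sin(0.07 * t[2000:])
        score = window_change_score(X + 0.05 * np.random.randn(*X.shape))
        print(score.argmax())   # window index near the regime change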

  10. Decomposition Rate and Pattern in Hanging Pigs.

    PubMed

    Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal

    2015-09-01

    Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass.

  11. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  12. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
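    The defining ingredient described above, adapting the mutation step size according to the success of previous steps, is illustrated below with a simple (1+1)-style search using the classic 1/5-success heuristic. This is a generic sketch of the idea, not the EPSA algorithm or the adaptation constants from the paper.

        import numpy as np

        def adaptive_step_search(f, x0, sigma=0.5, iters=2000, seed=0):
            # Minimize f by Gaussian mutation; grow the step after a success, shrink after a failure.
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(iters):
                y = x + sigma * rng.standard_normal(x.shape)
                fy = f(y)
                if fy < fx:
                    x, fx = y, fy
                    sigma *= 1.22          # expand the step size on success
                else:
                    sigma *= 0.82          # contract it on failure
            return x, fx, sigma

        sphere = lambda v: float(np.dot(v, v))
        x_best, f_best, final_sigma = adaptive_step_search(sphere, np.ones(5))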

  13. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as the matrix determinant, the matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
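    A small numpy sketch of the unconstrained idea described above: fit the transformation matrix by ordinary least squares and then check how close it comes to orthogonality. The rotation angle, vector count, and noise level are illustrative assumptions, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # True attitude: a small rotation about the z-axis.
        theta = 0.05
        C_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0,            0.0,           1.0]])

        # Reference unit vectors in inertial space and noisy measurements in the body frame.
        R = rng.standard_normal((3, 6))
        R /= np.linalg.norm(R, axis=0)
        B = C_true @ R + 1e-3 * rng.standard_normal((3, 6))

        # Unconstrained least squares: solve R^T C^T ~= B^T, ignoring the orthogonality constraint.
        C_ls = np.linalg.lstsq(R.T, B.T, rcond=None)[0].T

        # With low measurement noise the estimate is nearly orthogonal anyway.
        print(np.linalg.norm(C_ls @ C_ls.T - np.eye(3)))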

  14. Key Point Based Data Analysis Technique

    NASA Astrophysics Data System (ADS)

    Yang, Su; Zhang, Yong

    In this paper, a new framework for data analysis based on the "key points" in a data distribution is proposed. Here, the key points comprise three types of data points: bridge points, border points, and skeleton points, where our main contribution is the bridge points. For each type of key point, we have developed the corresponding detection algorithm and tested its effectiveness with several synthetic data sets. Meanwhile, we further developed a new hierarchical clustering algorithm, SPHC (Skeleton Point based Hierarchical Clustering), to demonstrate the possible applications of the acquired key points. Based on some real-world data sets, we experimentally show that SPHC performs better than several classical clustering algorithms, including Complete-Link Hierarchical Clustering, Single-Link Hierarchical Clustering, KMeans, Ncut, and DBSCAN.

  15. Decomposition methods in turbulence research

    NASA Astrophysics Data System (ADS)

    Uruba, Václav

    2012-04-01

    Nowadays the dynamical velocity vector field of a turbulent flow is at our disposal thanks to advances in either mathematical simulation (DNS) or experiment (time-resolved PIV). Unfortunately there is no standard method for the analysis of such data, which describe complicated extended dynamical systems characterized by an excessive number of degrees of freedom. An overview of candidate methods suitable for the spatiotemporal analysis of such systems is to be presented. Special attention will be paid to energetic methods, including Proper Orthogonal Decomposition (POD) in its regular and snapshot variants, as well as the Bi-Orthogonal Decomposition (BOD) for joint space-time analysis. Then, stability analysis using Principal Oscillation Patterns (POPs) will be introduced. Finally, the Independent Component Analysis (ICA) method will be proposed for the detection of coherent structures in a turbulent flow-field defined by a time-dependent velocity vector field. The principles and some practical aspects of the methods are to be shown. Special attention is to be paid to the physical interpretation of the outputs of the methods listed above.
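    A minimal snapshot-POD sketch: stack velocity snapshots as columns, remove the mean field, and take a thin SVD so that the left singular vectors are spatial modes, the singular values give modal energies, and the right singular vectors give temporal coefficients. The synthetic travelling-wave data below is purely illustrative.

        import numpy as np

        # Synthetic "flow" snapshots: two travelling waves plus noise, one snapshot per column.
        nx, nt = 200, 80
        x = np.linspace(0.0, 2.0 * np.pi, nx)[:, None]
        t = np.linspace(0.0, 10.0, nt)[None, :]
        X = np.sin(3 * x - 2 * t) + 0.3 * np.sin(7 * x + 5 * t) + 0.01 * np.random.randn(nx, nt)
        X -= X.mean(axis=1, keepdims=True)            # remove the mean field

        # Snapshot POD via thin SVD.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        spatial_modes = U                              # columns: POD modes
        temporal_coeffs = np.diag(s) @ Vt              # rows: time histories of each mode
        energy_fraction = s**2 / np.sum(s**2)
        print(energy_fraction[:4])                     # most energy in the first few modes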

  16. Edge-Preserving Decomposition-Based Single Image Haze Removal.

    PubMed

    Li, Zhengguo; Zheng, Jinghong

    2015-12-01

    Single image haze removal is under-constrained, because the number of degrees of freedom is larger than the number of observations. In this paper, a novel edge-preserving decomposition-based method is introduced to estimate the transmission map of a hazy image, so as to design a single image haze removal algorithm from Koschmieder's law without using any prior. In particular, a weighted guided image filter is adopted to decompose the simplified dark channel of the hazy image into a base layer and a detail layer. The transmission map is estimated from the base layer, and it is applied to restore the haze-free image. The experimental results on different types of images, including hazy images, underwater images, and normal images without haze, show the performance of the proposed algorithm.
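    For orientation, the sketch below restores a hazy image with the classic dark-channel-prior pipeline (dark channel, atmospheric light, transmission, inversion of the Koschmieder model). It is not the paper's weighted-guided-filter decomposition; the patch size, omega, and transmission floor are common but assumed values, and the input image is a random stand-in.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
            # img: float RGB image in [0, 1].
            dark = minimum_filter(img.min(axis=2), size=patch)            # dark channel
            # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
            n = max(1, dark.size // 1000)
            idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
            A = img[idx].mean(axis=0)
            # Transmission from the normalized dark channel, clipped away from zero.
            norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
            t = np.clip(1.0 - omega * norm_dark, t0, 1.0)
            # Invert the haze model I = J t + A (1 - t).
            J = (img - A) / t[..., None] + A
            return np.clip(J, 0.0, 1.0), t

        hazy = np.random.rand(120, 160, 3) * 0.5 + 0.4          # stand-in for a hazy image
        restored, transmission = dehaze_dark_channel(hazy)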

  17. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    from an improved cloud screening algorithm that utilizes an iterative rejection of cloudy points based on a decreasing tolerance of unstable optical-depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed that the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly (56%) more, and unbiased, Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference-channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g., the 368-nm channel) were selected, and the radiative transfer model (MODTRAN) was used to calculate direct-normal and diffuse-horizontal ratios, which were used to evaluate the stability of the TOD at cloud-free points. The spectral dependence of the atmospheric constituents' optical properties and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. The test of this method on the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-)MFRSR channels and potentially improved the accuracy by 2-4% at most channels and by over 10% at the 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g., soil water, live biomass), allowed the maximum photodecay rate to vary with the litter's initial lignin fraction in the model, and validated the optimized model against LIDET observations of remaining carbon and nitrogen at three semi-arid sites. I
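    The Langley offset voltages mentioned above come from Langley regression: under the Beer-Lambert law, V = V0 * exp(-tau * m), so ln V is linear in airmass m and its extrapolation to m = 0 gives ln V0. A toy version with invented voltages is sketched below.

        import numpy as np

        # Hypothetical cloud-free direct-sun voltages at several airmasses.
        m = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0])
        tau_true, V0_true = 0.35, 2.1
        rng = np.random.default_rng(3)
        V = V0_true * np.exp(-tau_true * m) * (1.0 + 0.01 * rng.standard_normal(m.size))

        # Langley plot: fit ln(V) versus airmass and extrapolate to zero airmass.
        slope, intercept = np.polyfit(m, np.log(V), 1)
        V0_est = np.exp(intercept)        # Langley offset (proportional to radiometric sensitivity)
        tau_est = -slope                  # total optical depth
        print(V0_est, tau_est)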

  18. Hierarchical clustering of EMD based interest points for road sign detection

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza

    2014-04-01

    This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the following three stages: interest point detection, clustering of those points, and similarity search. In the first stage, discriminative, rotation- and scale-invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on stable local features related to brightness and color, which are extracted using a Gabor filter. Then the points belonging to each partition are reclustered depending on the dispersion of the points in the initial cluster, using a position feature. This two-step hierarchical clustering yields the possible candidate road signs, or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used for matching the unknown signs with the known reference road signs stored in the database. The presented framework provides a novel way to detect a road sign in natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.

  19. Cooperative terrain model acquisition by two point-robots in planar polygonal terrains

    SciTech Connect

    Rao, N.S.V.; Protopopescu, V.

    1994-11-29

    We address the model acquisition problem for an unknown terrain by a team of two robots. The terrain may be cluttered by a finite number of polygonal obstacles with unknown shapes and positions. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scanning from their locations. The robots communicate with each other via a wireless connection. The performance is measured by the number of sensor (scan) operations, which are assumed to be the most time-consuming/expensive of all the robot operations. We employ restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles, the sensing time can be halved compared to a single-robot implementation. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into 2-connected components and trees is considered. Performance for the 2-robot team is expressed in terms of the sizes of the 2-connected components, and the sizes and diameters of the trees. The proposed algorithm and analysis can be applied to methods based on the Voronoi diagram and trapezoidal decomposition.

  20. Iterative filtering decomposition based on local spectral evolution kernel.

    PubMed

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-03-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559

  1. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559

  2. Parquet decomposition calculations of the electronic self-energy

    NASA Astrophysics Data System (ADS)

    Gunnarsson, O.; Schäfer, T.; LeBlanc, J. P. F.; Merino, J.; Sangiovanni, G.; Rohringer, G.; Toschi, A.

    2016-06-01

    The parquet decomposition of the self-energy into classes of diagrams, those associated with specific scattering processes, can be exploited for different scopes. In this work, the parquet decomposition is used to unravel the underlying physics of nonperturbative numerical calculations. We show the specific example of dynamical mean field theory and its cluster extensions [dynamical cluster approximation (DCA)] applied to the Hubbard model at half-filling and with hole doping: These techniques allow for a simultaneous determination of two-particle vertex functions and self-energies and, hence, for an essentially "exact" parquet decomposition at the single-site or at the cluster level. Our calculations show that the self-energies in the underdoped regime are dominated by spin-scattering processes, consistent with the conclusions obtained by means of the fluctuation diagnostics approach [O. Gunnarsson et al., Phys. Rev. Lett. 114, 236402 (2015), 10.1103/PhysRevLett.114.236402]. However, differently from the latter approach, the parquet procedure displays important changes with increasing interaction: Even for relatively moderate couplings, well before the Mott transition, singularities appear in different terms, with the notable exception of the predominant spin channel. We explain precisely how these singularities, which partly limit the utility of the parquet decomposition and, more generally, of parquet-based algorithms, are never found in the fluctuation diagnostics procedure. Finally, by a more refined analysis, we link the occurrence of the parquet singularities in our calculations to a progressive suppression of charge fluctuations and the formation of a resonance valence bond state, which are typical hallmarks of a pseudogap state in DCA.

  3. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies with respect to a set of input parameters for a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, ClX, and NOx, as well as variations in the incident solar flux. The algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  4. Tipping Points

    NASA Astrophysics Data System (ADS)

    Hansen, J.

    2007-12-01

    A climate tipping point, at least as I have used the phrase, refers to a situation in which a changing climate forcing has reached a point such that little additional forcing (or global temperature change) is needed to cause large, relatively rapid, climate change. Present examples include potential loss of all Arctic sea ice and instability of the West Antarctic and Greenland ice sheets. Tipping points are characterized by ready feedbacks that amplify the effect of forcings. The notion that these may be runaway feedbacks is a misconception. However, present "unrealized" global warming, due to the climate system's thermal inertia, exacerbates the difficulty of avoiding global warming tipping points. I argue that prompt efforts to slow CO2 emissions and absolutely reduce non-CO2 forcings are both essential if we are to avoid tipping points that would be disastrous for humanity and creation, the planet as civilization knows it.

  5. Parallel algorithm for transient solid dynamics simulations using finite elements and smoothed particle hydrodynamics

    SciTech Connect

    Attaway, S.W.; Hendrickson, B.A.; Plimpton, S.J.; Swegle, J.W.; Gardner, D.R.; Vaughan, C.T.

    1997-05-01

    An efficient, scalable, parallel algorithm for treating contacts in solid mechanics has been applied to interactions between particles in smoothed particle hydrodynamics (SPH). The algorithm uses three different decompositions within a single timestep: (1) a static FE-decomposition of mesh elements; (2) a dynamic SPH-decomposition of SPH particles; and (3) a dynamic contact-decomposition of contact nodes and SPH particles. The overhead cost of such a scheme is the cost of moving mesh and particle data between the decompositions. This cost turns out to be small in practice, leading to a highly load-balanced decomposition in which to perform each of the three major computational stages within a timestep.

  6. Empirical modal decomposition applied to cardiac signals analysis

    NASA Astrophysics Data System (ADS)

    Beya, O.; Jalil, B.; Fauvet, E.; Laligant, O.

    2010-01-01

    In this article, we present the empirical mode decomposition (EMD) method applied to the analysis and denoising of electrocardiogram and phonocardiogram signals. The objective of this work is to automatically detect the cardiac anomalies of a patient. As these anomalies are localized in time, the localization of all events should be preserved precisely. Methods based on the Fourier transform lose the localization property [13]; the wavelet transform (WT) makes it possible to overcome the localization problem, but the interpretation remains difficult and it is hard to characterize the signal precisely. In this work we propose to apply the EMD, which has very significant properties for pseudo-periodic signals. The second section describes the EMD algorithm. In the third part we present the results obtained on phonocardiogram (PCG) and electrocardiogram (ECG) test signals. The analysis and the interpretation of these signals are given in the same section. Finally, we introduce an adaptation of the EMD algorithm which seems to be very efficient for denoising.
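    A much-simplified sifting sketch of the EMD idea discussed above: repeatedly subtract the mean of cubic-spline envelopes through the local extrema to isolate the fastest oscillatory mode. Real EMD implementations add stopping criteria and boundary handling; the test signal and fixed number of sifting passes here are illustrative only.

        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def sift_one_imf(x, t, n_sift=10):
            # Extract a single intrinsic mode function with a fixed number of sifting passes.
            h = x.copy()
            for _ in range(n_sift):
                imax = argrelextrema(h, np.greater)[0]
                imin = argrelextrema(h, np.less)[0]
                if imax.size < 2 or imin.size < 2:
                    break
                upper = CubicSpline(t[imax], h[imax])(t)      # upper envelope
                lower = CubicSpline(t[imin], h[imin])(t)      # lower envelope
                h = h - 0.5 * (upper + lower)                 # remove the local mean
            return h

        t = np.linspace(0.0, 1.0, 2000)
        x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t)   # fast + slow components
        imf1 = sift_one_imf(x, t)          # approximates the 50 Hz component
        residue = x - imf1                 # approximates the 5 Hz component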

  7. Simplified approaches to some nonoverlapping domain decomposition methods

    SciTech Connect

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analyses for both the two- and three-dimensional cases are carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
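    A minimal sketch of the additive Schwarz idea named above, applied to a 1-D Poisson problem with two overlapping subdomains: at each sweep, local Dirichlet problems are solved on the subdomains and their damped corrections are added to the global iterate. The grid size, overlap, damping, and iteration count are arbitrary choices; without a coarse-level correction convergence is slow but steady.

        import numpy as np

        # 1-D Poisson problem -u'' = 1 on (0, 1) with homogeneous Dirichlet boundary conditions.
        n = 99
        h = 1.0 / (n + 1)
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        f = np.ones(n)

        # Two overlapping index sets (subdomains).
        sub1 = np.arange(0, 60)
        sub2 = np.arange(40, n)

        u = np.zeros(n)
        for _ in range(300):
            r = f - A @ u
            du = np.zeros(n)
            for idx in (sub1, sub2):
                # Local Dirichlet solve on the subdomain, added into the global correction.
                du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            u += 0.5 * du                       # damping for the overlapping additive update

        print(np.linalg.norm(f - A @ u))        # residual shrinks steadily with the sweeps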

  8. Adaptive integrand decomposition in parallel and orthogonal space

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Peraro, Tiziano; Primo, Amedeo

    2016-08-01

    We present the integrand decomposition of multiloop scattering amplitudes in parallel and orthogonal space-time dimensions, d = d∥ + d⊥, where d∥ is the dimension of the parallel space spanned by the legs of the diagrams. When the number n of external legs is n ≤ 4, the corresponding representation of multiloop integrals exposes a subset of integration variables which can be easily integrated away by means of the Gegenbauer polynomials' orthogonality condition. By decomposing the integration momenta along parallel and orthogonal directions, the polynomial division algorithm is drastically simplified. Moreover, the orthogonality conditions of Gegenbauer polynomials can be suitably applied to integrate the decomposed integrand, yielding the systematic annihilation of spurious terms. Consequently, multiloop amplitudes are expressed in terms of integrals corresponding to irreducible scalar products of loop momenta and external ones. We revisit the one-loop decomposition, which turns out to be controlled by the maximum-cut theorem in different dimensions, and we discuss the integrand reduction of two-loop planar and non-planar integrals up to n = 8 legs, for arbitrary external and internal kinematics. The proposed algorithm extends to all orders in perturbation theory.

  9. Interpolation algorithms for machine tools

    SciTech Connect

    Burleson, R.R.

    1981-08-01

    There are three types of interpolation algorithms presently used in most numerical control systems: the digital differential analyzer, the pulse-rate multiplier, and the binary-rate multiplier. A method for higher-order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
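    For reference, a minimal digital differential analyzer (DDA), the first of the three interpolation schemes listed above: the segment between two points is generated incrementally, one fixed increment per step along the major axis. This is a generic textbook sketch, not the machine-tool implementation discussed in the record.

        def dda_line(x0, y0, x1, y1):
            # Incremental linear interpolation between two integer grid points.
            dx, dy = x1 - x0, y1 - y0
            steps = max(abs(dx), abs(dy))
            if steps == 0:
                return [(x0, y0)]
            x_inc, y_inc = dx / steps, dy / steps
            points, x, y = [], float(x0), float(y0)
            for _ in range(steps + 1):
                points.append((round(x), round(y)))
                x += x_inc
                y += y_inc
            return points

        print(dda_line(0, 0, 7, 3))   # grid points approximating the straight segment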

  10. Simple algorithm for computing the geometric measure of entanglement

    SciTech Connect

    Streltsov, Alexander; Kampermann, Hermann; Bruss, Dagmar

    2011-08-15

    We present an easily implementable algorithm for approximating the geometric measure of entanglement from above. The algorithm can be applied to any multipartite mixed state. It involves only the solution of an eigenproblem and the computation of a singular value decomposition; no further numerical techniques are needed. To provide examples, the algorithm was applied to the isotropic states of three qubits and to the three-qubit XX model with an external magnetic field.

  11. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily the Pauli decomposition and the Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition, and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine the unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was written in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the Earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher for the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation compared to the Pauli decomposition, although more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
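    A small numpy sketch of the Pauli decomposition discussed above: the complex scattering channels are combined into the standard |HH+VV|, |HH-VV|, and |HV| components (surface, double-bounce, and volume scattering) and stacked into an RGB composite. The random channels below stand in for calibrated quad-pol data.

        import numpy as np

        def pauli_rgb(S_hh, S_hv, S_vv):
            # Pauli basis components of the scattering matrix.
            surface = np.abs(S_hh + S_vv) / np.sqrt(2.0)       # odd-bounce (blue)
            double = np.abs(S_hh - S_vv) / np.sqrt(2.0)        # even-bounce (red)
            volume = np.sqrt(2.0) * np.abs(S_hv)               # cross-pol / volume (green)
            rgb = np.dstack([double, volume, surface])
            return rgb / rgb.max()                             # crude scaling for display

        shape = (256, 256)
        S_hh, S_hv, S_vv = (np.random.randn(*shape) + 1j * np.random.randn(*shape)
                            for _ in range(3))
        composite = pauli_rgb(S_hh, S_hv, S_vv)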

  12. Amino Acid Free Energy Decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Fairchild, Michael; Livesay, Dennis; Jacobs, Donald

    2009-03-01

    The Distance Constraint Model (DCM) describes protein thermodynamics at a coarse-grained level based on a Free Energy Decomposition (FED) that assigns energy and entropy contributions to specific molecular interactions. Application of constraint theory accounts for non-additivity in conformational entropy, so that the total free energy of a system can be reconstituted from all its molecular parts. In prior work, a minimal DCM utilized a simple FED involving temperature-independent parameters applied indiscriminately to all residues. Here, we describe a residue-specific FED that depends on local conformational states. The FED of an amino acid is constructed by weighting the energy spectra associated with local energy minima in configuration space by absolute entropies estimated using a quasi-harmonic approximation. Interesting temperature-dependent behavior is found. Support is from NIH R01 GM073082 and a CRI postdoctoral Duke research fellowship for H. Wang.

  13. Metallo-organic decomposition films

    NASA Technical Reports Server (NTRS)

    Gallagher, B. D.

    1985-01-01

    A summary of metallo-organic deposition (MOD) films for solar cells was presented. The MOD materials are metal ions compounded with organic radicals. The technology is evolving quickly for solar cell metallization. Silver compounds, especially silver neodecanoate, were developed which can be applied by thick-film screening, ink-jet printing, spin-on, spray, or dip methods. Some of the advantages of MOD are: high uniform metal content, lower firing temperatures, decomposition without leaving a carbon deposit or toxic materials, and a film that is stable under ambient conditions. Molecular design criteria were explained along with compounds formulated to date, and the accompanying reactions for these compounds. Phase stability and the other experimental and analytic results of MOD films were presented.

  14. Spectral decomposition of phosphorescence decays.

    PubMed

    Fuhrmann, N; Brübach, J; Dreizler, A

    2013-11-01

    In phosphor thermometry, the fitting of decay curves is a key task in the robust and precise determination of temperatures. These decays are generally assumed to be mono-exponential within certain temporal boundaries, where the fitting is performed. The present study suggests a multi-exponential method to determine the spectral distribution of decay times in order to analyze phosphorescence decays and thereby complement the mono-exponential analysis. Therefore, two methods of choice are compared and verified using simulated data in the presence of noise. Additionally, this spectral decomposition is applied to the thermographic phosphor Mg4FGeO6:Mn and reveals changes in the exponential distributions of decay times upon a change of the excitation laser energy.
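    A minimal illustration of going beyond a mono-exponential fit: a bi-exponential model fitted to a synthetic phosphorescence decay with two lifetime components. The amplitudes, lifetimes, and noise level are invented for the sketch and unrelated to Mg4FGeO6:Mn.

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic decay with a fast and a slow component plus noise.
        t = np.linspace(0.0, 5e-3, 500)                                  # seconds
        signal = 1.0 * np.exp(-t / 4e-4) + 0.4 * np.exp(-t / 1.5e-3)
        signal += 0.01 * np.random.default_rng(7).standard_normal(t.size)

        # Bi-exponential model; a single-exponential fit would blur the two lifetimes together.
        biexp = lambda t, a1, tau1, a2, tau2: a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
        popt, _ = curve_fit(biexp, t, signal, p0=[1.0, 3e-4, 0.5, 1e-3])
        tau_fast, tau_slow = sorted([popt[1], popt[3]])
        print(tau_fast, tau_slow)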

  15. Domain decomposition methods in aerodynamics

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Saltz, Joel

    1990-01-01

    Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second-order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of the parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.

  16. A global HMX decomposition model

    SciTech Connect

    Hobbs, M.L.

    1996-12-01

    HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) decomposes by competing reaction pathways to form various condensed and gas-phase intermediate and final products. Gas formation is related to the development of nonuniform porosity and high specific surface areas prior to ignition in cookoff events. Such thermal damage enhances shock sensitivity and favors self-supported accelerated burning. The extent of HMX decomposition in highly confined cookoff experiments remains a major unsolved experimental and modeling problem. The present work is directed at the determination of global HMX kinetics useful for predicting the elapsed time to thermal runaway (ignition) and the extent of decomposition at ignition. Kinetic rate constants for a six-step, engineering-based global mechanism were obtained using gas formation rates measured by Behrens at Sandia National Laboratories with his Simultaneous Thermogravimetric Modulated Beam Mass Spectrometer (STMBMS) experimental apparatus. The six-step global mechanism includes competition between light-gas (H2O, HCN, CO, H2CO, NO, N2O) and heavy-gas (C2H6N2O and C4H10NO2) formation, with zero-order sublimation of HMX and of the mononitroso analog of HMX (mn-HMX), C4H8N8O7. The global mechanism was applied to the highly confined One-Dimensional Time to eXplosion (ODTX) experiment and to hot cell experiments by suppressing the sublimation of HMX and mn-HMX. An additional gas-phase reaction was also included to account for the reaction of N2O with H2CO. Predictions compare adequately to the STMBMS data, ODTX data, and hot cell data. Deficiencies in the model and future directions are discussed.

  17. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  18. Chinese Orthographic Decomposition and Logographic Structure

    ERIC Educational Resources Information Center

    Cheng, Chao-Ming; Lin, Shan-Yuan

    2013-01-01

    "Chinese orthographic decomposition" refers to a sense of uncertainty about the writing of a well-learned Chinese character following a prolonged inspection of the character. This study investigated the decomposition phenomenon in a test situation in which Chinese characters were repeatedly presented in a word context and assessed…

  19. Metallo-Organic Decomposition (MOD) film development

    NASA Technical Reports Server (NTRS)

    Parker, J.

    1986-01-01

    The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contacting structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at the Jet Propulsion Laboratory (JPL) to understand the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films were described in detail.

  20. Sampling Stoichiometry: The Decomposition of Hydrogen Peroxide.

    ERIC Educational Resources Information Center

    Clift, Philip A.

    1992-01-01

    Describes a demonstration of the decomposition of hydrogen peroxide to provide an interesting, quantitative illustration of the stoichiometric relationship between the decomposition of hydrogen peroxide and the formation of oxygen gas. This 10-minute demonstration uses ordinary hydrogen peroxide and yeast that can be purchased in a supermarket.…
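    As a worked example of the stoichiometry behind the demonstration (2 H2O2 -> 2 H2O + O2), the short calculation below estimates the oxygen yield from an assumed 100 mL of 3% (w/v) hydrogen peroxide; the volume, concentration, and molar volume are illustrative assumptions, not values from the article.

        # 2 H2O2 -> 2 H2O + O2: one mole of O2 for every two moles of peroxide decomposed.
        volume_ml = 100.0                  # assumed volume of solution
        concentration = 0.03               # assumed 3% w/v hydrogen peroxide
        molar_mass_h2o2 = 34.01            # g/mol

        mol_h2o2 = volume_ml * concentration / molar_mass_h2o2
        mol_o2 = mol_h2o2 / 2.0
        volume_o2_l = mol_o2 * 24.0        # litres, using ~24 L/mol near room temperature

        print(f"{mol_o2:.3f} mol O2, roughly {volume_o2_l:.2f} L of gas")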

  1. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  2. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  3. English and Turkish Pupils' Understanding of Decomposition

    ERIC Educational Resources Information Center

    Cetin, Gulcan

    2007-01-01

    This study aimed to describe seventh grade English and Turkish students' levels of understanding of decomposition. Data were analyzed descriptively from the students' written responses to four diagnostic questions about decomposition. Results revealed that the English students had considerably higher sound understanding and lower no understanding…

  4. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  5. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  6. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  7. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  8. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  9. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  10. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  11. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  12. GLAS Spacecraft Pointing Study

    NASA Technical Reports Server (NTRS)

    Born, George H.; Gold, Kenn; Ondrey, Michael; Kubitschek, Dan; Axelrad, Penina; Komjathy, Attila

    1998-01-01

    Science requirements for the GLAS mission demand that the laser altimeter be pointed to within 50 m of the location of the previous repeat ground track. The satellite will be flown in a repeat orbit of 182 days. Operationally, the required pointing information will be determined on the ground using the nominal ground track, to which pointing is desired, and the current propagated orbit of the satellite as inputs to the roll computation algorithm developed by CCAR. The roll profile will be used to generate a set of fit coefficients which can be uploaded on a daily basis and used by the on-board attitude control system. In addition, an algorithm has been developed for computation of the associated command quaternions which will be necessary when pointing at targets of opportunity. It may be desirable in the future to perform the roll calculation in an autonomous real-time mode on-board the spacecraft. GPS can provide near real-time tracking of the satellite, and the nominal ground track can be stored in the on-board computer. It will be necessary to choose the spacing of this nominal ground track to meet storage requirements in the on-board environment. Several methods for generating the roll profile from a sparse reference ground track are presented.

  13. Theoretical study of the decomposition pathways and products of C5- perfluorinated ketone (C5 PFK)

    NASA Astrophysics Data System (ADS)

    Fu, Yuwei; Wang, Xiaohua; Li, Xi; Yang, Aijun; Han, Guohui; Lu, Yanhui; Wu, Yi; Rong, Mingzhe

    2016-08-01

    Due to its high global warming potential (GWP) and increasing environmental concerns, efforts to find alternative gases to SF6, which is predominantly used as an insulating and interrupting medium in high-voltage equipment, have become a hot topic in recent decades. Overcoming the drawbacks of the existing candidate gases, C5 perfluorinated ketone (C5 PFK) was reported as a promising gas with remarkable insulation capacity and a low GWP of approximately 1. Experimental measurements of the dielectric strength of this novel gas and its mixtures have been carried out, but the chemical decomposition pathways and products of C5 PFK during breakdown are still unknown; these are essential factors in evaluating the electric strength of this gas in high-voltage equipment. Therefore, this paper is devoted to exploring all the possible decomposition pathways and species of C5 PFK by density functional theory (DFT). The structural optimizations, vibrational frequency calculations and energy calculations of the species involved in a considered pathway were carried out with the DFT-(U)B3LYP/6-311G(d,p) method. The detailed potential energy surface was then investigated thoroughly by the same method. Lastly, six decomposition pathways of C5 PFK, involving fission reactions and reactions with transition states, were obtained. Important intermediate products were also determined. Among all the pathways studied, the favorable decomposition reactions of C5 PFK were found, involving C-C bond ruptures producing Ia and Ib in pathway I, followed by subsequent C-C bond ruptures and internal F atom transfers in the decomposition of Ia and Ib presented in pathways II + III and IV + V, respectively. Possible routes were pointed out in pathway III that lead to the decomposition of IIa, which is the main intermediate product found in pathway II of Ia decomposition. We also investigated the decomposition of Ib, which can undergo unimolecular reactions to give the formation

  14. Morphological Decomposition in Reading Hebrew Homographs.

    PubMed

    Miller, Paul; Liran-Hazan, Batel; Vaknin, Vered

    2016-06-01

    The present work investigates whether and how morphological decomposition processes bias the reading of Hebrew heterophonic homographs, i.e., unique orthographic patterns that are associated with two separate phonological and semantic entities depicted by means of two morphological structures (linear and nonlinear). In order to reveal the nature of the morphological processes involved in the reading of Hebrew homographs, we tested 146 university students with three computerized experiments, each experiment focusing on a different level of processing. Because the three experiments used the same stimulus lists, participants were divided into three experimental groups. Evidence obtained from the analysis of the participants' processing time and processing accuracy points to a propensity to process heterophonic homographs by default as morpho-syntactically simple rather than complex words. Findings are discussed with reference to assumptions made by Dual-Route models regarding the importance of morphological knowledge in fast and accurate access of written words' representations, which mediate the retrieval of their meanings with direct reference to the context in which they occur.

  15. Thermal decomposition of bioactive sodium titanate surfaces

    NASA Astrophysics Data System (ADS)

    Ravelingien, Matthieu; Mullens, Steven; Luyten, Jan; Meynen, Vera; Vinck, Evi; Vervaet, Chris; Remon, Jean Paul

    2009-09-01

    Alkali-treated orthopaedic titanium surfaces have previously been shown to induce apatite deposition. A subsequent heat treatment under air improved the adhesion of the sodium titanate layer but decreased the rate of apatite deposition. Furthermore, insufficient attention was paid to the sensitivity of titanium substrates to oxidation and nitriding during heat treatment under air. Therefore, in the present study, alkali-treated titanium samples were heat-treated under air, argon flow or vacuum. The microstructure and composition of their surfaces were characterized to clarify what mechanism is responsible for inhibiting in vitro calcium phosphate deposition after heat treatment. All heat treatments under the various atmospheres turned out to be detrimental for apatite deposition. They led to the thermal decomposition of the dense sodium titanate basis near the interface with the titanium substrate. Depending on the atmosphere, several forms of TiyOz were formed and Na2O was sublimated. Consequently, fewer exchangeable sodium ions remained available. This pointed to the importance of the ion exchange capacity of the sodium titanate layer for in vitro bioactivity.

  16. Decomposition approach to model smart suspension struts

    NASA Astrophysics Data System (ADS)

    Song, Xubin

    2008-10-01

    Modeling and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology for building models of smart struts for vehicle suspension control development. The modeling approach is based on decomposition of the testing data. According to the strut functions, the data are dissected with respect to both control and physical variables. The data sets are then characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, model optimization is facilitated in comparison with the traditional approach of finding a globally optimal set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can capture the macro-behaviors of these struts.

  17. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan; Karniadakis, George E.; Daniel, Luca

    2015-01-31

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at a cost of only 10 minutes in MATLAB on a regular personal computer.
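
    The following is a minimal sketch of the tensor-train idea used at the high level, assuming the standard TT-SVD construction; it is not the authors' simulator, and the truncation tolerance and test tensor are illustrative.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose `tensor` into tensor-train (TT) cores via sequential SVDs.

    Returns cores G[k] with shapes (r_{k-1}, n_k, r_k) whose contraction
    reproduces the tensor up to the truncation tolerance `eps`.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank_prev = 1
    mat = tensor.reshape(rank_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))   # truncate small singular values
        U, s, Vt = U[:, :keep], s[:keep], Vt[:keep, :]
        cores.append(U.reshape(rank_prev, shape[k], keep))
        rank_prev = keep
        mat = (np.diag(s) @ Vt).reshape(rank_prev * shape[k + 1], -1)
    cores.append(mat.reshape(rank_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full tensor (for checking)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])

# Quick check on a small random tensor (no truncation at this tolerance).
x = np.random.rand(4, 5, 6, 3)
cores = tt_svd(x, eps=1e-12)
print("TT ranks:", [c.shape[2] for c in cores])
print("exact reconstruction:", np.allclose(tt_reconstruct(cores), x))
```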

  18. A domain decomposition method for modelling Stokes flow in porous materials

    NASA Astrophysics Data System (ADS)

    Liu, Guangli; Thompson, Karsten E.

    2002-04-01

    An algorithm is presented for solving the Stokes equation in large disordered two-dimensional porous domains. In this work, it is applied to random packings of discs, but the geometry can be essentially arbitrary. The approach includes the subdivision of the domain and a subsequent application of boundary integral equations to the subdomains. This gives a block diagonal matrix with sparse off-block components that arise from shared variables on internal subdomain boundaries. The global problem is solved using a biconjugate gradient routine with preconditioning. Results show that the effectiveness of the preconditioner is strongly affected by the subdomain structure, from which a methodology is proposed for the domain decomposition step. A minimum is observed in the solution time versus subdomain size, which is governed by the time required for preconditioning, the time for vector multiplications in the biconjugate gradient routine, the iterative convergence rate and issues related to memory allocation. The method is demonstrated on various domains including a random 1000-particle domain. The solution can be used for efficient recovery of point velocities, which is discussed in the context of stochastic modelling of solute transport.
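
    The record's system comes from a boundary-integral formulation, but the sketch below illustrates the general pattern it relies on: pairing a block-diagonal (subdomain-wise) preconditioner with a biconjugate-gradient-type Krylov solver. The matrix, block size and tolerances here are made-up stand-ins, not the paper's discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_jacobi_preconditioner(A, block_size):
    """LinearOperator applying the inverse of A's diagonal blocks.

    Each block plays the role of one subdomain's local system; the sparse
    off-block coupling from shared interface variables is ignored.
    """
    n = A.shape[0]
    A = A.tocsc()
    inv_blocks = []
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        inv_blocks.append(np.linalg.inv(A[start:stop, start:stop].toarray()))

    def apply(x):
        y = np.empty_like(x)
        for i, inv in enumerate(inv_blocks):
            s = i * block_size
            y[s:s + inv.shape[0]] = inv @ x[s:s + inv.shape[0]]
        return y

    return spla.LinearOperator(A.shape, matvec=apply)

# Illustrative block-dominant sparse system (not a Stokes discretization).
rng = np.random.default_rng(0)
n, bs = 400, 40
A = sp.random(n, n, density=0.01, random_state=0) + sp.eye(n) * 10.0
b = rng.standard_normal(n)

M = block_jacobi_preconditioner(A.tocsr(), bs)
x, info = spla.bicgstab(A.tocsr(), b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```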

  19. Tipping Point

    MedlinePlus


  20. A unified statistical framework for material decomposition using multienergy photon counting x-ray detectors

    SciTech Connect

    Choi, Jiyoung; Kang, Dong-Goo; Kang, Sunghoon; Sung, Younghun; Ye, Jong Chul

    2013-09-15

    Purpose: Material decomposition using multienergy photon counting x-ray detectors (PCXD) has been an active research area over the past few years. Even with some success, the problem of optimal energy selection and three-material decomposition including malignant tissue is still an ongoing research topic, and more systematic studies are required. This paper aims to address this in a unified statistical framework in a mammographic environment. Methods: A unified statistical framework for energy level optimization and decomposition of three materials is proposed. In particular, an energy level optimization algorithm is derived using the theory of the minimum variance unbiased estimator, and an iterative algorithm is proposed for material composition as well as system parameter estimation under the unified statistical estimation framework. To verify the performance of the proposed algorithm, the authors performed simulation studies as well as real experiments using a physical breast phantom and an ex vivo breast specimen. Quantitative comparisons using various performance measures were conducted, and qualitative performance evaluations for the ex vivo breast specimen were also performed by comparing with the ground-truth malignant tissue areas identified by radiologists. Results: Both simulation and real experiments confirmed that the energy bins optimized by the proposed method allow better material decomposition quality. Moreover, for specimen thickness estimation errors of up to 2 mm, the proposed method provides good reconstruction results in both simulation and real ex vivo breast phantom experiments compared to existing methods. Conclusions: The proposed statistical framework for PCXD has been successfully applied for the energy optimization and decomposition of three materials in a mammographic environment. Experimental results using the physical breast phantom and ex vivo specimen support the practicality of the proposed algorithm.
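
    A much simplified sketch of multi-bin material decomposition is given below: a linearized attenuation model solved by weighted least squares, with bin counts standing in for inverse variances. The attenuation coefficients and counts are invented for illustration, and this is not the authors' minimum-variance-unbiased or iterative estimator.

```python
import numpy as np

# Hypothetical linear attenuation coefficients [1/cm] for three basis
# materials in three photon-counting energy bins (illustrative numbers only,
# chosen to be well conditioned rather than physically realistic).
MU = np.array([
    [0.9, 0.3, 0.5],   # bin 1
    [0.4, 0.6, 0.2],   # bin 2
    [0.2, 0.3, 0.7],   # bin 3
])

def decompose(log_projections, counts):
    """Estimate basis-material thicknesses from multi-bin log projections.

    Weighted least squares with weights ~ counts, a rough stand-in for the
    inverse variance of each bin measurement under Poisson statistics.
    """
    W = np.diag(counts.astype(float))
    lhs = MU.T @ W @ MU
    rhs = MU.T @ W @ log_projections
    return np.linalg.solve(lhs, rhs)

# Simulate one detector pixel: true thicknesses [cm] and noisy measurements.
t_true = np.array([2.0, 1.5, 0.3])
p_clean = MU @ t_true
counts = np.array([8e4, 5e4, 2e4])
rng = np.random.default_rng(1)
p_noisy = p_clean + rng.normal(scale=1.0 / np.sqrt(counts))
print("estimated thicknesses [cm]:", np.round(decompose(p_noisy, counts), 3))
```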

  1. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
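
    The two operators can be written down compactly in code. The sketch below implements an n-mode product, a Tucker-style contraction of a core with one factor matrix per mode, and a Kruskal-style sum of outer products, then checks that a superdiagonal core makes the two coincide. This is a plain NumPy illustration of the concepts, not the notation or software of the report.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """n-mode product: multiply `matrix` into `tensor` along axis `mode`."""
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

def tucker_operator(core, factors):
    """Tucker-style operator: n-mode multiply every factor into the core."""
    out = core
    for mode, U in enumerate(factors):
        out = mode_n_product(out, U, mode)
    return out

def kruskal_operator(factors, weights=None):
    """Kruskal-style operator: sum of outer products of matching columns."""
    r = factors[0].shape[1]
    if weights is None:
        weights = np.ones(r)
    out = np.zeros([U.shape[0] for U in factors])
    for j in range(r):
        term = weights[j]
        for U in factors:
            term = np.multiply.outer(term, U[:, j])
        out += term
    return out

# A rank-2 Kruskal tensor equals a Tucker tensor with a superdiagonal core.
U = [np.random.rand(4, 2), np.random.rand(5, 2), np.random.rand(3, 2)]
core = np.zeros((2, 2, 2))
core[0, 0, 0] = core[1, 1, 1] = 1.0
print(np.allclose(tucker_operator(core, U), kruskal_operator(U)))
```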

  2. Thermal decomposition of magnesium and calcium sulfates

    SciTech Connect

    Roche, S L

    1982-04-01

    The effect of catalyst on the thermal decomposition of MgSO4 and CaSO4 in vacuum was studied as a function of time in Knudsen cells and, for MgSO4, in open crucibles in vacuum in a Thermal Gravimetric Apparatus. Platinum and Fe2O3 were used as catalysts. The CaSO4 decomposition rate was approximately doubled when Fe2O3 was present in a Knudsen cell. Platinum did not catalyze the CaSO4 decomposition reaction. The initial decomposition rate for MgSO4 was approximately 5 times greater than when additives were present in Knudsen cells but only about 1.5 times greater when decomposition was done in an open crucible.

  3. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species

    PubMed Central

    Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.

    2016-01-01

    Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461

  4. Management intensity alters decomposition via biological pathways

    USGS Publications Warehouse

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage-or extent-of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  5. Mode Decomposition Methods for Soil Moisture Prediction

    NASA Astrophysics Data System (ADS)

    Jana, R. B.; Efendiev, Y. R.; Mohanty, B.

    2014-12-01

    Lack of reliable, well-distributed, long-term datasets for model validation is a bottleneck for most exercises in soil moisture analysis and prediction. Understanding what factors drive soil hydrological processes at different scales, and their variability, is critical to furthering our ability to model the various components of the hydrologic cycle more accurately. For this, a comprehensive dataset with measurements across scales is necessary. Intensive fine-resolution sampling of soil moisture over extended periods of time is financially and logistically prohibitive. Installation of a few long-term monitoring stations is also expensive, and these need to be situated at critical locations. The concept of Time Stable Locations has been in use for some time to find locations that reflect the mean soil moisture values across the watershed under all wetness conditions. However, the soil moisture variability across the watershed is lost when measuring at only time stable locations. We present here a study using techniques such as Dynamic Mode Decomposition (DMD) and the Discrete Empirical Interpolation Method (DEIM) that extends the concept of time stable locations to arrive at locations that provide not simply the average soil moisture values for the watershed, but also those that can help re-capture the dynamics across all locations in the watershed. As with time stability, the initial analysis is dependent on an intensive sampling history. The DMD/DEIM method is an application of model reduction techniques for non-linearly related measurements. Using this technique, we are able to determine the number of sampling points that would be required for a given accuracy of prediction across the watershed, and the location of those points. Locations with higher energetics in the basis domain are chosen first. We present case studies across watersheds in the US and India. The technique can be applied to other hydro-climates easily.
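
    For reference, a textbook exact-DMD sketch on a snapshot matrix is shown below; it is not the authors' DMD/DEIM workflow, and the synthetic "soil moisture" field is invented for illustration.

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact dynamic mode decomposition of a snapshot matrix (states x times).

    Returns DMD modes, their discrete-time eigenvalues, and mode amplitudes.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
    A_tilde = U.conj().T @ Y @ (Vt.conj().T / s)    # low-rank evolution operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ (Vt.conj().T / s) @ W               # exact DMD modes
    amplitudes = np.linalg.lstsq(modes, snapshots[:, 0], rcond=None)[0]
    return modes, eigvals, amplitudes

# Toy field: a decaying oscillation plus a decaying pattern (rank-3 dynamics).
x = np.linspace(0, 1, 200)
t = np.arange(0, 60)
field = (np.outer(np.sin(2 * np.pi * x), 0.98 ** t * np.cos(0.3 * t))
         + np.outer(np.cos(2 * np.pi * x), 0.98 ** t * np.sin(0.3 * t))
         + np.outer(np.cos(np.pi * x), 0.95 ** t))
modes, eigvals, amps = dmd(field, rank=3)
print("eigenvalue magnitudes:", np.round(np.abs(eigvals), 3))
```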

  6. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In this paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The present algorithm reduces the Petri net in order to lower the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The SM-cover found will also be used in the development of algorithms for decomposition, and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  7. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^(-1) = C - B^*A^(-1)B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^(-1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Lambda and its inverse Lambda^(-1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.

  8. Ground point filtering of UAV-based photogrammetric point clouds

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Seijmonsbergen, Arie; Masselink, Rens; Keesstra, Saskia

    2016-04-01

    Unmanned Aerial Vehicles (UAVs) have proved invaluable for generating high-resolution and multi-temporal imagery. Based on photographic surveys, 3D surface reconstructions can be derived photogrammetrically, producing point clouds, orthophotos and surface models. For geomorphological or ecological applications it may be necessary to separate ground points from vegetation points. Existing filtering methods are designed for point clouds derived using other methods, e.g. laser scanning. The purpose of this paper is to test three filtering algorithms for the extraction of ground points from point clouds derived from low-altitude aerial photography. Three subareas were selected from a single flight which represent different scenarios: 1) a low-relief, sparsely vegetated area, 2) a low-relief, moderately vegetated area, and 3) a medium-relief, moderately vegetated area. The three filtering methods classify ground points in different ways, based on 1) RGB color values from training samples, 2) TIN densification as implemented in LAStools, and 3) an iterative surface lowering algorithm. Ground points are then interpolated into a digital terrain model using inverse distance weighting. The results suggest that different landscapes require different filtering methods for optimal ground point extraction. While iterative surface lowering and TIN densification are fully automated, color-based classification requires fine-tuning in order to optimize the filtering results. Finally, we conclude that filtering photogrammetric point clouds could provide a cheap alternative to laser scan surveys for creating digital terrain models in sparsely vegetated areas.
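
    A toy version of the third approach, iterative surface lowering, is sketched below: grid the remaining points, take each cell's minimum as a provisional ground surface, and discard points that sit too far above it. The cell size, threshold and synthetic scene are assumptions for illustration, not the parameters tested in the paper.

```python
import numpy as np

def iterative_surface_lowering(points, cell=1.0, threshold=0.3, n_iter=5):
    """Classify ground points by repeatedly fitting a minimum surface.

    points: (N, 3) array of x, y, z. Returns a boolean ground mask. Each pass
    grids the currently accepted points, takes the per-cell minimum as a
    provisional ground surface, and drops points more than `threshold` above
    it, so vegetation returns are progressively removed.
    """
    ground = np.ones(len(points), dtype=bool)
    x, y, z = points.T
    ix = np.floor(x / cell).astype(int)
    iy = np.floor(y / cell).astype(int)
    cell_id = ix * (iy.max() + 1) + iy
    for _ in range(n_iter):
        surface = {}
        for cid, zi in zip(cell_id[ground], z[ground]):
            surface[cid] = min(surface.get(cid, np.inf), zi)
        above = np.array([z[i] - surface[cell_id[i]] for i in range(len(points))])
        ground = above <= threshold
    return ground

# Synthetic scene: gently sloping terrain plus a fraction of "vegetation" returns.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 20, size=(2000, 2))
z_ground = 0.05 * xy[:, 0] + rng.normal(0, 0.03, 2000)
veg = rng.random(2000) < 0.3
z = z_ground + veg * rng.uniform(0.5, 3.0, 2000)
mask = iterative_surface_lowering(np.column_stack([xy, z]))
print("classified as ground:", mask.sum(), "of", len(mask))
```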

  9. Compiling quantum algorithms for architectures with multi-qubit gates

    NASA Astrophysics Data System (ADS)

    Martinez, Esteban A.; Monz, Thomas; Nigg, Daniel; Schindler, Philipp; Blatt, Rainer

    2016-06-01

    In recent years, small-scale quantum information processors have been realized in multiple physical architectures. These systems provide a universal set of gates that allow one to implement any given unitary operation. The decomposition of a particular algorithm into a sequence of these available gates is not unique. Thus, the fidelity of the implementation of an algorithm can be increased by choosing an optimized decomposition into available gates. Here, we present a method to find such a decomposition, where a small-scale ion trap quantum information processor is used as an example. We demonstrate a numerical optimization protocol that minimizes the number of required multi-qubit entangling gates by design. Furthermore, we adapt the method for state preparation, and quantum algorithms including in-sequence measurements.

  10. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
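
    For contrast with the structure-aware omega-CDBT, the sketch below shows only the plain chronological backtracking search that such algorithms refine; the coloring-style example CSP is illustrative.

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Plain chronological backtracking for binary CSPs.

    constraints: dict mapping (x, y) -> predicate(vx, vy). No decomposition
    or constraint-graph analysis is performed here.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # check every constraint touching `var` whose other endpoint is assigned
        ok = all(pred(value if x == var else assignment[x],
                      value if y == var else assignment[y])
                 for (x, y), pred in constraints.items()
                 if (x == var or y == var) and {x, y} - {var} <= assignment.keys())
        if ok:
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None

# Example: colouring-style CSP where adjacent variables must take different values.
variables = ["A", "B", "C", "D"]
domains = {v: [1, 2, 3] for v in variables}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
constraints = {e: (lambda a, b: a != b) for e in edges}
print(backtrack(variables, domains, constraints))
```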

  11. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
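
    A hedged sketch of the general idea, a truncated-SVD least-squares correction whose scaled norm is bounded before it is applied, is given below; the scaling, bound and test problem are illustrative assumptions, not the implementation described in the paper.

```python
import numpy as np

def svd_partial_step(H, residual, sigma_apriori, max_step, svd_tol=1e-10):
    """One bounded correction for a nonlinear least-squares / orbit-like problem.

    H: Jacobian of measurements w.r.t. solve-for parameters.
    residual: observed-minus-computed measurements.
    sigma_apriori: a priori standard deviations used to scale the parameters.
    max_step: bound on the scaled correction norm (the "partial step").
    """
    # Scale columns by the a priori sigmas so the step bound is meaningful.
    Hs = H * sigma_apriori
    U, s, Vt = np.linalg.svd(Hs, full_matrices=False)
    keep = s > svd_tol * s[0]                # drop near-singular directions
    dx_scaled = Vt[keep].T @ ((U[:, keep].T @ residual) / s[keep])
    norm = np.linalg.norm(dx_scaled)
    if norm > max_step:                      # partial step: shrink, keep direction
        dx_scaled *= max_step / norm
    return dx_scaled * sigma_apriori

# Illustrative use on a synthetic, poorly conditioned linear problem, where
# the unconstrained correction would blow up along a near-singular direction.
rng = np.random.default_rng(3)
H = rng.standard_normal((30, 4))
H[:, 3] = H[:, 2] + 1e-8 * rng.standard_normal(30)   # nearly dependent columns
x_true = np.array([1.0, -2.0, 0.5, 0.5])
res = H @ x_true + 1e-3 * rng.standard_normal(30)
dx = svd_partial_step(H, res, sigma_apriori=np.ones(4), max_step=2.0)
print("bounded correction:", np.round(dx, 3))
```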

  12. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. Advancement to the current practice demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem interdependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.

  13. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating this process by using an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regressions and ARIMA) and recent forecasting practices (such as data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users who lack in-depth knowledge of the forecasting process.
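
    In the same spirit of cross-checking a simple forecaster against reference tools, the sketch below implements one-step-ahead moving average and simple exponential smoothing forecasts with a MAPE error measure on simulated data; it covers only two of the techniques listed and is not the authors' JAVA system.

```python
import numpy as np

def moving_average_forecast(series, window):
    """One-step-ahead forecasts: mean of the previous `window` observations."""
    series = np.asarray(series, dtype=float)
    return np.array([series[t - window:t].mean()
                     for t in range(window, len(series))])

def ses_forecast(series, alpha):
    """Simple exponential smoothing, returning one-step-ahead forecasts."""
    series = np.asarray(series, dtype=float)
    level = series[0]
    forecasts = []
    for y in series[1:]:
        forecasts.append(level)            # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level
    return np.array(forecasts)

def mape(actual, forecast):
    """Mean absolute percentage error, a common validation measure."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hold-out style check on a simulated yearly-like series with trend + noise.
rng = np.random.default_rng(4)
y = 100 + 2.0 * np.arange(48) + rng.normal(0, 3, 48)
print("MA(4)   MAPE:", round(mape(y[4:], moving_average_forecast(y, 4)), 2))
print("SES 0.5 MAPE:", round(mape(y[1:], ses_forecast(y, 0.5)), 2))
```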

  14. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  15. Data decomposition of Monte Carlo particle transport simulations via tally servers

    SciTech Connect

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.

  16. Decomposition of cellulose by ultrasonic welding in water

    NASA Astrophysics Data System (ADS)

    Nomura, Shinfuku; Miyagawa, Seiya; Mukasa, Shinobu; Toyota, Hiromichi

    2016-07-01

    The use of ultrasonic welding in water to decompose cellulose placed in water was examined experimentally. Filter paper was used as the decomposition material, with a 19.5 kHz horn-type transducer adopted as the ultrasonic welding power source. The frictional heat at the point where the surface of the tip of the ultrasonic horn contacts the filter paper decomposes the cellulose in the filter paper into 5-hydroxymethylfurfural (5-HMF), furfural, and oligosaccharide through hydrolysis and thermolysis occurring in the welding process.

  17. [Research on the surface electromyography signal decomposition based on multi-channel signal fusion analysis].

    PubMed

    Li, Qiang; Yang, Jihai

    2012-10-01

    The decomposition method of surface electromyography (sEMG) signals was explored by using multi-channel information extraction and fusion analysis to acquire the motor unit action potential (MUAP) patterns. The action potential waveforms were detected with a combined method of continuous wavelet transform and hypothesis testing, and the effectiveness of the detection was judged from the multi-channel firing processes of motor units. The cluster number of MUAPs was confirmed by the hierarchical clustering technique, and the decomposition was then implemented by fuzzy k-means clustering algorithms. The unclassified waveforms were processed by the template matching and peel-off methods. The experimental results showed that several kinds of MUAPs were precisely extracted from the multi-channel sEMG signals. The spatial potential distribution information of motor units could be satisfactorily represented by the proposed decomposition method. PMID:23198440

  18. Non-conformal domain decomposition methods for time-harmonic Maxwell equations.

    PubMed

    Shao, Yang; Peng, Zhen; Lim, Kheng Hwee; Lee, Jin-Fa

    2012-09-01

    We review non-conformal domain decomposition methods (DDMs) and their applications in solving electrically large and multi-scale electromagnetic (EM) radiation and scattering problems. In particular, a finite-element DDM, together with a finite-element tearing and interconnecting (FETI)-like algorithm, incorporating Robin transmission conditions and an edge corner penalty term, are discussed in detail. We address in full the formulations, and subsequently, their applications to problems with significant amounts of repetitions. The non-conformal DDM approach has also been extended into surface integral equation methods. We elucidate a non-conformal integral equation domain decomposition method and a generalized combined field integral equation method for modelling EM wave scattering from non-penetrable and penetrable targets, respectively. Moreover, a plane wave scattering from a composite mockup fighter jet has been simulated using the newly developed multi-solver domain decomposition method. PMID:22870061

  19. Phase-context decomposition of diagonal unitaries for higher-dimensional systems

    NASA Astrophysics Data System (ADS)

    Beer, Kerstin; Dziemba, Friederike Anna

    2016-05-01

    We generalize the efficient decomposition method for phase-sparse diagonal operators of J. Welch et al. [Quantum Info. Comput. 16, 87 (2016)] to qudit systems. The phase-context-aware method focuses on cascaded entanglers, whose decomposition into multicontrolled inc gates can be optimized by the choice of a proper signed base-d representation for the natural numbers. While the gate count of the best-known decomposition method for general diagonal operators on qubit systems scales with O(2^n), the circuits synthesized by the Welch algorithm for diagonal operators with k distinct phases are upper-bounded by O(n 2^k), which is generalized to O(d n 2^k) for the qudit case in this paper.

  20. Non-conformal domain decomposition methods for time-harmonic Maxwell equations

    PubMed Central

    Shao, Yang; Peng, Zhen; Lim, Kheng Hwee; Lee, Jin-Fa

    2012-01-01

    We review non-conformal domain decomposition methods (DDMs) and their applications in solving electrically large and multi-scale electromagnetic (EM) radiation and scattering problems. In particular, a finite-element DDM, together with a finite-element tearing and interconnecting (FETI)-like algorithm, incorporating Robin transmission conditions and an edge corner penalty term, are discussed in detail. We address in full the formulations, and subsequently, their applications to problems with significant amounts of repetitions. The non-conformal DDM approach has also been extended into surface integral equation methods. We elucidate a non-conformal integral equation domain decomposition method and a generalized combined field integral equation method for modelling EM wave scattering from non-penetrable and penetrable targets, respectively. Moreover, a plane wave scattering from a composite mockup fighter jet has been simulated using the newly developed multi-solver domain decomposition method. PMID:22870061

  1. Layout decomposition of self-aligned double patterning for 2D random logic patterning

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.

    2011-04-01

    Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its lower overlay problem and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layouts.

  2. Domain decomposition methods for parallel laser-tissue models with Monte Carlo transport

    SciTech Connect

    Alme, H.J.; Rodrique, G.; Zimmerman, G.

    1998-10-19

    Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.

  3. The effect of body size on the rate of decomposition in a temperate region of South Africa.

    PubMed

    Sutherland, A; Myburgh, J; Steyn, M; Becker, P J

    2013-09-10

    Forensic anthropologists rely on the state of decomposition of a body to estimate the post-mortem-interval (PMI) which provides information about the natural events and environmental forces that could have affected the remains after death. Various factors are known to influence the rate of decomposition, among them temperature, rainfall and exposure of the body. However, conflicting reports appear in the literature on the effect of body size on the rate of decay. The aim of this project was to compare decomposition rates of large pigs (Sus scrofa; 60-90 kg), with that of small pigs (<35 kg), to assess the influence of body size on decomposition rates. For the decomposition rates of small pigs, 15 piglets were assessed three times per week over a period of three months during spring and early summer. Data collection was conducted until complete skeletonization occurred. Stages of decomposition were scored according to separate categories for each anatomical region, and the point values for each region were added to determine the total body score (TBS), which represents the overall stage of decomposition for each pig. For the large pigs, data of 15 pigs were used. Scatter plots illustrating the relationships between TBS and PMI as well as TBS and accumulated degree days (ADD) were used to assess the pattern of decomposition and to compare decomposition rates between small and large pigs. Results indicated that rapid decomposition occurs during the early stages of decomposition for both samples. Large pigs showed a plateau phase in the course of advanced stages of decomposition, during which decomposition was minimal. A similar, but much shorter plateau was reached by small pigs of >20 kg at a PMI of 20-25 days, after which decomposition commenced swiftly. This was in contrast to the small pigs of <20 kg, which showed no plateau phase and their decomposition rates were swift throughout the duration of the study. Overall, small pigs decomposed 2.82 times faster than

  4. Iterative most likely oriented point registration.

    PubMed

    Billings, Seth; Taylor, Russell

    2014-01-01

    A new algorithm for model based registration is presented that optimizes both position and surface normal information of the shapes being registered. This algorithm extends the popular Iterative Closest Point (ICP) algorithm by incorporating the surface orientation at each point into both the correspondence and registration phases of the algorithm. For the correspondence phase an efficient search strategy is derived which computes the most probable correspondences considering both position and orientation differences in the match. For the registration phase an efficient, closed-form solution provides the maximum likelihood rigid body alignment between the oriented point matches. Experiments by simulation using human femur data demonstrate that the proposed Iterative Most Likely Oriented Point (IMLOP) algorithm has a strong accuracy advantage over ICP and has increased ability to robustly identify a successful registration result.
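
    A simplified sketch of the idea, an ICP loop whose correspondence cost blends position and normal differences before a closed-form rigid alignment, is shown below. It uses brute-force matching and a fixed normal weight rather than the probabilistic formulation of IMLOP, and the toy data are invented.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def oriented_icp(src_pts, src_nrm, dst_pts, dst_nrm, iters=20, w_normal=0.1):
    """ICP where matching blends squared position and normal differences."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        p = src_pts @ R.T + t
        n = src_nrm @ R.T
        # brute-force correspondence using a combined position + normal cost
        cost = (((p[:, None, :] - dst_pts[None]) ** 2).sum(-1)
                + w_normal * ((n[:, None, :] - dst_nrm[None]) ** 2).sum(-1))
        idx = cost.argmin(axis=1)
        R, t = best_rigid_transform(src_pts, dst_pts[idx])
    return R, t

# Toy check: recover a known rotation/translation of an oriented point cloud.
rng = np.random.default_rng(5)
pts = rng.standard_normal((300, 3))
nrm = pts / np.linalg.norm(pts, axis=1, keepdims=True)   # fake unit normals
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.4])
dst = pts @ R_true.T + t_true
R_est, t_est = oriented_icp(pts, nrm, dst, nrm @ R_true.T)
print("rotation error:", np.linalg.norm(R_est - R_true))
```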

  5. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2004-11-18

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of thermal analysis data types, including mass loss for isothermal and constant rate heating in an open pan and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol range for open pan experiments and about 150 to 165 kJ/mol for sealed pan experiments. Our activation energies tend to be slightly lower than those derived from data supplied by the University of Utah, which we consider the best previous thermal analysis work. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated in closed pan experiments, and one global reaction appears to fit the data well. Comparison of our rate measurements with additional literature sources for open and closed low temperature pyrolysis from Sandia gives a likely activation energy of 165 kJ/mol at 10% conversion.

  6. Thermal Decomposition Kinetics of HMX

    SciTech Connect

    Burnham, A K; Weese, R K

    2005-03-17

    Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of types of data, including mass loss for isothermal and constant rate heating in an open pan, and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol regime for open pan experiments and about 150-165 kJ/mol for sealed-pan experiments. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated for closed pan experiments, and one global reaction fits the data fairly well. Our A-E values lie in the middle of the values given in a compensation-law plot by Brill et al. (1994). Comparison with additional open and closed low temperature pyrolysis experiments support an activation energy of 165 kJ/mol at 10% conversion.

  7. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  8. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the ''chains-on-chains partitioning'' problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the ''hope'' of good decompositions and the ''myth'' of exact algorithms being hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms, which are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse-matrix vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with efficient implementations discussed in this paper can effectively replace heuristics.
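
    The core of exact chains-on-chains partitioning is a greedy probe that tests whether a candidate bottleneck value is feasible; searching the probe over candidate bottlenecks then yields the optimum. The sketch below illustrates that idea on a small instance and is not one of the tuned algorithms from the paper.

```python
import numpy as np

def probe(prefix, n_parts, bottleneck):
    """Greedily check whether the chain can be cut into at most `n_parts`
    contiguous parts whose loads are all <= bottleneck."""
    n = len(prefix) - 1
    start, parts = 0, 0
    while start < n:
        # furthest cut such that the load of tasks start..end-1 stays in bound
        end = np.searchsorted(prefix, prefix[start] + bottleneck, side="right") - 1
        if end == start:            # a single task already exceeds the bound
            return False
        start = end
        parts += 1
        if parts > n_parts:
            return False
    return True

def ccp_exact(weights, n_parts):
    """Optimal bottleneck for chains-on-chains partitioning (small instances).

    Binary-searches over all contiguous-subchain sums, which always contain
    the optimum. Exact but deliberately simple; fine as an illustration,
    not as a tuned implementation.
    """
    prefix = np.concatenate([[0.0], np.cumsum(weights)])
    candidates = sorted({prefix[j] - prefix[i]
                         for i in range(len(prefix))
                         for j in range(i + 1, len(prefix))})
    lo, hi = 0, len(candidates) - 1
    while lo < hi:                  # smallest feasible candidate value
        mid = (lo + hi) // 2
        if probe(prefix, n_parts, candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

weights = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print("optimal bottleneck for 4 parts:", ccp_exact(weights, 4))
```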

  9. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    For both LASIS and LAMIS image data, the traditional MCA algorithm can separate the interference stripe signals and background signals very well, achieving good interference hyperspectral image decomposition, and the improved MCA algorithm not only keeps the good results of the traditional MCA algorithm, but also reduces the number of iterations and meets the iterative convergence conditions much faster than the traditional MCA algorithm, which will also provide a very good solution for the new theory of compressive sensing. PMID:27228777

  10. [Pathway of aqueous ferric hydroxide catalyzed ozone decomposition and ozonation of trace nitrobenzene].

    PubMed

    Ma, Jun; Zhang, Tao; Chen, Zhong-lin; Sui, Ming-hao; Li, Xue-yan

    2005-03-01

    In this paper, the decomposition rate of ozone in water was measured over GAC and ferric hydroxide/GAC (FeOOH/GAC) catalysts and the mechanism of ozone catalytic decomposition was discussed. The catalytic ozonation activity of trace nitrobenzene in water was determined on several metal oxides and correlated with their surface density of hydroxyl groups and pHzpc (pH of zero point of charge). The results show that: 1) The pseudo-first order rate of ozone decomposition increased by 68 and 108 percent for the GAC and FeOOH/GAC catalysts respectively; 2) When t-butanol was added, the rate constant decreased by 9% for GAC and 20% for FeOOH/GAC; 3) There was no direct correlation between surface density of hydroxyl groups and the activity of catalytic ozonation of nitrobenzene; 4) An oxide surface near the point of zero charge was favorable for the catalytic ozonation of nitrobenzene.

  11. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    SciTech Connect

    Smolinski, B.

    1999-09-03

    The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 Multiple Knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
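
    For orientation, the sketch below implements only a density-ordered greedy baseline of the G(x) kind, with an optional shuffle to randomize tie-breaking; the agent/market M(x) algorithm itself is not reproduced, and the item data are invented.

```python
import random

def greedy_multiple_knapsack(items, capacities, shuffle=False, seed=None):
    """Greedy approximation for the 0-1 multiple knapsack problem.

    items: list of (value, weight). capacities: list of knapsack capacities.
    Items are considered in order of value density (optionally shuffled first
    so ties are broken randomly) and placed in the first knapsack with enough
    remaining capacity. Returns the total value and the assignment.
    """
    order = list(range(len(items)))
    if shuffle:
        random.Random(seed).shuffle(order)
    order.sort(key=lambda i: items[i][0] / items[i][1], reverse=True)
    remaining = list(capacities)
    assignment = {}           # item index -> knapsack index
    total = 0
    for i in order:
        value, weight = items[i]
        for k, cap in enumerate(remaining):
            if weight <= cap:
                remaining[k] -= weight
                assignment[i] = k
                total += value
                break
    return total, assignment

items = [(10, 5), (40, 20), (30, 10), (50, 25), (35, 14), (25, 9)]
capacities = [30, 25]
value, assignment = greedy_multiple_knapsack(items, capacities)
print("greedy value:", value, "assignment:", assignment)
```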

  12. Point-based manifold harmonics.

    PubMed

    Liu, Yang; Prabhakaran, Balakrishnan; Guo, Xiaohu

    2012-10-01

    This paper proposes an algorithm to build a set of orthogonal Point-Based Manifold Harmonic Bases (PB-MHB) for spectral analysis over point-sampled manifold surfaces. To ensure that PB-MHB are orthogonal to each other, it is necessary to have symmetrizable discrete Laplace-Beltrami Operator (LBO) over the surfaces. Existing converging discrete LBO for point clouds, as proposed by Belkin et al., is not guaranteed to be symmetrizable. We build a new point-wisely discrete LBO over the point-sampled surface that is guaranteed to be symmetrizable, and prove its convergence. By solving the eigen problem related to the new operator, we define a set of orthogonal bases over the point cloud. Experiments show that the new operator is converging better than other symmetrizable discrete Laplacian operators (such as graph Laplacian) defined on point-sampled surfaces, and can provide orthogonal bases for further spectral geometric analysis and processing tasks.
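
    The sketch below builds an orthogonal basis on a point cloud from a symmetrized k-nearest-neighbour graph Laplacian, i.e. one of the symmetrizable alternatives the paper compares against, rather than the authors' point-wise discrete LBO; the kernel width, neighbourhood size and toy cylinder are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_basis(points, k=8, n_modes=6):
    """Orthogonal harmonic-like basis on a point cloud from a kNN graph
    Laplacian. A symmetric Laplacian guarantees orthogonal eigenvectors."""
    n = len(points)
    # brute-force kNN (fine for small clouds)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    sigma2 = np.median(d2[np.arange(n)[:, None], idx])   # Gaussian kernel width
    rows = np.repeat(np.arange(n), k)
    cols = idx.ravel()
    w = np.exp(-d2[rows, cols] / sigma2)
    W = sp.coo_matrix((w, (rows, cols)), shape=(n, n))
    W = (W + W.T) * 0.5                                   # symmetrize the weights
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    # shift-invert around a point below the spectrum to get the smallest modes
    vals, vecs = eigsh(L.tocsc(), k=n_modes, sigma=-0.01, which="LM")
    return vals, vecs

# Point-sampled cylinder surface as a toy "manifold".
rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 600)
z = rng.uniform(0, 2, 600)
cloud = np.column_stack([np.cos(theta), np.sin(theta), z])
vals, vecs = laplacian_basis(cloud)
print("smallest eigenvalues:", np.round(vals, 4))
print("basis orthogonal:", np.allclose(vecs.T @ vecs, np.eye(vecs.shape[1]), atol=1e-8))
```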

  13. QRS detection by lifting scheme constructing multi-resolution morphological decomposition.

    PubMed

    Zhang, Pu; Ma, Heather T; Zhang, Qinyu

    2014-01-01

    The QRS complex detection algorithm is the core of ECG auto-diagnosis methods and deeply influences cardiac cycle division for signal compression. However, ECG signals collected by noninvasive surface electrodes are usually mixed with several kinds of interference, and their waveform variation is the main reason ECG processing is hard to realize. This paper proposes a QRS complex detection algorithm based on multi-resolution mathematical morphological decomposition. This algorithm combines the strengths in R peak detection of both the mathematical morphological method and multi-resolution decomposition. Moreover, a lifting construction method with a maximization updating operator is adopted to further improve the algorithm performance. An efficient R peak search-back algorithm is also employed to reduce the false positives (FP) and false negatives (FN). The proposed algorithm provides good performance when applied to the MIT-BIH Arrhythmia Database, achieving over 99% detection rate, sensitivity and positive predictivity, respectively, with a low calculation burden. Therefore, the proposed method is appropriate for portable medical devices in telemedicine systems. PMID:25569905

  14. Impact of heavy metals on mass and energy flux within the decomposition process in deciduous forests.

    PubMed

    Köhler, H R; Wein, C; Reiss, S; Storch, V; Alberti, G

    1995-04-01

    : Laboratory experiments on microbial decomposition and on the contribution of diplopods to organic matter decomposition in soil were combined with field studies to reveal the major points of heavy metal effects on the leaf litter decomposition process. The study focused on the accumulation of organic litter material in heavy metal-contaminated soils. Microbial decomposition of freshly fallen leaves remained quantitatively unaffected by artificial lead contamination (1000 mg kg(-1)). The same was true for further decomposed leaf litter material, provided that the breakdown of this material was not influenced by faunal components. Although nutrient absorption in diplopods is affected by high lead contents in the food, this effect alone, however, was shown not to be sufficient for the massive deceleration of the decomposition process under heavy metal influence which could not only be observed in the field but occurred in microcosm studies as well. Reduced reproduction and lower activity of the diplopods most likely were responsible for the observation that lead-influenced diplopods enhanced microbial activity in soil only in a lesser degree than uncontaminated animals did. This effect is assigned to represent the main reason for decreased decomposition rates and the subsequent accumulation of organic material in heavy metal-contaminated soils.

  15. Pointing the SOFIA Telescope

    NASA Astrophysics Data System (ADS)

    Gross, M. A. K.; Rasmussen, J. J.; Moore, E. M.

    2010-12-01

    SOFIA is an airborne, gyroscopically stabilized 2.5m infrared telescope, mounted to a spherical bearing. Unlike its predecessors, SOFIA will work in absolute coordinates, despite its continually changing position and attitude. In order to manage this, SOFIA must relate equatorial and telescope coordinates using a combination of avionics data and star identification, manage field rotation and track sky images. We describe the algorithms and systems required to acquire and maintain the equatorial reference frame, relate it to tracking imagers and the science instrument, set up the oscillating secondary mirror, and aggregate pointings into relocatable nods and dithers.

  16. New iterative gridding algorithm using conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Jiang, Xuguang; Thedens, Daniel

    2004-05-01

    Non-uniformly sampled data in MRI applications must be interpolated onto a regular Cartesian grid to perform fast image reconstruction using the FFT. The conventional method for this is gridding, which requires a density compensation function (DCF). The calculation of the DCF may be time-consuming, ambiguously defined, and may not always be reusable due to changes in k-space trajectories. A recently proposed reconstruction method that eliminates the requirement of a DCF is block uniform resampling (BURS), which uses singular value decomposition (SVD). However, the SVD is still computationally intensive. In this work, we present a modified BURS algorithm using the conjugate gradient method (CGM) in place of direct SVD calculation. Calculation of a block of grid point values in each iteration further reduces the computational load. The new method reduces the calculation complexity while maintaining a high-quality reconstruction result. For an n-by-n matrix, the time complexity per iteration is reduced from O(n^3) in SVD to O(n^2) in CGM. The time can be further reduced by stopping the iteration in CGM earlier according to the norm of the residual vector. Using this method, the quality of the reconstructed image improves compared to regularized BURS. The reduced time complexity and improved reconstruction result make the new algorithm promising for dealing with large-sized images and 3D images.
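
    A generic conjugate gradient iteration with early stopping on the residual norm, the ingredient the record substitutes for direct SVD, is sketched below; it is applied to a small synthetic normal-equations system, not to the BURS gridding matrices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients.

    Iteration stops early once the residual norm drops below `tol`, the kind
    of early termination the record uses to save computation time.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for it in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, it + 1
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, max_iter

# Small SPD test system (e.g., a normal-equations matrix G^T G).
rng = np.random.default_rng(7)
G = rng.standard_normal((50, 20))
A = G.T @ G + 1e-3 * np.eye(20)
b = rng.standard_normal(20)
x, iters = conjugate_gradient(A, b)
print("iterations:", iters, "residual:", np.linalg.norm(A @ x - b))
```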

  17. Orthogonal least squares learning algorithm for radial basis function networks.

    PubMed

    Chen, S; Cowan, C N; Grant, P M

    1991-01-01

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.

  18. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of the Cholesky decomposition of covariance matrices, a crucial computation in adaptive signal processing. Following this are discussions of issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation.
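
    The systolic formulation itself is a hardware design, but the computation it streams can be sketched in a few lines: a rank-one update of a Cholesky factor performed in place, without refactorizing the covariance matrix. The matrix sizes below are illustrative.

```python
# Sketch of the computation a triangularization array streams: a rank-one
# update of a Cholesky factor, L L^T + x x^T -> L' L'^T, performed in place
# without refactorizing the full covariance matrix.
import numpy as np

def chol_rank1_update(L, x):
    """Return lower-triangular L' with L' L'^T = L L^T + x x^T."""
    L, x = L.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])        # Givens-like rotation parameters
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
cov = A @ A.T + 5.0 * np.eye(5)            # SPD "covariance" matrix
x = rng.standard_normal(5)                 # new data snapshot
L = np.linalg.cholesky(cov)
L_up = chol_rank1_update(L, x)
print(np.allclose(L_up @ L_up.T, cov + np.outer(x, x)))   # True
```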

  19. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    SciTech Connect

    Dong, Xue; Niu, Tianye; Zhu, Lei

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable, in the sense that the relative magnitude of the decomposed signals is reduced by signal cancellation while the image noise accumulates from the two CT images of independent scans. Direct image decomposition therefore leads to severe degradation of the signal-to-noise ratio of the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not exploit the statistical properties of the decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem that balances the data fidelity and the total variation of the decomposed images in one framework, and the decomposition step is carried out iteratively together with the reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method's performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan 600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one
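
    The combined reconstruction-decomposition algorithm is not reproduced here; the sketch below only illustrates the total-variation regularizer that the framework balances against data fidelity, applied as plain gradient descent to a noisy piecewise-constant image. The parameters (lam, step, eps) and phantom are illustrative.

```python
# Sketch of the total-variation regularizer only (not the combined
# reconstruction/decomposition of the paper): gradient descent on
# 0.5*||u - f||^2 + lam*TV_eps(u) for a noisy piecewise-constant image f,
# using periodic forward differences so the smoothed-TV gradient is exact.
import numpy as np

def tv_gradient(u, eps=0.05):
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def tv_denoise(f, lam=0.2, step=0.05, n_iter=300):
    u = f.copy()
    for _ in range(n_iter):
        u -= step * ((u - f) + lam * tv_gradient(u))
    return u

rng = np.random.default_rng(3)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0                  # piecewise-constant phantom
noisy = truth + 0.3 * rng.standard_normal(truth.shape)
denoised = tv_denoise(noisy)
print(float(np.abs(noisy - truth).mean()), float(np.abs(denoised - truth).mean()))
```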

  20. A Decomposition Theorem for Finite Automata.

    ERIC Educational Resources Information Center

    Santa Coloma, Teresa L.; Tucci, Ralph P.

    1990-01-01

    Described is automata theory, a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)

  1. Robustness of the Koopman mode decomposition of the Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Budisic, Marko; Sondak, David

    2015-11-01

    Modal decompositions extract time-invariant spatial shapes (modes) and associated time-varying combination coefficients from a set of measurements of a dynamical process. In recent years, the Koopman mode decomposition and the related Dynamic Mode Decomposition (DMD) have been gaining prominence in the fluids community, as it was demonstrated that Koopman modes and frequencies match the understanding of dynamical phenomena more often than the modes obtained by the Proper Orthogonal Decomposition (POD). Nevertheless, DMD algorithms are sensitive to measurement noise, and numerical instabilities can pollute the algorithm output. We employ statistical subsampling to estimate the robustness of the Koopman modes computed from uniformly sampled measurements, based on non-uniform random subsamples of the same dataset, in an effort to separate the salient dynamical features from unwanted numerical artifacts. The target application is the search for optimal heat transport modes in Rayleigh-Bénard convection, under the assumption that such structures can be robustly observed even in fully developed turbulence.
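
    The subsampling/robustness study itself is not reproduced, but a minimal exact-DMD sketch shows the quantities being assessed: modes and continuous-time frequencies extracted from uniformly sampled snapshots. The synthetic travelling-wave data and truncation rank are illustrative.

```python
# Minimal exact-DMD sketch: extract modes and continuous-time frequencies from
# uniformly sampled snapshots (columns of the data matrix). The synthetic data
# are two travelling waves with frequencies 5 and 9 rad/s.
import numpy as np

def dmd(snapshots, rank, dt):
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = (U.conj().T @ Y @ V) / s      # low-rank one-step propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W                 # exact DMD modes
    freqs = np.log(eigvals) / dt            # continuous-time eigenvalues
    return modes, freqs

x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.arange(0.0, 4.0, 0.01)
data = (np.sin(2.0 * x[:, None] - 5.0 * t[None, :])
        + 0.5 * np.sin(3.0 * x[:, None] + 9.0 * t[None, :]))
modes, freqs = dmd(data, rank=4, dt=0.01)
print(np.round(np.sort(np.abs(freqs.imag)), 2))   # approximately [5, 5, 9, 9]
```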

  2. Identification of Clathrate Hydrates, Hexagonal Ice, Cubic Ice, and Liquid Water in Simulations: the CHILL+ Algorithm.

    PubMed

    Nguyen, Andrew H; Molinero, Valeria

    2015-07-23

    Clathrate hydrates and ice I are the most abundant crystals of water. The study of their nucleation, growth, and decomposition using molecular simulations requires an accurate and efficient algorithm that distinguishes water molecules that belong to each of these crystals and the liquid phase. Existing algorithms identify ice or clathrates, but not both. This poses a challenge for cases in which ice and hydrate coexist, such as in the synthesis of clathrates from ice and the formation of ice from clathrates during self-preservation of methane hydrates. Here we present an efficient algorithm for the identification of clathrate hydrates, hexagonal ice, cubic ice, and liquid water in molecular simulations. CHILL+ uses the number of staggered and eclipsed water-water bonds to identify water molecules in cubic ice, hexagonal ice, and clathrate hydrate. CHILL+ is an extension of CHILL (Moore et al. Phys. Chem. Chem. Phys. 2010, 12, 4124-4134), which identifies hexagonal and cubic ice but not clathrates. In addition to the identification of hydrates, CHILL+ significantly improves the detection of hexagonal ice up to its melting point. We validate the use of CHILL+ for the identification of stacking faults in ice and the nucleation and growth of clathrate hydrates. To our knowledge, this is the first algorithm that allows for the simultaneous identification of ice and clathrate hydrates, and it does so in a way that is competitive with respect to existing methods used to identify any of these crystals. PMID:25389702

  3. Hardware Implementation of Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Majumder, Swanirbhar; Shaw, Anil Kumar; Sarkar, Subir Kumar

    2016-06-01

    Singular value decomposition (SVD) is a useful decomposition technique that plays an important role in various engineering fields such as image compression, watermarking, signal processing, and numerous others. Unlike the most popular transforms, SVD does not involve a convolution operation, which makes it more suitable for hardware implementation. This paper reviews the various methods of hardware implementation for SVD computation and studies their time complexity and hardware complexity.
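
    As a software-side illustration (not a hardware design), the sketch below implements one-sided Jacobi (Hestenes) SVD, a rotation-only formulation often cited as hardware-friendly because it operates only on pairs of columns; singular values are recovered as column norms and checked against NumPy's SVD. Sizes and tolerances are illustrative.

```python
# Sketch of one-sided Jacobi (Hestenes) SVD, a rotation-only formulation often
# cited as hardware-friendly because it only ever touches pairs of columns
# (illustrative NumPy version, not a hardware architecture).
import numpy as np

def jacobi_singular_values(A, tol=1e-12, max_sweeps=30):
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)                              # accumulated right rotations
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) < tol:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                sign = 1.0 if zeta >= 0.0 else -1.0
                t = sign / (abs(zeta) + np.sqrt(1.0 + zeta ** 2))
                c = 1.0 / np.sqrt(1.0 + t ** 2)
                s = c * t
                for M in (A, V):               # rotate columns p and q
                    Mp, Mq = M[:, p].copy(), M[:, q].copy()
                    M[:, p] = c * Mp - s * Mq
                    M[:, q] = s * Mp + c * Mq
        if off < tol:
            break
    return np.linalg.norm(A, axis=0)           # singular values = column norms

A = np.random.default_rng(4).standard_normal((8, 5))
print(np.allclose(np.sort(jacobi_singular_values(A))[::-1],
                  np.linalg.svd(A, compute_uv=False)))
```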

  4. Moisture drives surface decomposition in thawing tundra

    NASA Astrophysics Data System (ADS)

    Hicks Pries, Caitlin E.; Schuur, E. A. G.; Vogel, Jason G.; Natali, Susan M.

    2013-07-01

    Permafrost thaw can affect decomposition rates by changing environmental conditions and litter quality. As permafrost thaws, soils warm and thermokarst (ground subsidence) features form, causing some areas to become wetter while other areas become drier. We used a common substrate to measure how permafrost thaw affects decomposition rates in the surface soil in a natural permafrost thaw gradient and a warming experiment in Healy, Alaska. Permafrost thaw also changes plant community composition. We decomposed 12 plant litters in a common garden to test how changing plant litter inputs would affect decomposition. We combined species' tissue-specific decomposition rates with species and tissue-level estimates of aboveground net primary productivity to calculate community-weighted decomposition constants at both the thaw gradient and the warming experiment. Moisture, specifically growing season precipitation and water table depth, was the most significant driver of decomposition. At the gradient, an increase in growing season precipitation from 200 to 300 mm increased mass loss of the common substrate by 100%. At the warming experiment, a decrease in the depth to the water table from 30 to 15 cm increased mass loss by 100%. At the gradient, community-weighted decomposition was 21% faster in extensive than in minimal thaw, but was similar when moss production was included. Overall, the effects of climate change and permafrost thaw on surface soil decomposition are driven more by precipitation and the soil environment than by changes to plant communities. Increasing soil moisture is thereby another mechanism by which permafrost thaw can become a positive feedback to climate change.

  5. Asbestos-induced decomposition of hydrogen peroxide

    SciTech Connect

    Eberhardt, M.K.; Roman-Franco, A.A.; Quiles, M.R.

    1985-08-01

    Decomposition of H2O2 by chrysotile asbestos was demonstrated employing titration with KMnO4. The participation of OH radicals in this process was delineated employing the OH radical scavenger dimethyl sulfoxide (DMSO). A mechanism involving the Fenton and Haber-Weiss reactions as the pathway for the H2O2 decomposition and OH radical production is postulated.

  6. High Temperature Decomposition of Hydrogen Peroxide

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2004-01-01

    Nitric oxide (NO) is oxidized into nitrogen dioxide (NO2) by the high temperature decomposition of a hydrogen peroxide solution to produce the oxidative free radicals, hydroxyl and hydroperoxyl. The hydrogen peroxide solution is impinged upon a heated surface in a stream of nitric oxide where it decomposes to produce the oxidative free radicals. Because the decomposition of the hydrogen peroxide solution occurs within the stream of the nitric oxide, rapid gas-phase oxidation of nitric oxide into nitrogen dioxide occurs.

  7. High temperature decomposition of hydrogen peroxide

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2005-01-01

    Nitric oxide (NO) is oxidized into nitrogen dioxide (NO2) by the high temperature decomposition of a hydrogen peroxide solution to produce the oxidative free radicals, hydroxyl and hydroperoxyl. The hydrogen peroxide solution is impinged upon a heated surface in a stream of nitric oxide where it decomposes to produce the oxidative free radicals. Because the decomposition of the hydrogen peroxide solution occurs within the stream of the nitric oxide, rapid gas-phase oxidation of nitric oxide into nitrogen dioxide occurs.

  8. A Parallel Algorithm for Contact in a Finite Element Hydrocode

    SciTech Connect

    Pierce, T G

    2003-06-01

    A parallel algorithm is developed for contact/impact of multiple three-dimensional bodies undergoing large deformation. As time progresses, the relative positions of contact between the multiple bodies change as collision and sliding occur. The parallel algorithm is capable of tracking these changes and enforcing an impenetrability constraint and momentum transfer across the surfaces in contact. Portions of the various surfaces of the bodies are assigned to the processors of a distributed-memory parallel machine in an arbitrary fashion, known as the primary decomposition. A secondary, dynamic decomposition is utilized to bring opposing sections of the contacting surfaces together on the same processors, so that opposing forces may be balanced and the resultant deformation of the bodies calculated. The secondary decomposition is accomplished and updated using only local communication with a limited subset of neighbor processors. Each processor represents both a domain of the primary decomposition and a domain of the secondary, or contact, decomposition. Thus each processor has four sets of neighbor processors: (a) those processors which represent regions adjacent to it in the primary decomposition, (b) those processors which represent regions adjacent to it in the contact decomposition, (c) those processors which send it the data from which it constructs its contact domain, and (d) those processors to which it sends its primary domain data, from which they construct their contact domains. The latter three of these neighbor sets change dynamically as the simulation progresses. By constraining all communication to these sets of neighbors, all global communication, with its attendant nonscalable performance, is avoided. A set of tests is provided to measure the degree of scalability achieved by this algorithm on up to 1024 processors. Issues related to the operating system of the test platform which lead to some degradation of the results are analyzed. This algorithm
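
    A toy sketch of the decomposition bookkeeping only (no contact mechanics, no message passing): facets receive an arbitrary primary owner, a spatial hash defines the secondary (contact) owner so that nearby facets share a processor, and each primary owner's send set is derived from that mapping. All sizes and hash constants are made up for the example.

```python
# Toy sketch of the decomposition bookkeeping only (no contact mechanics, no
# message passing): facet centroids get an arbitrary primary owner, a spatial
# hash defines the secondary (contact) owner, and each primary owner's send
# set lists the contact owners of its facets. All values are made up.
import numpy as np

rng = np.random.default_rng(11)
n_proc, cell = 8, 1.0
centroids = rng.uniform(0.0, 4.0, size=(200, 3))               # facet centroids
primary_owner = rng.integers(0, n_proc, size=len(centroids))   # arbitrary assignment

# Secondary (contact) owner: hash of the spatial cell containing each centroid,
# so that nearby facets from different bodies land on the same processor.
cells = np.floor(centroids / cell).astype(int)
contact_owner = (cells[:, 0] * 73856093 ^ cells[:, 1] * 19349663
                 ^ cells[:, 2] * 83492791) % n_proc

# Each primary owner must send facet data to the contact owners of its facets
# (neighbor sets (c) and (d) described in the abstract).
send_sets = {p: sorted(set(contact_owner[primary_owner == p].tolist()))
             for p in range(n_proc)}
print(send_sets[0])
```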

  9. Octree-based segmentation for terrestrial LiDAR point cloud data in industrial applications

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Bethel, James; Hu, Shuowen

    2016-03-01

    Automated and efficient algorithms to perform segmentation of terrestrial LiDAR data are critical for the exploitation of 3D point clouds, where the ultimate goal is CAD modeling of the segmented data. In this work, a novel segmentation technique is proposed, starting with octree decomposition to recursively divide the scene into octants or voxels, followed by a novel split and merge framework that uses graph theory and a series of connectivity analyses to intelligently merge components into larger connected components. The connectivity analysis, based on a combination of proximity, orientation, and curvature connectivity criteria, is designed for the segmentation of pipes, vessels, and walls from terrestrial LiDAR data of piping systems at industrial sites, such as oil refineries, chemical plants, and steel mills. The proposed segmentation method is exercised on two terrestrial LiDAR datasets of a steel mill and a chemical plant, demonstrating its ability to correctly reassemble and segregate features of interest.
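
    A sketch of the octree stage only, under illustrative parameters: the cloud is recursively split into octants until a voxel holds few points or reaches a minimum size; the paper's split-and-merge segmentation would then operate on these leaf voxels using the proximity/orientation/curvature criteria.

```python
# Sketch of the octree stage only: recursively split the cloud into octants
# until a voxel holds few points or reaches a minimum size. Segmentation would
# then merge adjacent leaf voxels using proximity/orientation/curvature tests.
import numpy as np

def octree_decompose(points, origin, size, min_points=20, min_size=0.25, leaves=None):
    if leaves is None:
        leaves = []
    if len(points) == 0:
        return leaves
    if len(points) <= min_points or size <= min_size:
        leaves.append((origin, size, points))         # occupied leaf voxel
        return leaves
    half = size / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                o = origin + half * np.array([dx, dy, dz], dtype=float)
                mask = np.all((points >= o) & (points < o + half), axis=1)
                octree_decompose(points[mask], o, half, min_points, min_size, leaves)
    return leaves

rng = np.random.default_rng(5)
cloud = rng.uniform(0.0, 8.0, size=(5000, 3))         # stand-in for a LiDAR scan
leaves = octree_decompose(cloud, origin=np.zeros(3), size=8.0)
print(len(leaves), "occupied leaf voxels")
```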

  10. Unimolecular thermal decomposition of dimethoxybenzenes

    SciTech Connect

    Robichaud, David J. Mukarakate, Calvin; Nimlos, Mark R.; Scheer, Adam M.; Ormond, Thomas K.; Buckingham, Grant T.; Ellison, G. Barney

    2014-06-21

    The unimolecular thermal decomposition mechanisms of o-, m-, and p-dimethoxybenzene (CH3O-C6H4-OCH3) have been studied using a high temperature, microtubular (μtubular) SiC reactor with a residence time of 100 μs. Product detection was carried out using single photon ionization (SPI, 10.487 eV) and resonance enhanced multiphoton ionization (REMPI) time-of-flight mass spectrometry and matrix infrared absorption spectroscopy from 400 K to 1600 K. The initial pyrolytic step for each isomer is methoxy bond homolysis to eliminate methyl radical. Subsequent thermolysis is unique for each isomer. In the case of o-CH3O-C6H4-OCH3, intramolecular H-transfer dominates leading to the formation of o-hydroxybenzaldehyde (o-HO-C6H4-CHO) and phenol (C6H5OH). Para-CH3O-C6H4-OCH3 immediately breaks the second methoxy bond to form p-benzoquinone, which decomposes further to cyclopentadienone (C5H4=O). Finally, the m-CH3O-C6H4-OCH3 isomer will predominantly follow a ring-reduction/CO-elimination mechanism to form C5H4=O. Electronic structure calculations and transition state theory are used to confirm mechanisms and comment on kinetics. Implications for lignin pyrolysis are discussed.

  11. Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Kaganovsky, Yan; O'Sullivan, Joseph A.; Politte, David G.; Holmgren, Andrew D.; Greenberg, Joel A.; Carin, Lawrence; Brady, David J.

    2016-05-01

    Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (coded aperture) onto the detectors. Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods since they recover independent pieces of the image at a time. Ordered subsets has also been utilized in conjunction with penalized EM to accelerate its convergence. Ordered subsets is a range decomposition method because it uses parts of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer and discuss the implications of the increased availability of parallel computational architecture on the choice of decomposition methods. We present results of applying the decomposition methods on experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image or using different parts of the measurements in parallel, decreases the rate of convergence, whereas using the parts sequentially can accelerate the rate of convergence.
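
    A toy illustration of range decomposition in the ordered-subsets sense, with a dense least-squares model standing in for the coded-aperture forward operator: each sub-iteration updates the unknowns using only one block of measurements, cycling through all blocks once per epoch. Sizes, step size, and noise level are illustrative.

```python
# Toy illustration of range decomposition (ordered subsets): each sub-step
# updates the unknowns using only one block of measurements, cycling through
# all blocks once per epoch. A dense least-squares model stands in for the
# coded-aperture forward operator.
import numpy as np

rng = np.random.default_rng(6)
H = rng.standard_normal((400, 50))                    # stand-in forward operator
x_true = rng.standard_normal(50)
y = H @ x_true + 0.01 * rng.standard_normal(400)

def ordered_subsets_gd(H, y, n_subsets=8, n_epochs=50):
    step = 1.0 / np.linalg.norm(H, 2) ** 2            # conservative step size
    x = np.zeros(H.shape[1])
    subsets = np.array_split(np.arange(H.shape[0]), n_subsets)
    for _ in range(n_epochs):
        for idx in subsets:                           # one measurement block per sub-step
            x -= step * H[idx].T @ (H[idx] @ x - y[idx])
    return x

x_os = ordered_subsets_gd(H, y)
print(float(np.linalg.norm(x_os - x_true) / np.linalg.norm(x_true)))   # small
```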

  12. Fluctuation elimination of fringe pattern by using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhang, E.

    2013-12-01

    As one of the most important directions in non-contact 3D shape measurement, optical technology has been widely applied in the fields of industrial production, automatic detection, quality control, machine vision, cultural preservation, and so on. With the advent and development of high performance devices such as DLP (Digital Light Processing) projectors and CCD cameras, digital fringe pattern projection techniques have become a rapidly developing area. However, when the four-step phase-shifting algorithm is used to calculate the wrapped phase, the intensity fluctuation of the captured fringe patterns may affect the accuracy of the final measurement results. This paper presents a novel method to eliminate the intensity fluctuation of the captured fringe patterns by using the EMD (Empirical Mode Decomposition) algorithm. Four fringe patterns with a pi/2 phase shift between them need to be captured for the four-step phase-shifting algorithm. In order to eliminate the intensity fluctuation between fringe patterns, every fringe pattern is decomposed into a number of IMFs (Intrinsic Mode Functions) by using EMD. After being processed, the four fringe patterns have the same background light intensity and contrast. Both simulated and experimental data are tested to verify the validity of the proposed method. The results show that the intensity fluctuation between fringe patterns can be effectively eliminated to give accurate phase data.
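
    A sketch of the four-step phase-shifting step that the EMD-normalized fringes feed into: with a pi/2 shift between patterns, the wrapped phase follows from a four-quadrant arctangent. The EMD normalization itself is only mimicked here by removing each pattern's mean background; the fringe parameters are synthetic.

```python
# Four-step phase shifting: with a pi/2 shift between patterns,
# I_n = A + B*cos(phi + (n-1)*pi/2), so phi = atan2(I4 - I2, I1 - I3).
# The EMD normalization is only mimicked by removing each pattern's mean.
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)

x = np.linspace(0.0, 8.0 * np.pi, 512)
phi_true = 0.5 * x                                    # synthetic ramp phase
shots = []
for n, (A, B) in enumerate([(120.0, 50.0), (128.0, 48.0), (118.0, 52.0), (125.0, 49.0)]):
    shots.append(A + B * np.cos(phi_true + n * np.pi / 2.0))   # unequal A, B per shot

shots = [I - I.mean() for I in shots]                 # crude background removal
phi = wrapped_phase(*shots)
residual = np.angle(np.exp(1j * (phi - phi_true)))    # wrapped phase error
print(float(np.abs(residual).mean()))                 # small
```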

  13. Domain decomposition approach to flexible multibody dynamics simulation

    NASA Astrophysics Data System (ADS)

    Kwak, JunYoung; Chun, TaeYoung; Shin, SangJoon; Bauchau, Olivier A.

    2014-01-01

    Finite element based formulations for flexible multibody systems are becoming increasingly popular and, as the complexity of the configurations to be treated increases, so does the computational cost. It seems natural to investigate the applicability of parallel processing to this type of problem; domain decomposition techniques have been used extensively for this purpose. In this approach, the computational domain is divided into non-overlapping sub-domains, and the continuity of the displacement field across sub-domain boundaries is enforced via the Lagrange multiplier technique. In the finite element literature, this approach is presented as a mathematical algorithm that enables parallel processing. In this paper, the divided system is viewed as a flexible multibody system, and the sub-domains are connected by kinematic constraints. Consequently, all the techniques applicable to the enforcement of constraints in multibody systems become applicable to the present problem. In particular, it is shown that a combination of the localized Lagrange multiplier technique with the augmented Lagrange formulation leads to interesting solution strategies. The proposed algorithm is compared with the well-known FETI approach with regard to convergence and efficiency characteristics. The present algorithm is relatively simple and leads to improved convergence and efficiency characteristics. Finally, implementation on a parallel computer was conducted for the proposed approach.
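
    A toy sketch of the constraint-enforcement idea, not the paper's finite element formulation: two one-degree-of-freedom "sub-domains" must agree at their interface, and an augmented Lagrangian iteration alternates sub-domain solves with a multiplier update on the interface gap. Stiffnesses and the penalty parameter are illustrative.

```python
# Toy interface constraint via an augmented Lagrangian: two one-DOF
# "sub-domains" (springs with stiffness k1, k2 pulled toward a and b) must
# agree at the interface, u = v. Alternating sub-domain solves are followed by
# a multiplier update on the interface gap.
k1, a = 4.0, 1.0            # sub-domain 1: energy 0.5*k1*(u - a)^2
k2, b = 1.0, -2.0           # sub-domain 2: energy 0.5*k2*(v - b)^2
rho, lam = 10.0, 0.0        # penalty parameter and interface multiplier

u = v = 0.0
for it in range(100):
    u = (k1 * a - lam + rho * v) / (k1 + rho)    # sub-domain 1 solve
    v = (k2 * b + lam + rho * u) / (k2 + rho)    # sub-domain 2 solve
    lam += rho * (u - v)                         # multiplier update on the gap
    if abs(u - v) < 1e-12:
        break

exact = (k1 * a + k2 * b) / (k1 + k2)            # minimizer of the coupled problem
print(u, v, exact)                               # u ~= v ~= exact
```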

  14. MRF energy minimization and beyond via dual decomposition.

    PubMed

    Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios

    2011-03-01

    This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis of the bounds related to the different algorithms derived from our framework, together with experimental results and comparisons using synthetic and real data for a variety of computer vision tasks, demonstrates the potential of our approach. PMID:20479493
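
    A toy dual decomposition in the spirit of the projected subgradient scheme: a discrete objective f1(x) + f2(x) over five labels is split into two slave problems holding copies of x, tied by one multiplier per label, with a primal candidate recovered from the slave solutions at each iteration. A real MRF would be decomposed into trees the same way; the cost vectors here are illustrative.

```python
# Toy dual decomposition with a projected subgradient update: the objective
# f1(x) + f2(x) over five labels is split into two slaves holding copies
# x1, x2, tied by one multiplier per label; primal candidates are recovered
# from the slave solutions along the way.
import numpy as np

f1 = np.array([3.0, 1.5, 4.0, 2.0, 5.0])
f2 = np.array([0.5, 2.5, 0.0, 3.0, 1.0])
lam = np.zeros(f1.size)

best_val, best_x = np.inf, None
for it in range(100):
    x1 = int(np.argmin(f1 + lam))                 # slave 1: min f1(x) + lam[x]
    x2 = int(np.argmin(f2 - lam))                 # slave 2: min f2(x) - lam[x]
    for x in (x1, x2):                            # recover a primal candidate
        if f1[x] + f2[x] < best_val:
            best_val, best_x = float(f1[x] + f2[x]), x
    if x1 == x2:
        break
    g = np.zeros(lam.size)                        # subgradient of the dual
    g[x1] += 1.0
    g[x2] -= 1.0
    lam += (0.5 / (1 + it)) * g                   # dual ascent step

print(best_x, best_val, int(np.argmin(f1 + f2)), float((f1 + f2).min()))
```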

  15. Evaluation of document binarization using eigen value decomposition

    NASA Astrophysics Data System (ADS)

    Kumar, Deepak; Anil Prasad, M. N.; Ramakrishnan, A. G.

    2013-01-01

    A necessary step for the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time-consuming. To automate the selection of the best segmented document, we either need to use the ground-truth of the document or propose an evaluation metric. If ground-truth is available, then precision and recall can be used to choose the best binarized document. What about the case when ground-truth is not available? Can we come up with a metric that evaluates these binarized documents? Hence, we propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the binarized document that is closest to the ground-truth of the document.

  16. Agile multi-scale decompositions for automatic image registration

    NASA Astrophysics Data System (ADS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-05-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the mixed MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
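
    A sketch of one ingredient only (not the authors' integrated C code): estimating a translation by phase correlation on level-1 wavelet approximation bands, computed here with the PyWavelets package (assumed available); the decimation by two means the recovered offset is roughly half the true pixel shift. The image and shift are synthetic.

```python
# Translation estimation by phase correlation on level-1 wavelet approximation
# bands (PyWavelets assumed available). Decimation by 2 means the recovered
# offset is roughly half the true pixel shift.
import numpy as np
import pywt

def phase_correlation_shift(a, b):
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                      # unwrap negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(7)
ref = rng.standard_normal((128, 128))
moved = np.roll(ref, shift=(8, -6), axis=(0, 1))  # known translation (8, -6)

cA_ref, _ = pywt.dwt2(ref, "db2")                 # level-1 approximation bands
cA_moved, _ = pywt.dwt2(moved, "db2")
dy, dx = phase_correlation_shift(cA_moved, cA_ref)
print(2 * dy, 2 * dx)                             # approximately (8, -6)
```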

  17. A new and fast approach towards sEMG decomposition.

    PubMed

    Gligorijević, Ivan; van Dijk, Johannes P; Mijović, Bogdan; Van Huffel, Sabine; Blok, Joleen H; De Vos, Maarten

    2013-05-01

    The decomposition of high-density surface EMG (HD-sEMG) interference patterns into the contributions of motor units is still a challenging task. We introduce a new, fast solution to this problem. The method uses a data-driven approach for selecting a set of electrodes to enable discrimination of the motor unit action potentials (MUAPs) that are present. Then, using shapes detected on these channels, the hierarchical clustering algorithm as reported by Quian Quiroga et al. (Neural Comput 16:1661-1687, 2004) is extended to multichannel data in order to obtain the MUAP signatures. After this first step, more motor unit firings are obtained from the extracted signatures by a novel demixing technique. In this demixing stage, we propose a time-efficient solution for the general convolutive system that models the motor unit firings on the HD-sEMG grid. We constrain this system by using the extracted signatures as prior knowledge and reconstruct the firing patterns in a computationally efficient way. The algorithm's performance is successfully verified on simulated data containing up to 20 different MUAP signatures. Moreover, we tested the method on real low-contraction recordings from the lateral vastus leg muscle by comparing the algorithm's output to the results obtained by manual analysis of the data by two independent trained operators. The proposed method was shown to perform about as well as the operators.
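
    A sketch of the template-building stage on a single synthetic channel, not the HD-sEMG grid or the convolutive demixing step: candidate firings are detected, waveform snippets extracted, and grouped into putative motor units by hierarchical clustering. Waveforms, firing rates, and thresholds are illustrative.

```python
# Template-building sketch on a single synthetic channel: detect candidate
# firings, cut out snippets, and group them into putative motor units with
# hierarchical clustering. Waveforms, rates, and thresholds are illustrative.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.signal import find_peaks

rng = np.random.default_rng(8)
fs, dur = 2000, 5.0                               # Hz, seconds
n = int(fs * dur)
emg = 0.02 * rng.standard_normal(n)               # baseline noise
shape1 = 6.0 * np.diff(np.exp(-np.linspace(-3, 3, 41) ** 2))   # biphasic MUAP
shape2 = -np.sin(np.linspace(0.0, 2.0 * np.pi, 40))            # second MUAP shape
for shape, rate in ((shape1, 9), (shape2, 12)):
    for f in rng.choice(n - 60, size=int(rate * dur), replace=False):
        emg[f:f + shape.size] += shape            # superimpose firings

peaks, _ = find_peaks(np.abs(emg), height=0.3, distance=30)
snippets = np.array([emg[p - 15:p + 25] for p in peaks if 15 <= p <= n - 25])
Z = linkage(snippets, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
templates = [snippets[labels == k].mean(axis=0) for k in np.unique(labels)]
print(len(peaks), "detections,", len(templates), "templates")
```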

  18. Proximal Point Methods Revisited

    NASA Astrophysics Data System (ADS)

    Boikanyo, Oganeditse A.; Moroşanu, Gheorghe

    2011-09-01

    The proximal point methods have been widely used in the last decades to approximate the solutions of nonlinear equations associated with monotone operators. Inspired by the iterative procedure defined by B. Martinet (1970), R.T. Rockafellar introduced in 1976 the so-called proximal point algorithm (PPA) for a general maximal monotone operator. The sequence generated by this iterative method is weakly convergent under appropriate conditions, but not necessarily strongly convergent, as proved by O. Güler (1991). This fact explains the introduction of different modified versions of the PPA which generate strongly convergent sequences under appropriate conditions, including the contraction-PPA defined by H.K. Xu in 2002. Here we discuss Xu's modified PPA as well as some of its generalizations. Special attention is paid to the computational errors, in particular the original Rockafellar summability assumption is replaced by the condition that the error sequence converges to zero strongly.
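
    A minimal sketch of the exact proximal point iteration x_{k+1} = (I + c_k A)^{-1} x_k for the subdifferential of f(x) = |x|, whose resolvent is soft-thresholding; the error terms and the contraction-type modifications discussed in the record are omitted.

```python
# Exact proximal point iteration x_{k+1} = (I + c A)^{-1} x_k for A = d|.|,
# whose resolvent is soft-thresholding; the iterates reach the zero of A.
import numpy as np

def prox_abs(x, c):
    """Resolvent (I + c*d|.|)^{-1}: soft-thresholding with threshold c."""
    return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

x = 5.0
for k in range(30):
    x = prox_abs(x, c=0.5)        # constant parameter, no error term
print(x)                          # 0.0
```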

  19. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
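
    A minimal MM instance on a different, classical problem (least absolute deviations regression), not the geometric-arithmetic mean surrogate of the paper: majorizing |r| by r^2/(2|r_k|) + |r_k|/2 at the current residuals turns each MM step into a weighted least-squares solve.

```python
# MM for least absolute deviations: majorizing |r| by r^2/(2|r_k|) + |r_k|/2
# at the current residuals turns each MM step into a weighted least-squares
# solve (iteratively reweighted least squares).
import numpy as np

def lad_mm(X, y, n_iter=100, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from ordinary LS
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)          # weights from the majorizer
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)
y[:10] += 15.0                                        # gross outliers
print(np.linalg.lstsq(X, y, rcond=None)[0], lad_mm(X, y))   # LAD is far less biased
```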

  20. Exploring Multimodal Data Fusion Through Joint Decompositions with Flexible Couplings

    NASA Astrophysics Data System (ADS)

    Cabral Farias, Rodrigo; Cohen, Jeremy Emile; Comon, Pierre

    2016-09-01

    A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple data sets. Under this framework, a natural formulation of the data fusion problem is to cast it in terms of a joint maximum a posteriori (MAP) estimator. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator. We also show how this framework can be adapted to tackle the problem of joint decompositions of large datasets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models ranging from simple additive Gaussian models, to Gamma-type models with positive variables, and to the coupling of data sets that are inherently of different sizes due to the different resolutions of the measurement devices.
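
    A toy sketch of the flexible-coupling idea in the matrix (rather than tensor) case: two data sets are factorized jointly, with a quadratic Gaussian-style penalty tying their shared factors together instead of a hard equality constraint, and alternating least squares updates all factors. Sizes, rank, and the coupling weight are assumed for illustration; this is not the paper's MAP estimator.

```python
# Flexibly coupled joint factorization (matrix case): X1 ~= A1 B1^T and
# X2 ~= A2 B2^T share information through a quadratic penalty
# (mu/2)*||A1 - A2||_F^2 instead of a hard equality; alternating least squares
# updates all four factors. Sizes, rank, and mu are assumed for illustration.
import numpy as np

rng = np.random.default_rng(10)
r, mu = 3, 5.0
A_shared = rng.standard_normal((30, r))
X1 = A_shared @ rng.standard_normal((r, 40)) + 0.05 * rng.standard_normal((30, 40))
X2 = (A_shared + 0.1 * rng.standard_normal((30, r))) @ rng.standard_normal((r, 25))

A1, A2 = rng.standard_normal((30, r)), rng.standard_normal((30, r))
B1, B2 = rng.standard_normal((40, r)), rng.standard_normal((25, r))
I_r = np.eye(r)
for _ in range(200):
    B1 = np.linalg.solve(A1.T @ A1, A1.T @ X1).T               # unconstrained LS
    B2 = np.linalg.solve(A2.T @ A2, A2.T @ X2).T
    A1 = np.linalg.solve(B1.T @ B1 + mu * I_r, B1.T @ X1.T + mu * A2.T).T   # coupled LS
    A2 = np.linalg.solve(B2.T @ B2 + mu * I_r, B2.T @ X2.T + mu * A1.T).T

print(np.linalg.norm(X1 - A1 @ B1.T) / np.linalg.norm(X1),
      np.linalg.norm(X2 - A2 @ B2.T) / np.linalg.norm(X2))
```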