Science.gov

Sample records for point decomposition algorithm

  1. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
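
    For the linear elliptic case, the methods above reduce to classical Schwarz iterations. The following sketch (not taken from the paper) illustrates a damped additive Schwarz iteration for a 1D Poisson model problem with two overlapping subdomains; the grid size, overlap width, damping factor, and stopping tolerance are illustrative assumptions.

```python
# Damped additive Schwarz for a 1D Poisson problem (illustrative sketch).
import numpy as np

n = 100                                     # interior grid points
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)                              # right-hand side

overlap = 10                                # two overlapping subdomains
sub1 = np.arange(0, n // 2 + overlap)
sub2 = np.arange(n // 2 - overlap, n)

u = np.zeros(n)
for it in range(1000):
    r = f - A @ u                           # global residual
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(f):
        break
    correction = np.zeros(n)
    for sub in (sub1, sub2):
        Ai = A[np.ix_(sub, sub)]            # local stiffness matrix
        correction[sub] += np.linalg.solve(Ai, r[sub])   # R_i^T A_i^{-1} R_i r
    u += 0.5 * correction                   # damped additive update

print("iterations:", it, "relative residual:",
      np.linalg.norm(f - A @ u) / np.linalg.norm(f))
```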

  2. Finding corner point correspondence from wavelet decomposition of image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; LeMoigne, Jacqueline

    1997-01-01

    A time-efficient algorithm for image registration between two images that differ by a translation is discussed. The algorithm is based on a coarse-to-fine strategy using wavelet decomposition of both images. The wavelet decomposition serves two different purposes: (1) its high-frequency components are used to detect feature points (corner points here) and (2) it provides a coarse-to-fine structure that makes the algorithm time efficient. The algorithm detects corner points in one of the images, called the reference image, and computes corresponding points in the other image, called the test image, through local correlations using 7x7 windows centered around the corner points. The corresponding points are detected at the lowest decomposition level in a search area of about 11x11 (depending on the translation) and potential points of correspondence are projected onto higher levels. In the subsequent levels the local correlations are computed in a search area of no more than 3x3 to refine the correspondence.
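
    A minimal sketch of the local-correlation matching step described above, assuming grayscale images as numpy arrays and corners located away from the image border; the window and search sizes follow the 7x7 and 11x11 values quoted in the abstract, but the code is not the NTRS implementation.

```python
# Local-correlation corner matching at a single decomposition level (sketch).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_corner(ref, test, corner, search=5, half=3):
    """Best-matching (row, col) in `test` for a corner (row, col) in `ref`."""
    r, c = corner
    patch = ref[r - half:r + half + 1, c - half:c + half + 1]   # 7x7 window
    best_score, best_rc = -2.0, (r, c)
    for dr in range(-search, search + 1):                       # 11x11 search area
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = test[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if cand.shape != patch.shape:                       # skip border overruns
                continue
            score = ncc(patch, cand)
            if score > best_score:
                best_score, best_rc = score, (rr, cc)
    return best_rc, best_score
```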

  3. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^((ω/2)k)) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^((ω/2)k)). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  4. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P with 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and prefix sums.

  5. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce large performance gains while extracting only slightly less energy than the classical MPD algorithm.
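
    The classical MPD loop that MPD++ accelerates can be sketched as follows, assuming a dictionary supplied as an array of unit-norm atoms; the stopping threshold and iteration cap are illustrative choices, not values from the paper.

```python
# Classical matching pursuit over an arbitrary unit-norm dictionary (sketch).
import numpy as np

def matching_pursuit(signal, dictionary, max_iter=50, tol=1e-3):
    """dictionary: (n_atoms, n_samples) array whose rows have unit norm."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(max_iter):
        corr = dictionary @ residual              # cross-correlate atoms with the residual
        k = int(np.argmax(np.abs(corr)))          # best-fit atom
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * dictionary[k]       # subtract the selected atom
        if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
            break                                 # stopping criterion met
    reconstruction = signal - residual            # reveals the waveform structure
    return atoms, coeffs, reconstruction
```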

  6. A convergent hybrid decomposition algorithm model for SVM training.

    PubMed

    Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco

    2009-06-01

    Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be very large and the Hessian matrix cannot be stored. To address this issue, a common strategy consists in using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potential of the approach. PMID:19435679

  7. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, partly motivated by its potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods have only recently begun to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated. The examples are two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface system, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.

  8. Nonparametric decomposition of quasi-periodic time series for change-point detection

    NASA Astrophysics Data System (ADS)

    Artemov, Alexey; Burnaev, Evgeny; Lokot, Andrey

    2015-12-01

    The paper is concerned with the sequential online change-point detection problem for a dynamical system driven by a quasi-periodic stochastic process. We propose a multicomponent time series model and an effective online decomposition algorithm to approximate the components of the model. Assuming the stationarity of the obtained components, we approach the change-point detection problem on a per-component basis and propose two online change-point detection schemes corresponding to two real-world scenarios. Experimental results for the decomposition and detection algorithms on synthesized and real-world datasets are provided to demonstrate the efficiency of our change-point detection framework.
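
    The paper's two detection schemes are not reproduced here; as a stand-in, the sketch below shows a generic per-component CUSUM detector of the kind that could be run on each stationary component after decomposition. The drift and threshold parameters are illustrative assumptions.

```python
# Generic two-sided CUSUM change-point detector for one component (sketch).
def cusum(component, drift=0.0, threshold=5.0):
    """Return the index of the first alarm (or None) for a roughly zero-mean component."""
    g_pos, g_neg = 0.0, 0.0
    for t, x in enumerate(component):
        g_pos = max(0.0, g_pos + x - drift)   # accumulate positive deviations
        g_neg = max(0.0, g_neg - x - drift)   # accumulate negative deviations
        if g_pos > threshold or g_neg > threshold:
            return t                          # change point declared at time t
    return None
```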

  9. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  10. The Empirical Mode Decomposition algorithm via Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Artemyev, Dmitry N.; Khramov, Alexander G.

    2014-09-01

    In this paper we consider the problem of implementing a fast algorithm for the Empirical Mode Decomposition (EMD). EMD is one of the newest methods for the decomposition of non-linear and non-stationary signals. The basis of EMD is formed "on the fly", i.e. it depends on the distribution of the signal and is not given a priori, in contrast to the Fourier Transform (FT) or the Wavelet Transform (WT). EMD requires interpolation of the local extrema of the signal to find the upper and lower envelopes. Data interpolation on an irregular lattice is a very low-performance procedure. The classical description of EMD by Huang suggests doing this through splines, i.e. by solving a system of equations. The existence of a fast algorithm is the main advantage of the FT. Describing an algorithm in terms of the Fast Fourier Transform (FFT) is a standard way to reduce the operation count. We offer a fast implementation of EMD (FEMD) through the FFT and some other cost-efficient algorithms. The basic two-stage interpolation algorithm for EMD is composed of an upscale procedure through the FFT and a downscale procedure that selects the signal's points. First we consider the local maxima (or minima) set without reference to the OX axis, i.e. on a regular lattice. The upscale through the FFT changes the signal's length to the least common multiple (LCM) of all distances between neighboring extrema on the OX axis. If the LCM value is too large then it is necessary to limit the local set of extrema. In this case it is an analog of spline interpolation. A demonstration of FEMD in a noise-reduction task for optical coherence tomography (OCT) is shown.
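
    For reference, a compact version of the classical spline-based sifting that FEMD is designed to speed up might look as follows; the number of sifting passes, the extrema-count guard, and the maximum number of IMFs are illustrative simplifications, not the paper's choices.

```python
# Classical spline-based EMD sifting (illustrative sketch, not FEMD).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                                  # too few extrema to build envelopes
    upper = CubicSpline(maxima, x[maxima])(t)        # upper envelope
    lower = CubicSpline(minima, x[minima])(t)        # lower envelope
    return x - 0.5 * (upper + lower)                 # remove the local mean

def emd(signal, max_imfs=6, sift_iters=10):
    x, imfs = signal.astype(float).copy(), []
    for _ in range(max_imfs):
        h = x
        for _ in range(sift_iters):                  # fixed number of sifting passes
            h_new = sift_once(h)
            if h_new is None:
                return imfs, x                       # remaining x is the residue
            h = h_new
        imfs.append(h)                               # one intrinsic mode function
        x = x - h
    return imfs, x
```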

  11. An accurate product SVD (singular value decomposition) algorithm

    SciTech Connect

    Bojanczyk, A.W.; Luk, F.T.; Ewerbring, M.; Van Dooren, P.

    1990-01-01

    In this paper, we propose a new algorithm for computing a singular value decomposition of a product of three matrices. We show that our algorithm is numerically desirable in that all relevant residual elements will be numerically small.

  12. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming, hydroelectric scheduling, large-scale systems.

  13. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; Wu, Kun-Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
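
    For contrast with the incremental approach, a minimal non-incremental k-core (coreness) computation by repeated peeling can be sketched as below; rerunning such a peeling pass after every edge update is exactly the cost the incremental algorithms avoid.

```python
# Non-incremental coreness computation by peeling minimum-degree vertices (sketch).
from collections import defaultdict

def core_numbers(edges):
    """edges: iterable of (u, v) pairs of an undirected graph; returns {vertex: core number}."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=degree.get)   # peel a vertex of minimum current degree
        k = max(k, degree[v])                # the core level never decreases
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                degree[w] -= 1
    return core

print(core_numbers([(1, 2), (2, 3), (3, 1), (3, 4)]))  # triangle plus a pendant vertex
```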

  14. Efficient variants of the vertex space domain decomposition algorithm

    SciTech Connect

    Chan, T.F.; Shao, J.P.; Mathew, T.P.

    1994-11-01

    Several variants of the vertex space algorithm of Smith for two-dimensional elliptic problems are described. The vertex space algorithm is a domain decomposition method based on nonoverlapping subregions, in which the reduced Schur complement system on the interface is solved using a generalized block Jacobi-type preconditioner, with the blocks corresponding to the vertex space, edges, and a coarse grid. Two kinds of approximations are considered for the edge and vertex space subblocks, one based on Fourier approximation, and another based on an algebraic probing technique in which sparse approximations to these subblocks are computed. The motivation is to improve the efficiency of the algorithm without sacrificing the optimal convergence rate. Numerical and theoretical results on the performance of these algorithms, including variants of an algorithm of Bramble, Pasciak, and Schatz are presented.

  15. A point matching algorithm based on reference point pair

    NASA Astrophysics Data System (ADS)

    Zou, Huanxin; Zhu, Youqing; Zhou, Shilin; Lei, Lin

    2016-03-01

    Outliers and occlusions are important sources of degradation in real applications of point matching. In this paper, a novel point matching algorithm based on reference point pairs is proposed. In each iteration, it first eliminates the dubious matches to obtain the relatively accurate matching points (reference point pairs), and then calculates the shape contexts of the removed points with reference to them. After re-matching the removed points, the reference point pairs are combined to achieve better correspondences. Experiments on synthetic data validate the advantages of our method in comparison with some classical methods.

  16. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by principal component analysis. Then the feature descriptor of each key point is constructed, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source and target point clouds. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better robustness to noise.
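
    The final SVD step described above is the standard Kabsch/Umeyama construction; a minimal sketch for already-established correspondences follows (the key-point selection, descriptors, and RANSAC filtering are not reproduced).

```python
# Rigid transform (R, t) from corresponding point sets via SVD (Kabsch, sketch).
import numpy as np

def rigid_transform(source, target):
    """source, target: (N, 3) arrays of corresponding points; returns R, t with target ≈ R @ source + t."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                       # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # enforce a proper rotation (det = +1)
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```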

  17. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  18. Implementation and performance of a domain decomposition algorithm in Sisal

    SciTech Connect

    DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.

    1993-09-23

    Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer and a cache-coherent scalar multiprocessor.

  19. Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multisource data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of Girvan and Newman, for example, makes it impractical for graphs with more than approximately 10^4 nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10^9 nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in the Girvan and Newman algorithm, far more efficiently.
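
    For scale comparison, the baseline edge-betweenness (Girvan-Newman) decomposition is available in networkx and is practical only for small graphs; the sketch below is that baseline, not LLNL's biconnected-component accelerated implementation.

```python
# Baseline Girvan-Newman community decomposition via networkx (small graphs only).
import itertools
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()                    # small example graph
communities_iter = girvan_newman(G)           # repeatedly removes the highest-betweenness edge

# Print the first few levels of the resulting hierarchical decomposition.
for communities in itertools.islice(communities_iter, 3):
    print([sorted(c) for c in communities])
```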

  20. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Weerapurage, Dinesh P; Sullivan, Blair D; Groer, Christopher S

    2013-01-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of required dynamic programming tables and excessive running times of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree-decomposition based approach to solve maximum weighted independent set. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  1. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  2. Singular value decomposition utilizing parallel algorithms on graphical processors

    SciTech Connect

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

    One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, Cx = (1/K) Σ_{k=1}^{K} X(k) X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^H A, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step approach which bidiagonalizes the matrix using Householder transformations.
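
    The point about avoiding the explicit sample spectral matrix can be illustrated with a small numpy sketch (CPU only, not the GPU implementations studied in the paper); the array size and snapshot count are arbitrary.

```python
# Eigenvalues of the sample spectral matrix obtained directly from the snapshot SVD (sketch).
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 64                                  # array elements, snapshots
X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(X / np.sqrt(K), full_matrices=False)
eigvals_svd = s**2                             # eigenvalues of Cx = (1/K) X X^H

Cx = (X @ X.conj().T) / K                      # explicit sample spectral matrix, for checking only
eigvals_direct = np.sort(np.linalg.eigvalsh(Cx))[::-1]
print(np.allclose(eigvals_svd, eigvals_direct))   # True: same spectrum, Cx never needed
```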

  3. On the equivalence of a class of inverse decomposition algorithms for solving systems of linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A class of direct inverse decomposition algorithms for solving systems of linear equations is presented. Their behavior in the presence of round-off errors is analyzed. It is shown that under some mild restrictions on their implementation, the class of direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.

  4. A one-sided Jacobi algorithm for computing the singular value decomposition on a vector computer

    SciTech Connect

    De Rijk, P.P.M.

    1989-03-01

    An old algorithm for computing the singular value decomposition, first proposed by Hestenes, has gained renewed interest because of its parallelism and vectorizability. Some computational modifications are given and a comparison with the well-known Golub-Reinsch algorithm is made. Comparative experiments on the CYBER 205 are reported.
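
    A minimal one-sided (Hestenes) Jacobi SVD for a real matrix might be sketched as follows; each sweep orthogonalizes column pairs with plane rotations, which is the property that makes the method attractive for vector and parallel machines. Tolerances and sweep limits are illustrative assumptions.

```python
# One-sided (Hestenes) Jacobi SVD for a real matrix (illustrative sketch).
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    A = A.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                ai, aj = A[:, i], A[:, j]
                alpha, beta, gamma = ai @ ai, aj @ aj, ai @ aj
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                      # columns already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta == 0.0:
                    t = 1.0                       # 45-degree rotation when norms are equal
                else:
                    t = np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                J = np.array([[c, s], [-s, c]])   # plane rotation acting on columns i, j
                A[:, [i, j]] = A[:, [i, j]] @ J
                V[:, [i, j]] = V[:, [i, j]] @ J
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)             # singular values = column norms
    U = A / np.where(sigma > 0, sigma, 1.0)       # rotated, normalized columns give U
    return U, sigma, V                            # original A = U @ diag(sigma) @ V.T

M = np.random.default_rng(5).standard_normal((6, 4))
U, s, V = one_sided_jacobi_svd(M)
print(np.allclose(U * s @ V.T, M))                # reconstruction check
```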

  5. An optimal and efficient new gridding algorithm using singular value decomposition.

    PubMed

    Rosenfeld, D

    1998-07-01

    The problem of handling data that falls on a nonequally spaced grid occurs in numerous fields of science, ranging from radio-astronomy to medical imaging. In MRI, this condition arises when sampling under time-varying gradients in sequences such as echo-planar imaging (EPI), spiral scans, or radial scans. The technique currently being used to interpolate the nonuniform samples onto a Cartesian grid is called the gridding algorithm. In this paper, a new method for uniform resampling is presented that is both optimal and efficient. It is first shown that the resampling problem can be formulated as a problem of solving a set of linear equations Ax = b, where x and b are vectors of the uniform and nonuniform samples, respectively, and A is a matrix of the sinc interpolation coefficients. In a procedure called Uniform Re-Sampling (URS), this set of equations is given an optimal solution using the pseudoinverse matrix which is computed using singular value decomposition (SVD). In large problems, this solution is neither practical nor computationally efficient. Another method is presented, called the Block Uniform Re-Sampling (BURS) algorithm, which decomposes the problem into solving a small set of linear equations for each uniform grid point. These equations are a subset of the original equations Ax = b and are once again solved using SVD. The final result is both optimal and computationally efficient. The results of the new method are compared with those obtained using the conventional gridding algorithm via simulations. PMID:9660548
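
    The URS construction can be illustrated with a small numpy sketch: express each non-uniform sample as a sinc combination of the unknown uniform samples and invert with an SVD-based pseudoinverse. The grid size and sample locations below are illustrative, and the BURS blockwise variant is not reproduced.

```python
# Uniform Re-Sampling (URS) idea: solve A x = b with an SVD-based pseudoinverse (sketch).
import numpy as np

rng = np.random.default_rng(1)
n = 64                                        # uniform grid points (indices 0..n-1)
grid = np.arange(n)

# Band-limited test signal defined on the uniform grid.
x_true = np.sin(2 * np.pi * 3 * grid / n) + 0.5 * np.cos(2 * np.pi * 5 * grid / n)

# Non-uniform sample locations (in units of the uniform grid spacing).
k = np.sort(rng.uniform(0, n - 1, size=96))
A = np.sinc(k[:, None] - grid[None, :])       # sinc interpolation coefficients
b = A @ x_true                                # the non-uniform measurements (consistent by construction)

x_urs = np.linalg.pinv(A) @ b                 # least-squares optimal uniform samples
print(np.max(np.abs(x_urs - x_true)))         # reconstruction error
```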

  6. Chaotic Visual Cryptosystem Using Empirical Mode Decomposition Algorithm for Clinical EEG Signals.

    PubMed

    Lin, Chin-Feng

    2016-03-01

    This paper proposes a chaotic visual cryptosystem using an empirical mode decomposition (EMD) algorithm for clinical electroencephalography (EEG) signals. The basic design concept is to integrate two-dimensional (2D) chaos-based encryption scramblers, the EMD algorithm, and a 2D block interleaver method to achieve a robust and unpredictable visual encryption mechanism. Energy-intrinsic mode function (IMF) distribution features of the clinical EEG signal are developed for the chaotic encryption parameters. The maximum and second maximum energy ratios of the IMFs of a clinical EEG signal to its total energy are used as the starting points of the chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. The minimum and second minimum energy ratios of the IMFs of a clinical EEG signal to its total energy are used as the security level parameters of the chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. Three EEG databases and seventeen clinical EEG signals were tested, and the average r and mse values are 0.0201 and 4.2626 × 10^-29, respectively, for the original and chaotically encrypted (through EMD) clinical EEG signals. The chaotically encrypted signal cannot be recovered if there is an error in the input parameters, for example, an initial point error of 0.000001%. The encryption effects of the proposed chaotic EMD visual encryption mechanism are excellent. PMID:26645316

  7. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) along with determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.

  8. Improved MCA-TV algorithm for interference hyperspectral image decomposition

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Zhao, Junsuo; Cailing, Wang

    2015-12-01

    The technology of interference hyperspectral imaging, which can acquire the spectral and spatial information of the observed targets, is a very powerful technology in the field of remote sensing. Due to the special imaging principle, there are many position-fixed interference fringes in each frame of the interference hyperspectral image (IHI) data. This characteristic affects the results of compressed sensing theory and traditional compression algorithms when applied to IHI data. According to this characteristic of the IHI data, morphological component analysis (MCA) is adopted to separate the interference fringe layers and the background layers of the LSMIS (Large Spatially Modulated Interference Spectral Image) data, and an improved MCA and Total Variation (TV) combined algorithm is proposed in this paper. An update mode for the threshold in traditional MCA is proposed, and the traditional TV algorithm is also improved according to the unidirectional characteristic of the interference fringes in IHI data. The experimental results show that the proposed improved MCA-TV (IMT) algorithm obtains better results than traditional MCA, and also meets the convergence conditions much faster than traditional MCA.

  9. Automated decomposition algorithm for Raman spectra based on a Voigt line profile model.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2016-05-20

    Raman spectra measured by spectrometers usually suffer from band overlap and random noise. In this paper, an automated decomposition algorithm based on a Voigt line profile model for Raman spectra is proposed to solve this problem. To decompose a measured Raman spectrum, a Voigt line profile model is introduced to parameterize the measured spectrum, and a Gaussian function is used as the instrumental broadening function. Hence, the issue of spectral decomposition is transformed into a multiparameter optimization problem of the Voigt line profile model parameters. The algorithm can eliminate instrumental broadening, obtain a recovered Raman spectrum, resolve overlapping bands, and suppress random noise simultaneously. Moreover, the recovered spectrum can be decomposed to a group of Lorentzian functions. Experimental results on simulated Raman spectra show that the performance of this algorithm is much better than a commonly used blind deconvolution method. The algorithm has also been tested on the industrial Raman spectra of ortho-xylene and proved to be effective. PMID:27411136
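
    A minimal sketch of fitting a single band with a Voigt line profile is shown below, assuming scipy's voigt_profile; the paper's full algorithm additionally estimates the Gaussian instrumental broadening and decomposes many overlapping bands automatically, which is not reproduced here.

```python
# Fitting one synthetic Raman band with a Voigt line profile (sketch).
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def voigt_band(x, amplitude, center, sigma, gamma):
    # sigma: Gaussian width, gamma: Lorentzian half-width of the Voigt profile
    return amplitude * voigt_profile(x - center, sigma, gamma)

# Synthetic noisy band (all parameter values are illustrative).
x = np.linspace(980, 1020, 400)
y = voigt_band(x, 50.0, 1001.0, 1.2, 0.8) + np.random.default_rng(2).normal(0, 0.05, x.size)

p0 = [40.0, 1000.0, 1.0, 1.0]                        # initial guess for the fit
popt, pcov = curve_fit(voigt_band, x, y, p0=p0)
print("fitted amplitude, center, sigma, gamma:", popt)
```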

  10. Multiobjective biogeography based optimization algorithm with decomposition for community detection in dynamic networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin; Sun, Geng

    2015-10-01

    Identifying community structures in a static network misses the opportunity to capture evolutionary patterns, so community detection in dynamic networks has attracted many researchers. In this paper, a multiobjective biogeography based optimization algorithm with decomposition (MBBOD) is proposed to solve the community detection problem in dynamic networks. In the proposed algorithm, the decomposition mechanism is adopted to simultaneously optimize two evaluation objectives, modularity and normalized mutual information, which measure the quality of the community partitions and the temporal cost, respectively. A novel sorting strategy for multiobjective biogeography based optimization is presented for comparing the quality of habitats to obtain species counts. In addition, problem-specific migration and mutation models are introduced to improve the effectiveness of the new algorithm. Experimental results on both synthetic and real networks demonstrate that our algorithm is effective and promising, and that it can detect communities in dynamic networks more accurately than DYNMOGA and FaceNet.

  11. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
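
    For comparison with the heap-transform construction, a plain Givens-rotation QR factorization of a real matrix can be sketched as follows; it is not the authors' method, which replaces the sequence of 2x2 rotations by a single signal-induced transform.

```python
# QR factorization by Givens rotations (illustrative sketch, real matrices).
import numpy as np

def givens_qr(A):
    A = A.astype(float)
    m, n = A.shape
    Q, R = np.eye(m), A.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):           # zero out R[i, j] against the pivot R[j, j]
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(m)                        # rotation acting on rows j and i
            G[[j, i], [j, i]] = c
            G[j, i], G[i, j] = s, -s
            R = G @ R                            # apply rotation on the left
            Q = Q @ G.T                          # accumulate the orthogonal factor
    return Q, R

A = np.random.default_rng(3).standard_normal((5, 3))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```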

  12. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    SciTech Connect

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  13. Determination of the Thermal Decomposition Products of Terephthalic Acid by Using Curie-Point Pyrolyzer

    NASA Astrophysics Data System (ADS)

    Begüm Elmas Kimyonok, A.; Ulutürk, Mehmet

    2016-04-01

    The thermal decomposition behavior of terephthalic acid (TA) was investigated by thermogravimetry/differential thermal analysis (TG/DTA) and Curie-point pyrolysis. TG/DTA analysis showed that TA is sublimed at 276°C prior to decomposition. Pyrolysis studies were carried out at various temperatures ranging from 160 to 764°C. Decomposition products were analyzed and their structures were determined by gas chromatography-mass spectrometry (GC-MS). A total of 11 degradation products were identified at 764°C, whereas no peak was observed below 445°C. Benzene, benzoic acid, and 1,1′-biphenyl were identified as the major decomposition products, and other degradation products such as toluene, benzophenone, diphenylmethane, styrene, benzaldehyde, phenol, 9H-fluorene, and 9-phenyl 9H-fluorene were also detected. A pyrolysis mechanism was proposed based on the findings.

  14. A domain decomposition algorithm for solving large elliptic problems

    SciTech Connect

    Nolan, M.P.

    1991-01-01

    An algorithm which efficiently solves large systems of equations arising from the discretization of a single second-order elliptic partial differential equation is discussed. The global domain is partitioned into not necessarily disjoint subdomains which are traversed using the Schwarz Alternating Procedure. On each subdomain the multigrid method is used to advance the solution. The algorithm has the potential to decrease solution time when data is stored across multiple levels of a memory hierarchy. Results are presented for a virtual memory, vector multiprocessor architecture. A study of the choice of inner iteration procedure and subdomain overlap is presented for a model problem, solved with two and four subdomains, sequentially and in parallel. Microtasking multiprocessing results are reported for multigrid on the Alliant FX-8 vector multiprocessor. A convergence proof for a class of matrix splittings for the two-dimensional Helmholtz equation is given.

  15. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  16. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  17. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data

    NASA Astrophysics Data System (ADS)

    Clark, Darin P.; Badea, Cristian T.

    2014-10-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.
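
    A deliberately simplified per-voxel version of the material decomposition step can be sketched with non-negative least squares, assuming a hypothetical calibrated sensitivity matrix; the spectral-diffusion algorithm's joint spatial regularization (split Bregman) is not reproduced here.

```python
# Per-voxel material decomposition with a calibrated sensitivity matrix (simplified sketch).
import numpy as np
from scipy.optimize import nnls

# Hypothetical 3-energy x 3-material sensitivity matrix (columns: iodine, gold,
# gadolinium); the values and units are illustrative only, not a real calibration.
S = np.array([[1.8, 4.2, 2.6],
              [2.9, 1.1, 3.4],
              [1.2, 2.3, 4.0]])

true_conc = np.array([3.1, 0.9, 2.9])          # mg/mL, matching the phantom concentrations above
measurement = S @ true_conc + np.random.default_rng(4).normal(0, 0.05, 3)

conc, residual = nnls(S, measurement)          # non-negative least squares per voxel
print("estimated concentrations (mg/mL):", conc)
```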

  18. Sweeping algorithms for five-point stencils and banded matrices

    SciTech Connect

    Kwong, Man Kam.

    1992-06-01

    We record MATLAB experiments implementing the sweeping algorithms we proposed recently to solve five-point stencils arising from the discretization of partial differential equations, notably the Ginzburg-Landau equations from the theory of superconductivity. Algorithms tested include two-direction, multistage, and partial sweeping.

  19. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant to both the study of ecosystem ecology and projections of future conditions for human societies.

  20. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, QR decomposition (QRD) is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm. QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (EigenValue Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.

  1. Automatic outlier suppression for rigid coherent point drift algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Songlin; Tu, Ruibin; Niu, Zhaodong; Li, Na; Chen, Zengping

    2014-10-01

    Point pattern matching (PPM), including hard assignment and soft assignment approaches, has attracted much attention. A typical probability-based method is the Coherent Point Drift (CPD) algorithm, which treats one point set (the model point set) as the centroids of a Gaussian mixture model and then fits it to the other (the target point set). It uses the expectation maximization (EM) framework, where the point correspondences and transformation parameters are updated alternately. However, the anti-outlier performance of CPD is not robust enough, since outliers remain involved in the computation until CPD converges. We therefore propose an automatic outlier suppression mechanism (AOS) to overcome this shortcoming of CPD. First, inliers and outliers are distinguished by converting the matching probability matrix into a doubly stochastic matrix. Then, transformation parameters are fitted using the accurate matching point sets. Finally, the model point set is forced to move coherently to the target point set by this transformation model. The transformed model point set is imported into the EM iteration again and the cycle repeats itself. The iteration finishes when the matching probability matrix converges or the cardinality of the accurate matching point set reaches its maximum. Besides, the covariance should be updated by the newest position error before re-entering the EM algorithm. The experimental results based on both synthetic and real data indicate that, compared with other algorithms, AOS-CPD is more robust and efficient. It offers good practicality and accuracy in rigid PPM applications.
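
    The doubly stochastic normalization mentioned above can be approximated with a generic Sinkhorn-style iteration; the sketch below (including the acceptance threshold) is an illustrative stand-in, not the paper's AOS rule, and assumes a square or outlier-padded probability matrix.

```python
# Sinkhorn-style normalization of a matching probability matrix (illustrative sketch).
import numpy as np

def sinkhorn(P, n_iters=50, eps=1e-12):
    """Alternately normalize rows and columns of a non-negative matrix."""
    P = P.astype(float) + eps
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)      # row normalization
        P /= P.sum(axis=0, keepdims=True)      # column normalization
    return P

def flag_inliers(P, threshold=0.5):
    """Keep a pair (m, n) only if its normalized probability clearly dominates its row."""
    P = sinkhorn(P)
    best_cols = P.argmax(axis=1)
    return [(m, int(n)) for m, n in enumerate(best_cols) if P[m, n] > threshold]
```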

  2. Combining DC algorithms (DCAs) and decomposition techniques for the training of nonpositive-semidefinite kernels.

    PubMed

    Akoa, François Bertrand

    2008-11-01

    Today, decomposition methods are one of the most popular methods for training support vector machines (SVMs). With the use of kernels that do not satisfy Mercer's condition, new techniques must be designed to handle the nonpositive-semidefinite kernels resulting from this choice. In this work we incorporate difference of convex (DC) functions optimization techniques into decomposition methods to tackle this difficulty. The new approach needs no problem modification, and we show that the use of a truncated DC algorithm (DCA) alone in the decomposition scheme produces a sufficient decrease of the objective function at each iteration. Thanks to this property, an asymptotic convergence proof of the new algorithm is produced without any blockwise convexity assumption on the objective function. We also investigate a working set selection rule using second-order information for sequential minimal optimization (SMO)-type decomposition in the spirit of DC optimization. Numerical results show the robustness and the efficiency of the new methods compared with state-of-the-art software. PMID:18990641

  3. Monte Carlo algorithm for least dependent non-negative mixture decomposition.

    PubMed

    Astakhov, Sergey A; Stögbauer, Harald; Kraskov, Alexander; Grassberger, Peter

    2006-03-01

    We propose a simulated annealing algorithm (stochastic non-negative independent component analysis, SNICA) for blind decomposition of linear mixtures of non-negative sources with non-negative coefficients. The demixing is based on a Metropolis-type Monte Carlo search for least dependent components, with the mutual information between recovered components as a cost function and their non-negativity as a hard constraint. Elementary moves are shears in two-dimensional subspaces and rotations in three-dimensional subspaces. The algorithm is geared at decomposing signals whose probability densities peak at zero, the case typical in analytical spectroscopy and multivariate curve resolution. The decomposition performance on large samples of synthetic mixtures and experimental data is much better than that of traditional blind source separation methods based on principal component analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS, BTEM). PMID:16503615

  4. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    SciTech Connect

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  5. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  6. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806

  7. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  8. Polynomial interior-point algorithms for horizontal linear complementarity problem

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.; Bai, Y. Q.

    2009-11-01

    In this paper a class of polynomial interior-point algorithms for the horizontal linear complementarity problem, based on a new parametric kernel function with parameters p ∈ [0,1] and σ ≥ 1, is presented. The proposed parametric kernel function is neither exponentially convex nor strongly convex like the usual kernel functions, and it has a finite value at the boundary of the feasible region. It is used both for determining the search directions and for measuring the distance between the given iterate and the μ-center of the algorithm. The currently best known iteration bounds for the algorithm with large- and small-update methods are derived, which reduce the gap between the practical behavior of the algorithms and their theoretical performance results. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p, σ and θ.

  9. A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.

    PubMed

    Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei

    2015-11-01

    A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U_{m×r} V_{n×r}^T (r < m, n) of the matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr^2 ρ^r) or O(n^3 ρ^{3r}) floating-point operations (flops) are required, which compares favorably with the O(mnr + r^2(m+n)) flops required by state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n^2, mr+nr) real values need to be stored. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k × nρ^k or mρ^l × l, where k or l usually takes the value r+1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. Experimental results on random synthetic matrices with sizes such as 131 072 × 1 024 and on real data sets such as dinosaur indicate that RSM is 4.30-197.95 times faster than state-of-the-art algorithms while achieving or approximating the best precision. PMID:26208344

  10. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy and recall rates. PMID:26448738
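
    A minimal, hand-rolled sketch of the TF-IDF scoring step described above. The toy corpus, English tokens (in place of segmented Chinese text), and the single weight factor standing in for the paper's optimized weights are all assumptions for illustration:

```python
import math
from collections import Counter

# Toy corpus: each "document" is a lecture fragment from a hypothetical course.
docs = [
    "pointer arithmetic and pointer types in c",
    "array indexing and array initialization",
    "pointer to array and dynamic memory allocation",
]

tokenized = [d.split() for d in docs]
N = len(tokenized)

# Document frequency for each term.
df = Counter(t for doc in tokenized for t in set(doc))

def tfidf_scores(doc_tokens, weight=1.0):
    """TF-IDF per term; `weight` stands in for the paper's optimized weighting."""
    tf = Counter(doc_tokens)
    n = len(doc_tokens)
    return {t: weight * (c / n) * math.log(N / df[t]) for t, c in tf.items()}

# Rank candidate knowledge points in the first document by score.
scores = tfidf_scores(tokenized[0])
for term, s in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{term:12s} {s:.3f}")
```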

  11. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  13. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
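
    Since the abstract presents Richardson-Lucy as a special case of the non-negative, flux-preserving iterative schemes it discusses, a minimal 1-D Richardson-Lucy sketch may help fix ideas. The PSF and test signal below are illustrative assumptions, not the paper's interferometric setup:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Basic Richardson-Lucy deconvolution for a 1-D non-negative signal."""
    psf = psf / psf.sum()           # normalized PSF (approximately) preserves total flux
    psf_mirror = psf[::-1]
    x = np.full_like(y, y.mean())   # flat, positive initial estimate
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / (blurred + eps)
        x *= np.convolve(ratio, psf_mirror, mode="same")
    return x

# Illustrative example: two point sources blurred by a Gaussian PSF.
t = np.arange(-10, 11)
psf = np.exp(-0.5 * (t / 2.0) ** 2)
truth = np.zeros(128); truth[40] = 1.0; truth[80] = 0.6
observed = np.convolve(truth, psf / psf.sum(), mode="same")
estimate = richardson_lucy(observed, psf, n_iter=200)
print("flux in:", round(float(observed.sum()), 3), "flux out:", round(float(estimate.sum()), 3))
```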

  14. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging due to the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques such as moving averages, splines, and Loess smoothing exist. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal trend-decomposition procedure based on Loess (STL) decomposes VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces similar results as previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise" including diurnal variations while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of straight line relative to STL trend line; (ii) range of STL remainder; and (iii) variance of STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
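
    A compact sketch of the idea, assuming statsmodels' STL implementation and a small, simplified genetic search over the seasonal and trend window lengths. The synthetic hourly series, the weights, and the parameter ranges are illustrative, not the study's values:

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
period = 24                                   # assume hourly data with a diurnal cycle
t = np.arange(10 * period)
vwc = 0.3 + 0.0005 * t + 0.02 * np.sin(2 * np.pi * t / period) + 0.005 * rng.standard_normal(t.size)

def objective(params, w=(1.0, 1.0, 1.0)):
    seasonal, trend = params
    res = STL(vwc, period=period, seasonal=seasonal, trend=trend).fit()
    line = np.polyval(np.polyfit(t, res.trend, 1), t)   # straight-line fit to the STL trend
    rmse = np.sqrt(np.mean((res.trend - line) ** 2))
    return w[0] * rmse + w[1] * np.ptp(res.resid) + w[2] * np.var(res.resid)

def random_params():
    seasonal = rng.choice(np.arange(7, 32, 2))          # odd seasonal window lengths
    trend = rng.choice(np.arange(2 * period + 1, 5 * period, 2))  # odd trend windows > period
    return int(seasonal), int(trend)

# Tiny genetic search: evaluate, keep the best half, refill with fresh random
# candidates (a crude stand-in for crossover and mutation).
pop = [random_params() for _ in range(8)]
for _ in range(5):
    pop.sort(key=objective)
    pop = pop[:4] + [random_params() for _ in range(4)]
best = min(pop, key=objective)
print("best (seasonal, trend):", best, "objective:", round(objective(best), 6))
```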

  15. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-04-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space
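
    A toy, global (non-localized) illustration of the low-rank-plus-sparse idea using plain numpy: a truncated SVD captures the quasi-static stellar halo across frames, and entry-wise soft-thresholding of the residual isolates a sparse moving signal. This is only a sketch of the three-term decomposition; the actual LLSG algorithm works on local patches with randomized low-rank approximations, and the synthetic "halo" and "planet" below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, side = 30, 32
frames = rng.normal(0, 0.05, (n_frames, side, side))
frames += np.exp(-((np.indices((side, side)) - side / 2) ** 2).sum(0) / 20.0)  # static stellar halo
for k in range(n_frames):                      # faint "planet" moving one pixel per frame
    frames[k, 8, 5 + k % 10] += 0.5

X = frames.reshape(n_frames, -1)               # frames as rows of a data matrix

# Low-rank term from a rank-r truncated SVD.
r = 3
U, s, Vt = np.linalg.svd(X, full_matrices=False)
L = (U[:, :r] * s[:r]) @ Vt[:r]

# Sparse term by entry-wise soft-thresholding of the residual; the rest is "noise".
resid = X - L
tau = 3 * np.median(np.abs(resid))
S = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)
G = resid - S

print("rank(L) =", r, "| nonzeros in S:", int((S != 0).sum()), "| noise std ~", round(float(G.std()), 3))
```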

  17. Communication: Active space decomposition with multiple sites: Density matrix renormalization group algorithm

    SciTech Connect

    Parker, Shane M.; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μE_h or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  18. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalizer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtime about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.
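
    A serial numpy sketch of the randomized range-finder idea underlying such randomized low-rank decompositions. This is the generic randomized-SVD scheme, not the report's Cray XMT interpolative-decomposition code, and the matrix sizes are illustrative:

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation via a randomized range finder: A ~ Q (Q^H A)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # random test matrix
    Y = A @ Omega                                      # sample the range of A
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis of the sample
    B = Q.conj().T @ A                                 # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]

# Quick check on a synthetic low-rank complex-valued test matrix.
rng = np.random.default_rng(1)
A = (rng.standard_normal((2000, 40)) + 1j * rng.standard_normal((2000, 40))) @ \
    (rng.standard_normal((40, 500)) * 0.01)
U, s, Vt = randomized_low_rank(A, k=50)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print("relative error:", round(float(err), 8))
```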

  19. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and at the same time simplest models of both the corresponding sub-systems and of the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for the linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled mode-selection error that increases with the mode time scale. In this report we combine the two methods in such a way that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and at the same time simplest ("optimal") models of climate systems

  20. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  1. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
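
    As a concrete illustration of one of the techniques compared (NMF), here is a minimal multiplicative-update factorization of synthetic non-negative "spectra"; the spectral emulator, tuning mechanism, and SDA of the paper are not reproduced, and the Gaussian emission lines below are an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 observed spectra, each a non-negative mix of 3 Gaussian emission lines.
wav = np.linspace(0, 1, 400)
lines = np.stack([np.exp(-0.5 * ((wav - c) / 0.02) ** 2) for c in (0.25, 0.5, 0.75)])
abund = rng.random((200, 3))
V = abund @ lines + 0.01 * rng.random((200, 400))      # strictly non-negative

def nmf(V, r, n_iter=300, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F."""
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, r=3)
print("relative reconstruction error:", round(float(np.linalg.norm(V - W @ H) / np.linalg.norm(V)), 4))
```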

  2. Object aggregation using merge-at-a-point algorithm

    NASA Astrophysics Data System (ADS)

    Salaria, Kanupriya; Darsono, Wiriyanto; Hinman, Michael; Linderman, Mark; Bai, Li

    2004-04-01

    This paper describes a novel technique to detect military convoys' moving patterns using Ground Moving Target Indicator (GMTI) data. The specific pattern studied here is moving vehicle groups that are merging onto a prescribed location. The algorithm can be used to detect a military convoy's identity so that the situation can be assessed to prevent hostile enemy military advancements. The technique uses the minimum error solution (MES) to predict the point of intersection of the vehicle tracks. By comparing this point of intersection to the prescribed location, it can be determined whether the vehicles are merging. Two tasks are performed to effectively determine the merged vehicle group patterns: 1) investigate the number of vehicles needed in the MES algorithm, and 2) analyze three decision rules for clustering the vehicle groups. The simulation has shown an accuracy of approximately 88.9% in detecting vehicle groups that merge at a prescribed location.
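
    The core geometric step (estimating a common intersection point of several tracks in a least-squares sense) can be sketched as follows. The track data are invented, and ordinary least squares stands in for the paper's MES formulation:

```python
import numpy as np

def least_squares_intersection(points, directions):
    """Point minimizing the sum of squared distances to lines p_i + t*d_i (2-D or 3-D)."""
    dim = points.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(dim) - np.outer(d, d)   # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Three tracks (position + heading) that roughly converge on (10, 5).
pts = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 20.0]])
dirs = np.array([[10.0, 5.0], [-10.0, 5.1], [0.1, -15.0]])
merge_point = least_squares_intersection(pts, dirs)
print("estimated merge point:", merge_point.round(2))
```

    Comparing the solved point with the prescribed location (for example, against a distance threshold) then gives the merge/no-merge decision described above.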

  3. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
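
    The point-source case reduces to the standard CCD signal-to-noise expression. A small sketch under assumed, illustrative detector parameters (not the instrument values used in the report):

```python
import math

def point_source_snr(signal_rate, sky_rate, dark_rate, read_noise, n_pix, t):
    """CCD SNR for a point source: source counts over shot, background, dark and read noise."""
    S = signal_rate * t                                    # source electrons collected
    noise_var = S + n_pix * (sky_rate * t + dark_rate * t + read_noise ** 2)
    return S / math.sqrt(noise_var)

# Illustrative numbers: electron/s rates, a 9-pixel aperture, exposures from 1 s to 1000 s.
for t in (1, 10, 100, 1000):
    print(f"t = {t:5d} s  SNR = {point_source_snr(50, 5, 0.02, 5, 9, t):7.1f}")
```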

  4. Metabolic flux estimation--a self-adaptive evolutionary algorithm with singular value decomposition.

    PubMed

    Yang, Jing; Wongsa, Sarawan; Kadirkamanathan, Visakan; Billings, Stephen A; Wright, Phillip C

    2007-01-01

    Metabolic flux analysis is important for metabolic system regulation and intracellular pathway identification. A popular approach for intracellular flux estimation involves using 13C tracer experiments to label states that can be measured by nuclear magnetic resonance spectrometry or gas chromatography mass spectrometry. However, the bilinear balance equations derived from 13C tracer experiments and the noisy measurements require a nonlinear optimization approach to obtain the optimal solution. In this paper, the flux quantification problem is formulated as an error-minimization problem with equality and inequality constraints through the 13C balance and stoichiometric equations. The stoichiometric constraints are transformed to a null space by singular value decomposition. Self-adaptive evolutionary algorithms are then introduced for flux quantification. The performance of the evolutionary algorithm is compared with ordinary least squares estimation by the simulation of the central pentose phosphate pathway. The proposed algorithm is also applied to the central metabolism of Corynebacterium glutamicum under lysine-producing conditions. A comparison between the results from the proposed algorithm and data from the literature is given. The complexity of a metabolic system with bidirectional reactions is also investigated by analyzing the fluctuations in the flux estimates when available measurements are varied. PMID:17277420

  5. Despeckling algorithm on ultrasonic image using adaptive block-based singular value decomposition

    NASA Astrophysics Data System (ADS)

    Sae-Bae, Napa; Udomhunsakul, Somkait

    2008-03-01

    Speckle noise reduction is an important technique for enhancing the quality of ultrasonic images. In this paper, a despeckling algorithm based on adaptive block-based singular value decomposition filtering (BSVD) applied to ultrasonic images is presented. Instead of applying BSVD directly to the ultrasonic image, we propose to apply BSVD to the noisy edge image obtained from the difference between the logarithmic transformations of the original image and a blurred version of it. The recovered image is obtained by combining the speckle-noise-free edge image with the blurred version of the image. Finally, an exponential transformation is applied in order to get the reconstructed image. To evaluate our algorithm against well-known algorithms such as the Lee filter, Kuan filter, homomorphic Wiener filter, median filter and wavelet soft thresholding, four image quality measurements are used: Mean Square Error (MSE), Signal to MSE (S/MSE), edge preservation (β), and correlation (ρ). The results clearly show that the proposed algorithm outperforms the other methods in terms of both quantitative and subjective assessments.

  6. A 64-bit orthorectification algorithm using fixed-point arithmetic

    NASA Astrophysics Data System (ADS)

    French, Joseph C.; Balster, Eric J.; Turri, William F.

    2013-10-01

    As the cost of imaging systems has decreased, their quality and size have increased. This dynamic has made many aerial imaging applications practical, such as coastline monitoring and vegetation indexing. Orthorectification is required for many of these applications; however, it is also computationally expensive. The computational cost is due to the floating-point operations and divisions inherent in the orthorectification process. Two novel algorithm modifications are proposed which significantly reduce the computational cost. The first modification uses fixed-point arithmetic in place of the floating-point operations. The second replaces the division with a multiplication by the inverse. The result is a 2x increase in throughput while remaining within 15% of a pixel size in position.
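
    A small sketch of the two modifications in isolation, using a Q16.16 fixed-point format and a precomputed fixed-point reciprocal; the format and values are illustrative assumptions, not those of the 64-bit implementation described above:

```python
FRAC_BITS = 16                       # Q16.16: 16 integer bits, 16 fractional bits
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def to_float(x: int) -> float:
    return x / ONE

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS      # rescale after the integer multiply

# Replace "y = x / d" by a multiplication with the precomputed reciprocal of d.
x, d = 1234.5678, 37.25
x_fx = to_fixed(x)
inv_d_fx = to_fixed(1.0 / d)         # computed once, reused for every pixel
y_fx = fixed_mul(x_fx, inv_d_fx)

print("float division      :", x / d)
print("fixed-point multiply:", to_float(y_fx))
```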

  7. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. Raw data complexity, as given by scanner restitution, leads to several problems in design and 3D-modelling starting from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, using more and more sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form out of some definite geometric/architectonic shapes. Such a process consists of several steps: first, the definition of plane sections and the characterization of their architecture; secondly, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, …). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectonic section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.

  8. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, while their copyright protection problems attract more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks such as rotation, cropping, smoothing and adding noise, the projection of the model's point set is chosen as the carrier of the watermark in the presented algorithm, which contains copyright information such as logos, text, and so on. The projections of the model's point set onto the x, y and z planes are calculated respectively. Before the watermark embedding process, the original watermark is scrambled by a key. Each projection is singular value decomposed, and the scrambled watermark is embedded into the SVD (singular value decomposition) domain of the x, y and z planes respectively. After that we use the watermarked x, y and z planes to recover the vertices of the model, and the watermarked model is obtained. Only the legal user can remove the watermark from the watermarked models using the private key. Experiments presented in the paper show that the proposed algorithm has good performance under various malicious attacks.

  9. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.

  10. Point of Care and Factor Concentrate-Based Coagulation Algorithms

    PubMed Central

    Theusinger, Oliver M.; Stein, Philipp; Levy, Jerrold H.

    2015-01-01

    In the last years it has become evident that the use of blood products should be reduced whenever possible. There is increasing evidence regarding serious adverse events, including higher mortality and morbidity, related to transfusions. The use of point of care (POC) devices integrated in algorithms is one of the important mechanisms to limit blood product exposure. Any type of algorithm, especially the POC-based ones, allows goal-directed transfusions of blood products and even better targeted factor concentrate substitutions. Different types of algorithms in different surgical settings (cardiac surgery, trauma, liver surgery etc.) have been established with growing interest in their use as they offer objective therapy for management and reduction of blood product use. The use of POC devices with evidence-based algorithms is important in the bleeding patient independent of its origin (traumatic vs. surgical). The use of factor concentrates compared to the classical blood products can be cost-saving, beneficial for the patient, and in agreement with the WHO-requested standard of care. The empiric and uncontrolled use of blood products such as fresh frozen plasma, red blood cells, and platelets without POC monitoring should no longer be followed with regard to actual evidence in literature. Furthermore, the use of factor concentrates may provide better outcomes and potential for cost saving. PMID:26019707

  11. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of matching algorithms for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved celerity key point detection approach using a self-adapting threshold of Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate insecure edge points in order to obtain key points with higher stability. This approach to detecting key points is characterized by a small amount of computation, high positioning accuracy and strong anti-noise ability; (2) PCA-SIFT is utilized to describe key points. 128-dimensional vectors are formed based on the SIFT method for the extracted key points. A low-dimensional feature space was established by the eigenvectors of all the key points, and each eigenvector was projected onto the feature space to form a low-dimensional eigenvector. These key points were re-described by the dimension-reduced eigenvectors. After reducing the dimension by PCA, the descriptor was reduced from the original 128 dimensions to 20. This method reduces the dimensionality of approximate nearest-neighbour searching, thereby increasing overall speed; (3) The distance ratio between the nearest neighbour and the second nearest neighbour is regarded as the measurement criterion for initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction
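
    A compact sketch of steps (2) and (3): PCA projection of 128-D descriptors down to 20-D followed by nearest-neighbour matching with the distance-ratio test. Random descriptors stand in for real SIFT output, and the ratio threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for SIFT descriptors from two images (rows are 128-D key point descriptors).
desc_ref = rng.standard_normal((500, 128))
desc_test = desc_ref[:300] + 0.1 * rng.standard_normal((300, 128))   # noisy repeats

# (2) PCA learned from the reference descriptors, keeping 20 components.
mean = desc_ref.mean(axis=0)
_, _, Vt = np.linalg.svd(desc_ref - mean, full_matrices=False)
P = Vt[:20].T                                   # 128 x 20 projection matrix
low_ref = (desc_ref - mean) @ P
low_test = (desc_test - mean) @ P

# (3) Match with the nearest/second-nearest distance-ratio criterion.
matches = []
for i, q in enumerate(low_test):
    d = np.linalg.norm(low_ref - q, axis=1)
    nn1, nn2 = np.argsort(d)[:2]
    if d[nn1] / d[nn2] < 0.8:                   # ratio threshold (assumed)
        matches.append((i, int(nn1)))

correct = sum(int(i == j) for i, j in matches)
print(f"{len(matches)} matches, {correct} consistent with ground truth")
```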

  12. New Advances in the Study of the Proximal Point Algorithm

    NASA Astrophysics Data System (ADS)

    Moroşanu, Gheorghe

    2010-09-01

    Consider in a real Hilbert space H the inexact, Halpern-type, proximal point algorithm x_{n+1} = α_n u + (1 − α_n) J_{β_n} x_n + e_n, n = 0, 1, …, (H-PPA), where u, x_0 ∈ H are given points, J_{β_n} = (I + β_n A)^{-1} is the resolvent of a given maximal monotone operator A, and (e_n) is the error sequence, under new assumptions on α_n ∈ (0,1) and β_n ∈ (0,1). Several strong convergence results for the H-PPA are presented under the general condition that the error sequence converges strongly to zero, thus improving the classical Rockafellar summability condition on (‖e_n‖) that has been used extensively so far for different versions of the proximal point algorithm. Our results extend and improve some recent ones. These results can be applied to approximate minimizers of convex functionals. Convergence rate estimates are established for a sequence approximating the minimum value of such a functional.
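
    For readers more familiar with optimization notation, a brief reminder (in LaTeX) of how the resolvent in the H-PPA relates to the proximal operator, with the classic soft-thresholding example; this is standard background, not part of the cited results:

```latex
% Resolvent of A = \partial f equals the proximal operator of f:
\[
  J_{\beta}x \;=\; (I+\beta\,\partial f)^{-1}x
           \;=\; \operatorname{prox}_{\beta f}(x)
           \;=\; \arg\min_{y\in H}\Big\{ f(y) + \tfrac{1}{2\beta}\|y-x\|^{2} \Big\}.
\]
% Example: for f(y)=|y| on H=\mathbb{R}, the H-PPA step uses soft thresholding:
\[
  \operatorname{prox}_{\beta|\cdot|}(x) \;=\; \operatorname{sign}(x)\,\max\{|x|-\beta,\,0\},
  \qquad
  x_{n+1} \;=\; \alpha_n u + (1-\alpha_n)\,\operatorname{prox}_{\beta_n|\cdot|}(x_n) + e_n .
\]
```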

  13. A propagating mode extraction algorithm for microwave waveguide using variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Yin, Aijun; Ren, Hongji

    2015-09-01

    A microwave propagating mode extraction algorithm is proposed for microwave waveguides using variational mode decomposition (VMD). The reflected signal acquired by the waveguide can be seen as a mixture of the propagating mode and evanescent modes. The propagating mode contains information regarding defects, and the evanescent modes can be treated as noise. By using VMD, the propagating mode can be extracted. Current decomposition models are mostly limited by a lack of mathematical theory, by recursive sifting that does not allow backward error correction, or by an inability to cope properly with noise. In VMD, the bands are determined adaptively and the corresponding modes are estimated concurrently. An ensemble of modes is derived; these modes collectively reproduce the input signal while each is smoothed after demodulation into the baseband. The proposed model is particularly robust to sampling and noise. The bridge between the physical and mathematical models is demonstrated. A coated steel defect detection experiment is conducted using an X-band open-ended rectangular waveguide to evaluate the efficacy of the VMD method. Two samples are demonstrated. The steel sample with a hole has a regular and clear defect, whereas the defect of the steel sample with peening is fuzzy. For both samples, the VMD results can accurately identify the defects.

  14. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DEMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DEMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  15. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm.

    PubMed

    Shin, Hyun-Ho; Yoon, Woong-Sup

    2008-07-01

    An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids so the liquid boiling problem was avoided with no separate use of temperature controlling methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit with strong evaporation on the surface of the liquids, while rendering a slender jet. The elevation of the injection speed and the wall temperature causes an activation of the surface evaporation concurrent with reduction in the jet breakup length and the drop size. PMID:19051924

  16. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
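
    The block-triangular structure amounts to sweeping forward in time: arrivals at one time step can only feed later time steps. A small sketch of that forward reachability sweep over a time-varying adjacency structure; the toy wind-drift graph below is an assumption:

```python
# Nodes are surface locations; edges_per_step[t][u] lists locations reachable from u
# during time step t (e.g., by riding the wind field). Because motion only goes
# forward in time, one forward sweep over the time layers suffices.
def earliest_arrival(edges_per_step, start):
    arrival = {start: 0}
    frontier = {start}
    for t, edges in enumerate(edges_per_step, start=1):
        nxt = set()
        for u in frontier:
            for v in edges.get(u, ()):
                if v not in arrival:          # first (earliest) time we can reach v
                    arrival[v] = t
                    nxt.add(v)
        frontier |= nxt                       # staying put is allowed in this toy model
    return arrival

# Three time steps of a toy time-varying graph.
steps = [
    {"A": ["B"], "B": ["C"]},
    {"B": ["D"], "C": ["D"]},
    {"D": ["E"], "A": ["C"]},
]
print(earliest_arrival(steps, "A"))   # e.g. {'A': 0, 'B': 1, 'D': 2, 'C': 3, 'E': 3}
```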

  17. MOEA/D-ACO: a multiobjective evolutionary algorithm using decomposition and ant colony.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2013-12-01

    Combining ant colony optimization (ACO) and the multiobjective evolutionary algorithm (EA) based on decomposition (MOEA/D), this paper proposes a multiobjective EA, i.e., MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single-objective optimization problems. Each ant (i.e., agent) is responsible for solving one subproblem. All the ants are divided into a few groups, and each ant has several neighboring ants. An ant group maintains a pheromone matrix, and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group's pheromone matrix, its own heuristic information matrix, and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two sets of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms the MOEA/D with conventional genetic operators and local search on all nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to its good performance on the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than BicriterionAnt on all 12 test instances. We also evaluate the effects of grouping, neighborhood, and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that the reactive search optimization scheme, i.e., the "learning while optimizing" principle, is effective in improving multiobjective optimization algorithms. PMID:23757576

  18. A Three-level BDDC algorithm for saddle point problems

    SciTech Connect

    Tu, X.

    2008-12-10

    BDDC algorithms have previously been extended to the saddle point problems arising from mixed formulations of elliptic and incompressible Stokes problems. In these two-level BDDC algorithms, all iterates are required to be in a benign space, a subspace in which the preconditioned operators are positive definite. This requirement can lead to large coarse problems, which have to be generated and factored by a direct solver at the beginning of the computation and can ultimately become a bottleneck. An additional level is introduced in this paper to solve the coarse problem approximately and to remove this difficulty. This three-level BDDC algorithm keeps all iterates in the benign space, and conjugate gradient methods can therefore be used to accelerate the convergence. This work is an extension of the three-level BDDC methods for standard finite element discretizations of elliptic problems, and the same rate of convergence is obtained for the mixed formulation of the same problems. An estimate of the condition number for this three-level BDDC method is provided and numerical experiments are discussed.

  19. A novel algorithm for generating libration point orbits about the collinear points

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Shan, Jinjun

    2014-09-01

    This paper presents a numerical algorithm that can generate long-term libration point orbits (LPOs) and the transfer orbits from parking orbits to the LPOs in the circular restricted three-body problem (CR3BP) and the full solar system model without initial guesses. The families of quasi-periodic LPOs in the CR3BP can also be constructed with this algorithm. Using the dynamical behavior of the LPO, the transfer orbit from the parking orbit to the LPO is generated with a bisection method. At the same time, a short segment of the target LPO connected with the transfer orbit is obtained; this short segment is then extended by correcting the state towards its adjacent point on the stable manifold of the target LPO with a differential evolution algorithm. By implementing the correction strategy repeatedly, the LPO can be extended to any length as needed. Moreover, combined with a continuation procedure, this algorithm can be used to generate the families of quasi-periodic LPOs in the CR3BP.

  20. Noise reduction in Doppler ultrasound signals using an adaptive decomposition algorithm.

    PubMed

    Zhang, Yufeng; Wang, Le; Gao, Yali; Chen, Jianhua; Shi, Xinling

    2007-07-01

    A novel de-noising method for improving the signal-to-noise ratio (SNR) of Doppler ultrasound blood flow signals, called the matching pursuit method, is proposed. Using this method, the Doppler ultrasound signal is first decomposed into a linear expansion of waveforms, called time-frequency atoms, which are selected from a redundant dictionary of Gabor functions. Subsequently, a decay-parameter-based algorithm is employed to determine the number of decomposition iterations. Finally, the de-noised Doppler signal is reconstructed using the selected components. The SNR improvement, the amount of signal lost from the original, and the precision of maximum frequency estimation on simulated Doppler blood flow signals were used to compare the performance of the wavelet, wavelet packet and matching pursuit de-noising algorithms. From the simulation and clinical experiment results, it was concluded that the matching pursuit approach performs better than the DWT and WP methods for Doppler ultrasound signal de-noising. PMID:16996774
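
    A minimal matching-pursuit sketch with a Gabor-style dictionary, in the spirit of the decomposition described above; the dictionary grid, fixed stopping rule, and test signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np

n = 256
t = np.arange(n)

# Redundant dictionary of unit-norm Gabor-style atoms (a coarse time/frequency/scale grid).
atoms = []
for sigma in (8, 16, 32):
    for t0 in range(0, n, 8):
        for f in np.linspace(0.02, 0.4, 20):
            g = np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(2 * np.pi * f * t)
            atoms.append(g / np.linalg.norm(g))
D = np.array(atoms)                      # (n_atoms, n)

def matching_pursuit(y, D, n_atoms=10):
    residual = y.copy()
    approx = np.zeros_like(y)
    for _ in range(n_atoms):             # a decay-based stopping rule could be used instead
        corr = D @ residual
        k = np.argmax(np.abs(corr))
        approx += corr[k] * D[k]
        residual -= corr[k] * D[k]
    return approx, residual

rng = np.random.default_rng(0)
clean = np.exp(-0.5 * ((t - 100) / 20) ** 2) * np.cos(2 * np.pi * 0.1 * t)
noisy = clean + 0.2 * rng.standard_normal(n)
denoised, _ = matching_pursuit(noisy, D)
print("noisy error:", round(float(np.linalg.norm(noisy - clean)), 3))
print("MP error   :", round(float(np.linalg.norm(denoised - clean)), 3))
```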

  1. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is carried on the input image, and the boundary is extracted on the low resolution image obtained by the wavelet transform of the input image. Secondly, calculating LiveWire shortest path is based on the control point set direction search by utilizing the spatial relationship between the two control points users provide in real time. Thirdly, the search order of the adjacent points of the starting node is set in advance. An ordinary queue instead of a priority queue is taken as the storage pool of the points when optimizing their shortest path value, thus reducing the complexity of the algorithm from O[n2] to O[n]. Finally, A region iterative backward projection method based on neighborhood pixel polling has been used to convert dual-pixel boundary of the reconstructed image to single-pixel boundary after Haar wavelet inverse transform. The algorithm proposed in this paper combines the advantage of the Haar wavelet transform and the advantage of the optimal path searching method based on control point set direction search. The former has fast speed of image decomposition and reconstruction and is more consistent with the texture features of the image and the latter can reduce the time complexity of the original algorithm. So that the algorithm can improve the speed in interactive boundary extraction as well as reflect the boundary information of the image more comprehensively. All methods mentioned above have a big role in improving the execution efficiency and the robustness of the algorithm.

  2. A maximum power point tracking algorithm for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP has not been published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive in nature and is supposed to have fast convergence and the least misadjustment. There are two parts in its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking in the case of a commercially available solar panel, the MSX-60. The power electronics circuit is simulated by PSIM software.
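
    As a simple numerical illustration of locating the MPP on a nonlinear I-V characteristic, here is a scan of the single-diode PV model for maximum power. The cell parameters are illustrative assumptions; this is not the Lagrange-based recursion or the MATLAB/PSIM setup described above:

```python
import numpy as np

def pv_current(v, i_ph=3.8, i_0=1e-7, n=1.3, v_t=0.02585, n_series=36):
    """Single-diode model (series/shunt resistance neglected for simplicity)."""
    return i_ph - i_0 * (np.exp(v / (n * v_t * n_series)) - 1.0)

v = np.linspace(0.0, 22.0, 5000)          # sweep from short circuit toward open circuit
i = np.clip(pv_current(v), 0.0, None)     # clamp negative currents beyond open circuit
p = v * i

k = int(np.argmax(p))
print(f"MPP ~ V = {v[k]:.2f} V, I = {i[k]:.2f} A, P = {p[k]:.1f} W")
```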

  3. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    PubMed

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. PMID:26953177

  4. A PARALIND Decomposition-Based Coherent Two-Dimensional Direction of Arrival Estimation Algorithm for Acoustic Vector-Sensor Arrays

    PubMed Central

    Zhang, Xiaofei; Zhou, Min; Li, Jianfeng

    2013-01-01

    In this paper, we combine the acoustic vector-sensor array parameter estimation problem with the parallel profiles with linear dependencies (PARALIND) model, which was originally applied in biology and chemistry. Exploiting the PARALIND decomposition approach, we propose a blind coherent two-dimensional direction of arrival (2D-DOA) estimation algorithm for arbitrarily spaced acoustic vector-sensor arrays with unknown locations. The proposed algorithm automatically pairs the azimuth and elevation angles for coherent and incoherent angle estimation with acoustic vector-sensor arrays, and also recovers the correlation matrix of the sources. In contrast with conventional coherent angle estimation algorithms such as the forward backward spatial smoothing (FBSS) estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, our algorithm not only has much better angle estimation performance, even for closely spaced sources, but is also applicable to arbitrary arrays. Simulation results verify the effectiveness of our algorithm. PMID:23604030

  5. Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter J.

    2016-03-01

    Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes that are able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of Demmel et al. (SIAM J Sci Comput 34(1):206-239, 2012), which is further used as the basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimal required communication make it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
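
    A serial sketch of the TSQR idea the abstract describes: factor row blocks independently, reduce the stacked R factors with a second QR, and take the SVD of the small triangular factor. The flat two-stage reduction below stands in for the binary reduction tree of the actual parallel algorithm, so it is an illustration of the principle rather than the paper's implementation.

```python
import numpy as np

def tsqr_svd(A, n_blocks=4):
    """Serial sketch of a TSQR-based SVD for a tall-and-skinny matrix A (m >> n).
    In the parallel algorithm each block QR would run on a different process."""
    blocks = np.array_split(A, n_blocks, axis=0)
    Qs, Rs = zip(*(np.linalg.qr(B) for B in blocks))   # local QR factorizations
    Q2, R = np.linalg.qr(np.vstack(Rs))                # reduce the stacked R factors
    Ur, s, Vt = np.linalg.svd(R)                       # small n x n SVD
    # Recover the tall left singular vectors block by block.
    Q2_blocks = np.array_split(Q2, n_blocks, axis=0)
    U = np.vstack([Qi @ Q2i for Qi, Q2i in zip(Qs, Q2_blocks)]) @ Ur
    return U, s, Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((10000, 20))
U, s, Vt = tsqr_svd(A)
print(np.allclose(U * s @ Vt, A))   # the factors should reconstruct A
```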

  6. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.

  7. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated choice of acupuncture points for EHF-therapy is presented. The prescription formed by the automated point-selection algorithm is advisory in character. Clinical investigations showed that applying the developed algorithm in EHF-therapy makes it possible to normalize the energetic state of the meridians and to solve many problems of organism functioning effectively.

  8. USER'S GUIDE FOR MPTER, A MULTIPLE POINT GAUSSIAN DISPERSION ALGORITHM WITH OPTIONAL TERRAIN ADJUSTMENT

    EPA Science Inventory

    The information presented in this user's guide is directed to air pollution scientists interested in applying air quality simulation models. MPTER is the designation for Multiple Point source algorithm with TERrain adjustments. This algorithm is useful for estimating air quality ...

  9. Asynchronous space-time algorithm based on a domain decomposition method for structural dynamics problems on non-matching meshes

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Matouš, Karel

    2016-02-01

    Large-scale practical engineering problems featuring localized phenomena often benefit from local control of mesh and time resolutions to efficiently capture the spatial and temporal scales of interest. To this end, we propose an asynchronous space-time algorithm based on a domain decomposition method for structural dynamics problems on non-matching meshes. The three-field algorithm is based on a dual-primal-like domain decomposition approach utilizing localized Lagrange multipliers along the common-refinement-based interface in space and time. The proposed algorithm is parallel in nature and well suited for a heterogeneous computing environment. Moreover, two levels of parallelism are embedded in this novel scheme. For linear dynamical problems, the algorithm is unconditionally stable, shows an optimal order of convergence with respect to the space and time discretizations, and ensures conservation of mass, momentum and energy across the non-matching grid interfaces. The method of manufactured solutions is used to verify the implementation, and an engineering application is considered, in which a sandwich plate is impacted by a projectile.

  10. LIFT: a nested decomposition algorithm for solving lower block triangular linear programs. Report AMD-859. [In PL/I for IBM 370

    SciTech Connect

    Ament, D; Ho, J; Loute, E; Remmelswaal, M

    1980-06-01

    Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)

  11. Technical Note: MRI only prostate radiotherapy planning using the statistical decomposition algorithm

    SciTech Connect

    Siversson, Carl; Nordström, Fredrik; Nilsson, Terese; Nyholm, Tufve; Jonsson, Joakim; Gunnlaugsson, Adalsteinn; Olsson, Lars E.

    2015-10-15

    Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations, directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2 weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to assure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean absolute error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. Average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning. The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties

  12. A Novel Tracking Algorithm via Feature Points Matching

    PubMed Central

    Luo, Nan; Sun, Quansen; Chen, Qiang; Ji, Zexuan; Xia, Deshen

    2015-01-01

    Visual target tracking is a primary task in many computer vision applications and has been widely studied in recent years. Among the tracking methods, the mean shift algorithm has attracted extraordinary interest and has been well developed in the past decade due to its excellent performance. However, it is still challenging for color-histogram-based algorithms to deal with complex target tracking. Therefore, algorithms based on other distinguishing features are highly desirable. In this paper, we propose a novel target tracking algorithm based on mean shift theory, in which a new type of image feature is introduced and utilized to find the corresponding region between neighboring frames. The target histogram is created by clustering the features obtained in the extraction strategy. Then, the mean shift process is adopted to calculate the target location iteratively. Experimental results demonstrate that the proposed algorithm can deal with challenging tracking situations such as partial occlusion, illumination change, scale variation, object rotation and complex background clutter. Meanwhile, it outperforms several state-of-the-art methods. PMID:25617769

  13. Formulation and error analysis for a generalized image point correspondence algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  14. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
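
    A dense, single-level sketch of the exact Schur-complement solve that such a preconditioner approximates: after the 2 x 2 block reordering into interior and interface unknowns, the interface system is solved first and the subdomain interiors are recovered by back-substitution. The interior/interface index sets are assumed to come from a mesh partitioner, and the paper replaces the exact S below with cheaper approximations.

```python
import numpy as np

def schur_solve(A, b, interior, interface):
    """Sketch of a non-overlapping Schur-complement solve on a dense matrix.
    interior / interface are integer index arrays from a (hypothetical) partitioner."""
    A_II = A[np.ix_(interior, interior)]
    A_IG = A[np.ix_(interior, interface)]
    A_GI = A[np.ix_(interface, interior)]
    A_GG = A[np.ix_(interface, interface)]
    # The Schur complement couples the subdomains through the interface unknowns.
    S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)
    g = b[interface] - A_GI @ np.linalg.solve(A_II, b[interior])
    x_G = np.linalg.solve(S, g)                              # interface solve
    x_I = np.linalg.solve(A_II, b[interior] - A_IG @ x_G)    # back-substitute interiors
    x = np.empty_like(b)
    x[interior], x[interface] = x_I, x_G
    return x
```

    In the preconditioning setting described in the abstract, S is never formed exactly; approximate Schur complements play the role of an algebraic coarse-space operator.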

  15. Singular value decomposition-based reconstruction algorithm for seismic traveltime tomography.

    PubMed

    Song, L P; Zhang, S Y

    1999-01-01

    A reconstruction method is given for seismic transmission traveltime tomography. The method is implemented via a combination of singular value decomposition, appropriate weighting matrices, and a variable regularization parameter. The problem is scaled through the weighting matrices so that the singular spectrum is normalized. Matching the normalized singular values, the regularization parameter varies within the interval [0, 1] and increases linearly with the singular value index from a small initial value, rather than being held fixed, in order to suppress the contributions of the components associated with smaller singular values. The experimental results show that the proposed method is superior to ordinary singular value decomposition (SVD) methods such as truncated SVD and Tikhonov regularization. PMID:18267533
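
    A sketch of the kind of filtered SVD solve the abstract describes: the singular spectrum is normalized and a regularization parameter that grows linearly with the singular-value index damps the small-singular-value components. The weighting matrices of the paper are omitted and the parameter range is illustrative.

```python
import numpy as np

def svd_regularized_solve(G, d, lam_min=0.05, lam_max=1.0):
    """Solve G m ~ d with Tikhonov-like filter factors whose regularization
    parameter increases linearly with the singular-value index (values illustrative)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_norm = s / s[0]                               # normalized singular spectrum
    lam = np.linspace(lam_min, lam_max, s.size)     # index-dependent regularization
    filt = s_norm**2 / (s_norm**2 + lam**2)         # filter factors in [0, 1]
    return Vt.T @ (filt / s * (U.T @ d))            # filtered pseudo-inverse solution
```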

  16. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  17. An automatic registration algorithm for the scattered point clouds based on the curvature feature

    NASA Astrophysics Data System (ADS)

    He, Bingwei; Lin, Zeming; Li, Y. F.

    2013-03-01

    Object modeling by the registration of multiple range images has important applications in reverse engineering and computer vision. In order to register multi-view scattered point clouds, a novel curvature-based automatic registration algorithm is proposed in this paper, which can solve the registration problem for partially overlapping point clouds. For two sets of scattered point clouds, the curvature of each point is estimated using a quadratic surface fitting method. The feature points that have the maximum local curvature variations are then extracted. The initial matching points are acquired by computing the Hausdorff distance of the curvature, and the circumference shape feature of the local surface is then used to obtain the accurate matching points from the initial matching points. Finally, the rotation and translation matrices are estimated using quaternions, and an iterative algorithm is used to improve the registration accuracy. Experimental results show that the algorithm is effective.
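
    Once matching point pairs are available, the rotation and translation can be recovered in closed form with the quaternion (Horn) method mentioned in the abstract. The sketch below shows only that closed-form step; the curvature-based matching itself is omitted.

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Estimate R, t with Q ~ R @ P + t from matched point sets P, Q of shape (N, 3),
    using the quaternion (Horn) method."""
    p_c, q_c = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - p_c, Q - q_c
    S = X.T @ Y                                  # 3x3 cross-covariance
    # Symmetric 4x4 matrix whose top eigenvector is the optimal unit quaternion.
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    K = np.zeros((4, 4))
    K[0, 0] = np.trace(S)
    K[0, 1:] = K[1:, 0] = delta
    K[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w_eig, v = np.linalg.eigh(K)
    q = v[:, np.argmax(w_eig)]                   # quaternion (w, x, y, z)
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    t = q_c - R @ p_c
    return R, t
```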

  18. Point group identification algorithm in dynamic response analysis of nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Li, Tao; Chen, Jian-bing; Li, Jie

    2016-03-01

    The point group identification (PGI) algorithm is proposed to determine representative point sets in the response analysis of nonlinear stochastic dynamic systems. The PGI algorithm is employed to identify point groups and their feature points in an initial point set by combining subspace clustering analysis and graph theory. Further, the representative point set of the random-variate space is determined according to the minimum generalized F-discrepancy. The dynamic responses obtained by incorporating the PGI algorithm into the probability density evolution method (PDEM) are compared with those obtained by the Monte Carlo simulation method. The investigations indicate that the proposed method can reduce the number of representative points, lower the generalized F-discrepancy of the representative point set, and also ensure the accuracy of stochastic structural dynamic analysis.

  19. A well-separated pairs decomposition algorithm for k-d trees implemented on multi-core architectures

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Reid, Ivan D.; Hobson, Peter R.

    2014-06-01

    Variations of k-d trees represent a fundamental data structure used in computational geometry, with numerous applications in science: for example, particle track fitting in the software of the LHC experiments, and simulations of N-body systems in the study of the dynamics of interacting galaxies, particle beam physics, and molecular dynamics in biochemistry. The many-body tree methods devised by Barnes and Hut in the 1980s and the Fast Multipole Method introduced in 1987 by Greengard and Rokhlin use variants of k-d trees to reduce the computation-time upper bound from O(n^2) to O(n log n) and even O(n). We present an algorithm that uses the principle of well-separated pairs decomposition to always produce compressed trees in O(n log n) work. We present and evaluate parallel implementations of the algorithm that can take advantage of multi-core architectures.
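
    As a building block for a well-separated pairs decomposition, a median-split k-d tree can be built in O(n log n) expected work. The sketch below shows only that tree construction; the WSPD itself, the compressed-tree guarantee, and the multi-core parallelization of the paper are not reproduced.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class KDNode:
    point: np.ndarray
    axis: int
    left: Optional["KDNode"] = None
    right: Optional["KDNode"] = None

def build_kdtree(points: np.ndarray, depth: int = 0) -> Optional[KDNode]:
    """Median-split k-d tree over an (n, k) array of points, cycling the split axis."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]   # sort along the current axis
    mid = len(points) // 2                       # median becomes the node point
    return KDNode(
        point=points[mid],
        axis=axis,
        left=build_kdtree(points[:mid], depth + 1),
        right=build_kdtree(points[mid + 1:], depth + 1),
    )
```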

  20. An algorithm for point cluster generalization based on the Voronoi diagram

    NASA Astrophysics Data System (ADS)

    Yan, Haowen; Weibel, Robert

    2008-08-01

    This paper presents an algorithm for point cluster generalization. Four types of information, i.e. statistical, thematic, topological, and metric information, are considered, and measures are selected to describe each type of information quantitatively in the algorithm: the number of points for statistical information, the importance value for thematic information, the Voronoi neighbors for topological information, and the distribution range and relative local density for metric information. Based on these measures, an algorithm for point cluster generalization is developed. First, the point clusters are triangulated and a border polygon of the point clusters is obtained. Using the border polygon, some pseudo points are added to the original point clusters to form a new point set, and a range polygon that encloses all original points is constructed. Second, the Voronoi polygons of the new point set are computed in order to obtain the so-called relative local density of each point. The selection probability of each point is then computed from its relative local density and importance value, and points to be deleted are marked as 'deleted' according to their selection probabilities and Voronoi neighboring relations. Third, if the number of retained points does not satisfy the number computed by the Radical Law, the points marked as 'deleted' are physically deleted to form a new point set and the second step is repeated; otherwise, the pseudo points and the points marked as 'deleted' are physically deleted and the generalized point clusters are obtained. Owing to the use of the Voronoi diagram, the algorithm is parameter-free and fully automatic. As our experiments show, it can be used in the generalization of point features arranged in clusters, such as thematic dot maps and control points on cartographic maps.
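
    The retention count in the third step comes from Töpfer's Radical Law. The small sketch below shows that count and a simple probability-based marking step; the Voronoi-based relative local density and the exact selection rule of the paper are assumed inputs, not reproduced.

```python
import numpy as np

def radical_law_count(n_source, source_scale_denom, target_scale_denom):
    """Topfer's Radical Law: number of points to retain when generalizing from
    scale 1:source_scale_denom to scale 1:target_scale_denom."""
    return int(round(n_source * np.sqrt(source_scale_denom / target_scale_denom)))

def mark_for_deletion(importance, local_density, n_keep):
    """Sketch: selection probability taken as importance x relative local density
    (an assumed combination); the lowest-probability points are marked 'deleted'."""
    prob = np.asarray(importance, float) * np.asarray(local_density, float)
    order = np.argsort(prob)                     # lowest selection probability first
    deleted = np.zeros(prob.size, dtype=bool)
    deleted[order[:max(prob.size - n_keep, 0)]] = True
    return deleted

print(radical_law_count(1000, 25000, 100000))    # 1000 * sqrt(1/4) = 500
```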

  1. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently to data with explicit reflection symmetries.

  2. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-03-01

    A laser point cloud contains only intensity information, so color information for visual interpretation must be obtained from another sensor. Cameras can provide texture, color, and other information about the corresponding objects. Points colored with the corresponding pixels of digital images can be used to generate a color point cloud, which aids the visualization, classification, and modeling of point clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS); the principles and processes for generating a color point cloud therefore differ between systems. The most prominent feature of panoramic images is the 360-degree field of view in the horizontal direction, which captures as much of the scene around the camera as possible. In this paper, we introduce a method to generate a color point cloud from a panoramic image and a laser point cloud, and derive the equations relating points in panoramic images to points in laser point clouds. The fusion of the panoramic image and the laser point cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae in this paper are correct.
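
    The collinearity condition amounts to mapping each laser point onto the panoramic sphere and then to pixel coordinates. A minimal sketch for an ideal, axis-aligned equirectangular panorama follows; a real MMS would also apply the multi-camera rig's exterior orientation, which is omitted here.

```python
import numpy as np

def project_to_panorama(point, cam_center, width, height):
    """Map a 3D laser point to (row, col) pixel coordinates of an equirectangular
    panorama centered at cam_center (camera frame assumed axis-aligned)."""
    d = np.asarray(point, float) - np.asarray(cam_center, float)
    x, y, z = d / np.linalg.norm(d)              # unit ray: the collinearity condition
    azimuth = np.arctan2(y, x)                   # [-pi, pi], horizontal direction
    elevation = np.arcsin(z)                     # [-pi/2, pi/2], vertical direction
    col = (azimuth + np.pi) / (2 * np.pi) * width
    row = (np.pi / 2 - elevation) / np.pi * height
    return row, col

print(project_to_panorama((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 8000, 4000))
```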

  3. An efficient, robust, domain-decomposition algorithm for particle Monte Carlo

    SciTech Connect

    Brunner, Thomas A.; Brantley, Patrick S.

    2009-06-01

    A previously described algorithm [T.A. Brunner, T.J. Urbatsch, T.M. Evans, N.A. Gentile, Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo, Journal of Computational Physics 212 (2) (2006) 527-539] for doing domain-decomposed particle Monte Carlo calculations in the context of thermal radiation transport has been improved. It has been extended to support cases where the number of particles in a time step is unknown at the beginning of the time step. This situation arises when various physical processes, such as neutron transport, can generate additional particles during the time step, or when particle splitting is used for variance reduction. Additionally, several race conditions that existed in the previous algorithm and could cause code hangs have been fixed. The new algorithm is believed to be robust against all race conditions. The parallel scalability of the new algorithm remains excellent.

  4. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented along the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges that do not lie along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges, excluding simple edges and leaving interesting points. This modification speeds up the extraction process by approximately five times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under consideration, is then applied to the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
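
    A sketch of the Gradient Angle Variance idea: Prewitt gradients give a direction at each pixel, and the variance of that direction inside a window around a candidate point measures how distinctive the point is. The 7x7 window matches the abstract; using a plain (non-circular) variance is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import prewitt

def gradient_angle_variance(image, half=3):
    """Variance of the gradient direction in a (2*half+1)^2 window around each pixel."""
    img = image.astype(float)
    gx = prewitt(img, axis=1)                    # horizontal Prewitt derivative
    gy = prewitt(img, axis=0)                    # vertical Prewitt derivative
    angle = np.arctan2(gy, gx)
    h, w = img.shape
    gav = np.zeros((h, w), dtype=float)
    for r in range(half, h - half):
        for c in range(half, w - half):
            win = angle[r - half:r + half + 1, c - half:c + half + 1]
            gav[r, c] = np.var(win)
    return gav
```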

  5. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  7. A path-following interior-point algorithm for linear and quadratic problems

    SciTech Connect

    Wright, S.J.

    1993-12-01

    We describe an algorithm for the monotone linear complementarity problem that converges for many positive, not necessarily feasible, starting points and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.

  8. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700

  10. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms

    NASA Astrophysics Data System (ADS)

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration.
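
    The forward-linear-prediction stage applied to the mixed IMFs can be sketched as an ordinary least-squares predictor of each sample from its p predecessors. The EMD decomposition and the IMF selection steps of the paper are omitted; the prediction order is an illustrative choice.

```python
import numpy as np

def flp_denoise(x, order=4):
    """Fit a[k] so that x[n] ~ sum_k a[k] * x[n-k] (least squares), then replace
    each sample by its forward prediction; the first `order` samples are kept."""
    x = np.asarray(x, float)
    N = len(x)
    # Regression matrix of past samples: column k holds x[n - k - 1].
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    out = x.copy()
    out[order:] = X @ a
    return out
```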

  12. Improved scaling of time-evolving block-decimation algorithm through reduced-rank randomized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Tamascelli, D.; Rosenbach, R.; Plenio, M. B.

    2015-06-01

    When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, make use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency with some real-world examples to which TEBD can be successfully applied, and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
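
    A generic randomized reduced-rank SVD of the kind the abstract refers to: a random range finder, optional power iterations, and a small deterministic SVD. This is a textbook sketch, not the specific RRSVD routine used in the paper.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, seed=None):
    """Reduced-rank randomized SVD (Halko-style range finder + small SVD)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))   # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):                               # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                                # orthonormal range basis
    B = Q.T @ A                                           # small (rank+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```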

  13. Parameter Space of Fixed Points of the Damped Driven Pendulum Susceptible to Control of Chaos Algorithms

    NASA Astrophysics Data System (ADS)

    Dittmore, Andrew; Trail, Collin; Olsen, Thomas; Wiener, Richard J.

    2003-11-01

    We have previously demonstrated the experimental control of chaos in a Modified Taylor-Couette system with hourglass geometry (Richard J. Wiener et al., Phys. Rev. Lett. 83, 2340 (1999)). Identifying fixed points susceptible to algorithms for the control of chaos is key. We seek to learn about this process in the accessible numerical model of the damped, driven pendulum. Following Baker (Gregory L. Baker, Am. J. Phys. 63, 832 (1995)), we seek points susceptible to the OGY (E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990)) algorithm. We automate the search for fixed points that are candidates for control. We present comparisons of the space of candidate fixed points with the bifurcation diagrams and Poincare sections of the system. We demonstrate control at fixed points which do not appear on the attractor. We also show that the control algorithm may be employed to shift the system between non-communicating branches of the attractor.

  14. Parallelized Characteristic Basis Finite Element Method (CBFEM-MPI)-A non-iterative domain decomposition algorithm for electromagnetic scattering problems

    SciTech Connect

    Ozgun, Ozlem; Mittra, Raj; Kuzuoglu, Mustafa

    2009-04-01

    In this paper, we introduce a parallelized version of a novel, non-iterative domain decomposition algorithm, called Characteristic Basis Finite Element Method (CBFEM-MPI), for efficient solution of large-scale electromagnetic scattering problems, by utilizing a set of specially defined characteristic basis functions (CBFs). This approach is based on the decomposition of the computational domain into a number of non-overlapping subdomains wherein the CBFs are generated by employing a novel procedure, which differs from all those that have been used in the past. Clearly, the CBFs are obtained by calculating the fields radiated by a finite number of dipole-type sources, which are placed hypothetically along the boundary of the conducting object. The major advantages of the proposed technique are twofold: (i) it provides a substantial reduction in the matrix size, and thus, makes use of direct solvers efficiently and (ii) it enables the utilization of parallel processing techniques that considerably decrease the overall computation time. We illustrate the application of the proposed approach via several 3D electromagnetic scattering problems.

  15. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogenous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are in the order of a half second on a desktop computer. The precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature. PMID:12502302

  16. Double-patterning decomposition, design compliance, and verification algorithms at 32nm hp

    NASA Astrophysics Data System (ADS)

    Tritchkov, Alexander; Glotov, Petr; Komirenko, Sergiy; Sahouria, Emile; Torres, Andres; Seoud, Ahmed; Wiaux, Vincent

    2008-10-01

    Double patterning (DP) technology is one of the main candidates for RET of critical layers at 32nm hp. DP technology is a strong RET technique that must be considered throughout the IC design and post-tapeout flows. We present a complete DP technology strategy including a DRC/DFM component, physical synthesis support and mask synthesis. In particular, the methodology contains: - DRC-like layout DP compliance and design verification functions; - a parameterization scheme that codifies manufacturing knowledge and capability; - judicious use of physical effect simulation to improve double-patterning quality; - an efficient, high-capacity mask synthesis function for post-tapeout processing; - a verification function to determine the correctness and quality of a DP solution. Double patterning technology requires decomposition of the design to relax the pitch, and effectively allows processing with k1 factors smaller than the theoretical Rayleigh limit of 0.25. The traditional DP process, Litho-Etch-Litho-Etch (LELE) [1], requires an additional develop and etch step, which eliminates the resolution degradation that occurs when multiple exposures are processed in the same resist layer. The theoretical k1 for a double-patterning technology applied to a 32nm half-pitch design using a 1.35NA 193nm imaging system is 0.44, whereas the k1 for single patterning of the same design would be 0.22 [2], which is sub-resolution. This paper demonstrates the methods developed at Mentor Graphics for double-patterning design compliance and decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. It also demonstrates the implementation of a verification solution in the chip design flow and post-tapeout flow.

  17. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and the spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are only marginal edge effects in the difference image, suggesting that the high-frequency details are well preserved. In addition, when the search window size increases from 11×11 to 19×19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.

  18. A fast algorithm for exact sequence search in biological sequences using polyphase decomposition

    PubMed Central

    Srikantha, Abhilash; Bopardikar, Ajit S.; Kaipa, Kalyan Kumar; Venkataraman, Parthasarathy; Lee, Kyusang; Ahn, TaeJin; Narayanan, Rangavittal

    2010-01-01

    Motivation: Exact sequence search allows a user to search for a specific DNA subsequence in a larger DNA sequence or database. It serves as a vital building block in many areas such as Pharmacogenetics, Phylogenetics and Personal Genomics. As sequencing of genomic data becomes increasingly affordable, the amount of sequence data that must be processed will also increase exponentially. In this context, fast sequence search algorithms will play an important role in exploiting the information contained in the newly sequenced data. Many existing algorithms do not scale up well for large sequences or databases because of their high computational costs. This article describes an efficient algorithm for performing fast searches on large DNA sequences. It makes use of hash tables of Q-grams that are constructed after downsampling the database, to enable efficient search and memory use. The time complexity of pattern search is reduced using beam pruning techniques. Theoretical complexity calculations and performance figures are presented to indicate the potential of the proposed algorithm. Contact: s.abhilash@samsung.com; ajit.b@samsung.com PMID:20823301
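
    A stripped-down sketch of exact search with a Q-gram hash index: index every Q-gram of the database once, then look up the pattern's leading Q-gram and verify the candidate positions. The downsampling (polyphase decomposition) and beam-pruning refinements of the paper are omitted.

```python
def build_qgram_index(text, q=4):
    """Hash index mapping each Q-gram of the database string to its start positions."""
    index = {}
    for i in range(len(text) - q + 1):
        index.setdefault(text[i:i + q], []).append(i)
    return index

def qgram_search(pattern, text, index, q=4):
    """Exact search: look up the pattern's first Q-gram, then verify each candidate."""
    if len(pattern) < q:   # fall back to a direct scan for very short patterns
        return [i for i in range(len(text) - len(pattern) + 1)
                if text[i:i + len(pattern)] == pattern]
    return [start for start in index.get(pattern[:q], [])
            if text.startswith(pattern, start)]

text = "ACGTGATTACAGGT"
idx = build_qgram_index(text, q=4)
print(qgram_search("GATTACA", text, idx, q=4))   # -> [4]
```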

  19. Filtering of LIDAR Point Cloud Using a Strip Based Algorithm in Residential Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Arefi, H.; Gharib, Z.

    2014-10-01

    Several algorithms have been developed to automatically detect the bare earth in LIDAR point clouds, a process referred to as filtering. Previous experimental studies of filtering algorithms determined that algorithms tend to perform well in flat and uncomplicated landscapes, while significant differences in filtering accuracy appear in landscapes containing steep slopes and discontinuities. A solution to this problem is the segmentation of ALS point clouds. In this paper a new segmentation approach has been developed. The algorithm starts by slicing the point cloud into contiguous, parallel profiles in different directions. The points in each profile are then segmented into polylines based on distance and elevation proximity. The polylines are then linked together through their common points to obtain surface segments. At the final stage, the data are partitioned into windows in which the strips are exploited to analyze the points with regard to the height differences across them. In this way the whole dataset can be fully segmented into ground and non-ground measurements, sequentially via the strips, which makes the algorithm fast to implement.

  20. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  1. A point-cloud-based multiview stereo algorithm for free-viewpoint video.

    PubMed

    Liu, Yebin; Dai, Qionghai; Xu, Wenli

    2010-01-01

    This paper presents a robust multiview stereo (MVS) algorithm for free-viewpoint video. Our MVS scheme is totally point-cloud-based and consists of three stages: point cloud extraction, merging, and meshing. To guarantee reconstruction accuracy, point clouds are first extracted according to a stereo matching metric which is robust to noise, occlusion, and lack of texture. Visual hull information, frontier points, and implicit points are then detected and fused with point fidelity information in the merging and meshing steps. All aspects of our method are designed to counteract potential challenges in MVS data sets for accurate and complete model reconstruction. Experimental results demonstrate that our technique produces the most competitive performance among current algorithms under sparse viewpoint setups according to both static and motion MVS data sets. PMID:20224136

  2. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  3. A new constrained fixed-point algorithm for ordering independent components

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Guo, Chonghui; Shi, Zhenwei; Feng, Enmin

    2008-10-01

    Independent component analysis (ICA) aims to recover a set of unknown mutually independent components (ICs) from their observed mixtures without knowledge of the mixing coefficients. In the classical ICA model there exists an indeterminacy of the ICs with respect to permutation and dilation. Constrained ICA is one of the methods for solving this problem, through introducing constraints into the classical ICA model. In this paper we first present a new constrained ICA model which is composed of three parts: a maximum likelihood criterion as the objective function, statistical measures as inequality constraints, and the normalization of the demixing matrix as equality constraints. Next, we incorporate the new fixed-point (newFP) algorithm into this constrained ICA model to construct a new constrained fixed-point algorithm. Computer simulations on synthesized signals and speech signals demonstrate that this combination can both eliminate the ICs' indeterminacy to a certain extent and provide better performance. Moreover, comparison with the existing algorithm verifies the efficiency of our new algorithm and shows that it is simpler to implement, owing to the advantage of not using a learning rate. Finally, the new algorithm is also applied to real-world fetal ECG data; the experimental results further indicate the efficiency of the new constrained fixed-point algorithm.

  4. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision, often using hierarchical data structures, e.g. BSP trees, quadtrees, octrees, k-d trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on a space subdivision performed in a preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
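
    For the E2 case, the classical O(log N) test the abstract contrasts against can be written as a binary search over the triangle fan rooted at one vertex. The O(1) algorithm of the paper replaces this search with a precomputed non-orthogonal space-subdivision lookup, which is not reproduced here.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive when b is left of the ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(pt, poly):
    """O(log N) inclusion test for a convex polygon with vertices in CCW order."""
    n = len(poly)
    # Reject points outside the angular wedge spanned at poly[0].
    if cross(poly[0], poly[1], pt) < 0 or cross(poly[0], poly[n - 1], pt) > 0:
        return False
    lo, hi = 1, n - 1
    while hi - lo > 1:                           # locate the fan triangle containing pt
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], pt) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], pt) >= 0

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]   # CCW vertices
print(point_in_convex_polygon((1.0, 1.0), square),          # True
      point_in_convex_polygon((3.0, 1.0), square))          # False
```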

  5. Seismic small-scale discontinuity sparsity-constraint inversion method using a penalty decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jingtao; Peng, Suping; Du, Wenfeng

    2016-02-01

    We consider a sparsity-constrained inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these seismic small-scale discontinuities are hard to identify using currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smoothness features, we propose an effective L2-L0 norm model to improve their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast-converging penalty decomposition method is employed. The proposed method can achieve a significant improvement in enhancing seismic small-scale discontinuities. A numerical experiment and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.

  6. Iterative stability analysis of spatial domain decomposition based on block Jacobi algorithm for the diamond-difference scheme

    NASA Astrophysics Data System (ADS)

    Anistratov, Dmitriy Y.; Azmy, Yousry Y.

    2015-09-01

    We study convergence of the integral transport matrix method (ITMM) based on a parallel block Jacobi (PBJ) iterative strategy for solving particle transport problems. The ITMM is a spatial domain decomposition method proposed for massively parallel computations. A Fourier analysis of the PBJ-based iterations applied to the SN diamond-difference equations in 1D slab and 2D Cartesian geometries is performed. It is carried out for infinite-medium problems with homogeneous material properties. To analyze the performance of the ITMM with the PBJ algorithm and to evaluate its potential scalability, we consider the limiting case of one spatial cell per subdomain. The analysis shows that in this limit the spectral radius of the iteration method is one, regardless of the scattering ratio and the optical thickness of the spatial cells. This implies a lack of convergence in an infinite medium. Numerical results for finite-medium problems are presented. They demonstrate the effects of the finite size of the spatial domain on the performance of the iteration algorithm, as well as its asymptotic behavior as the extent of the spatial domain increases. These numerical experiments also show that for finite domains, iterative convergence to a finite criterion is achievable in a number of iterations proportional to the sum of the number of cells in each dimension.
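
    The basic convergence question for a block Jacobi splitting can be examined numerically on a generic linear system. The sketch below is a schematic stand-in (a simple 1D diffusion-like operator, not the ITMM transport operator): it computes the spectral radius of the block Jacobi iteration matrix, which must be below one for the sweep to converge.

      import numpy as np

      def block_jacobi_iteration_matrix(A, block_size):
          """Return M = I - D_B^{-1} A, where D_B is the block-diagonal part of A
          with blocks of the given size (one block per 'subdomain')."""
          n = A.shape[0]
          D = np.zeros_like(A)
          for start in range(0, n, block_size):
              end = min(start + block_size, n)
              D[start:end, start:end] = A[start:end, start:end]
          return np.eye(n) - np.linalg.solve(D, A)

      # Toy operator; spectral radius < 1 means the block Jacobi sweep converges.
      n, block = 64, 8
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      M = block_jacobi_iteration_matrix(A, block)
      rho = max(abs(np.linalg.eigvals(M)))
      print(f"spectral radius with {block}-cell blocks: {rho:.4f}")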

  7. A Gabor subband decomposition ICA and MRF hybrid algorithm for infrared image reconstruction from subpixel shifted sequences

    NASA Astrophysics Data System (ADS)

    Yi-nan, Chen; Wei-qi, Jin; Ling-Xue, Wang; Lei, Zhao; Hong-sheng, Yu

    2009-03-01

    Blind image reconstruction, posed as a blind source separation problem, has recently been addressed with independent component analysis (ICA). Based on ICA theory, in this paper a high resolution image is reconstructed from low resolution, subpixel-shifted sequences captured by an infrared microscan imaging system. The algorithm has the attractive feature that neither prior knowledge of the blur kernel nor the values of the subpixel misregistrations between the input channels is required. The statistical independence in the image domain is improved by multiscale Gabor subband decompositions, which are designed to cover the whole spatial frequency range while avoiding overlap between the subbands. Mutual information is employed to locate the subband with the least dependent components. Within a MAP estimation framework, we combine a super-Gaussian prior with a Markov random field to form a hybrid image distribution. This strategy helps to estimate a separating matrix that extracts sources with the desired image properties, that is, sharp yet locally correlated. The proposed algorithm is capable of recovering high resolution image sources that are not strictly independent, and its viability is demonstrated by computer simulations and real experiments.

  8. Dominant feature selection for the fault diagnosis of rotary machines using modified genetic algorithm and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; de Silva, Clarence W.

    2015-05-01

    This paper develops a novel dominant feature selection method using a genetic algorithm with a dynamic searching strategy. It is applied in the search for the most representative features in rotary mechanical fault diagnosis, and is shown to improve the classification performance with fewer features. First, empirical mode decomposition (EMD) is employed to decompose a vibration signal into intrinsic mode functions (IMFs), which represent the signal characteristics as simple oscillatory modes. Then, a modified genetic algorithm with variable-range encoding and a dynamic searching strategy is used to establish relationships between optimized feature subsets and the classification performance. Next, a statistical model based on the receiver operating characteristic (ROC) is developed to select dominant features. Finally, a support vector machine (SVM) is used to classify different fault patterns. Two real-world problems, rotor-unbalance vibration and bearing corrosion, are employed to evaluate the proposed feature selection scheme and fault diagnosis system. Statistical results obtained by analyzing the two problems, and comparative studies with five well-known feature selection techniques, demonstrate that the method developed in this paper can achieve improvements in identification accuracy with lower feature dimensionality. In addition, the results indicate that the proposed method is a promising tool for selecting dominant features in rotary machinery fault diagnosis.
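
    A heavily simplified sketch of the ROC-driven dominant-feature idea follows, using per-feature AUC ranking and an off-the-shelf SVM on synthetic data instead of the paper's modified genetic algorithm and real IMF features; the feature matrix, names, and thresholds are illustrative assumptions only.

      import numpy as np
      from sklearn.metrics import roc_auc_score
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def rank_features_by_auc(X, y):
          """Score each candidate feature by how well it alone separates the two fault classes."""
          scores = []
          for j in range(X.shape[1]):
              auc = roc_auc_score(y, X[:, j])
              scores.append(max(auc, 1.0 - auc))      # direction-agnostic separability
          return np.argsort(scores)[::-1]             # best features first

      # X: rows = vibration records, columns = statistical features of the IMFs
      # (e.g. energy, kurtosis per intrinsic mode function); y: 0 = healthy, 1 = faulty.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((120, 12)); y = (rng.random(120) > 0.5).astype(int)
      X[y == 1, 0] += 2.0                             # make feature 0 informative in this toy data
      dominant = rank_features_by_auc(X, y)[:3]
      acc = cross_val_score(SVC(kernel="rbf"), X[:, dominant], y, cv=5).mean()
      print("selected features:", dominant, "cv accuracy: %.2f" % acc)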

  9. Performance Evaluation of Different Ground Filtering Algorithms for UAV-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Serifoglu, C.; Gungor, O.; Yilmaz, V.

    2016-06-01

    Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, various ground filtering algorithms have been reported in the literature to filter LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from the aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms: Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance. The MLS and ETEW algorithms were found to be the least successful. It was concluded that point clouds generated from UAV images can be a good alternative to LiDAR data.
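
    The elevation-threshold family of filters can be illustrated with a very small sketch (a simplified single-pass variant of our own, not a re-implementation of ETEW or the other algorithms evaluated above): points are binned into a coarse grid and only those close to the lowest elevation in their cell are kept as ground.

      import numpy as np

      def simple_elevation_filter(points, cell_size=5.0, dz=0.3):
          """points: (N, 3) array of x, y, z. Returns a boolean mask of likely ground points."""
          xy = points[:, :2]
          ij = np.floor((xy - xy.min(axis=0)) / cell_size).astype(int)   # grid cell index per point
          keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]              # flatten the 2D index
          ground = np.zeros(len(points), dtype=bool)
          for k in np.unique(keys):
              in_cell = keys == k
              z_min = points[in_cell, 2].min()
              ground[in_cell] = points[in_cell, 2] <= z_min + dz         # keep near-lowest points
          return ground

      # Usage: mask = simple_elevation_filter(cloud); ground_points = cloud[mask]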

  10. Registration of range data using a hybrid simulated annealing and iterative closest point algorithm

    SciTech Connect

    LUCK,JASON; LITTLE,CHARLES Q.; HOFF,WILLIAM

    2000-04-17

    The need to register data arises in applications such as world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.
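
    For reference, a bare-bones point-to-point ICP iteration looks roughly like the following (standard nearest-neighbor correspondences and an SVD-based rigid fit; the simulated annealing stage and the robust error function of the hybrid algorithm are not shown):

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:               # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, dst_c - R @ src_c

      def icp(source, target, n_iter=30):
          """Iteratively align source to target; returns the transformed source points."""
          tree = cKDTree(target)
          src = source.copy()
          for _ in range(n_iter):
              _, idx = tree.query(src)           # closest target point for each source point
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t
          return src

    For example, aligned = icp(scan_b, scan_a) would return the points of scan_b expressed in scan_a's frame.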

  11. An optimized structure on FPGA of key point description in SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Chenyu; Peng, Jinlong; Zhu, En; Zou, Yuxin

    2015-12-01

    The SIFT algorithm is one of the most significant and effective algorithms for describing image features in the field of image matching. Implementing the SIFT algorithm in a hardware environment is both valuable and challenging. In this paper, we mainly discuss the realization of the key point description stage of the SIFT algorithm, along with the matching process. For key point description, we propose a new method of generating histograms that avoids rotating adjacent regions while preserving rotational invariance. For matching, we replace the conventional Euclidean distance with the Hamming distance. The experimental results show that the proposed structure is real-time, accurate, and efficient. Future work is still needed to improve its performance under harsher conditions.

  12. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.

    PubMed

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. The Informatics for Integrated Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  13. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. The Informatics for Integrated Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  14. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a Digital Elevation Model of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to distinguish terrain points from non-terrain points, followed by a procedure that interpolates the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and large computing resources, and has therefore been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was utilized as the original data to generate a DEM using a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.

  15. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified airborne laser scanning (ALS) point cloud. For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider only ALS points representing the last echo. For these points, the RGB values, intensity, the normal vectors, and their mean values and standard deviations are provided. Moreover, local and global height variations are taken into account as components of the feature vector. The feature vectors are calculated on the basis of a 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we obtained two subsets of ALS points, one of which represents points belonging to the road network. After the classification evaluation we achieved an overall classification accuracy of approximately 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
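
    The supervised classification stage can be sketched with a generic random forest setup; the feature matrix below is synthetic and stands in for the RGB/intensity/normal/height-variation features described above, and the hyperparameters are illustrative assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # Each row is one last-echo ALS point; columns are per-point features such as
      # intensity, normal-vector components, and local/global height variation (toy data here).
      rng = np.random.default_rng(0)
      features = rng.standard_normal((5000, 8))
      labels = (features[:, 0] + 0.5 * features[:, 3] > 0.5).astype(int)   # 1 = road, 0 = other

      X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=5, random_state=0)
      clf.fit(X_train, y_train)
      print("overall accuracy: %.3f" % clf.score(X_test, y_test))
      road_mask = clf.predict(features) == 1          # points assigned to the road class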

  16. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning vector data is carefully studied; the selected policy, which equalizes the algorithm complexity, is proved to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we achieve high efficiency using MAS and a policy of algorithm complexity equalization. PMID:26656598
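
    The deepest point of a lake polygon is essentially its pole of inaccessibility. A brute-force approximation is sketched below (a plain grid search of our own, not the Voronoi- or MAS-based methods proposed in the paper), which is mainly useful for checking results on small polygons.

      import numpy as np

      def point_in_polygon(pt, poly):
          """Ray-casting test for a simple polygon given as an (N, 2) vertex array."""
          x, y = pt
          inside = False
          n = len(poly)
          for i in range(n):
              x1, y1 = poly[i]
              x2, y2 = poly[(i + 1) % n]
              if (y1 > y) != (y2 > y):
                  x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                  if x < x_cross:
                      inside = not inside
          return inside

      def dist_to_boundary(pt, poly):
          """Minimum distance from pt to any polygon edge."""
          d = np.inf
          n = len(poly)
          for i in range(n):
              a, b = np.asarray(poly[i]), np.asarray(poly[(i + 1) % n])
              ab, ap = b - a, np.asarray(pt) - a
              t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
              d = min(d, np.linalg.norm(ap - t * ab))
          return d

      def deepest_point(poly, resolution=50):
          """Grid-search the interior point farthest from the shoreline (largest inner circle center)."""
          poly = np.asarray(poly, dtype=float)
          xs = np.linspace(poly[:, 0].min(), poly[:, 0].max(), resolution)
          ys = np.linspace(poly[:, 1].min(), poly[:, 1].max(), resolution)
          best, best_d = None, -1.0
          for x in xs:
              for y in ys:
                  if not point_in_polygon((x, y), poly):
                      continue
                  d = dist_to_boundary((x, y), poly)
                  if d > best_d:
                      best, best_d = (x, y), d
          return best, best_d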

  17. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration

    PubMed Central

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning vector data is carefully studied; the selected policy, which equalizes the algorithm complexity, is proved to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we achieve high efficiency using MAS and a policy of algorithm complexity equalization. PMID:26656598

  18. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.

  19. Prostate tissue decomposition via DECT using the model based iterative image reconstruction algorithm DIRA

    NASA Astrophysics Data System (ADS)

    Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun

    2014-03-01

    Better knowledge of the elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model-based iterative image reconstruction algorithm, DIRA, has been developed by the authors to automatically decompose patient tissues into two or three base components via dual-energy computed tomography. The performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with an accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between the true and approximated values of the linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate the surrounding soft tissues.

  20. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
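
    For orientation, the conventional sequential detector with fully known pre- and post-damage feature distributions (the baseline that the algorithm above generalizes by estimating the post-damage distribution online) reduces to a CUSUM-type log-likelihood ratio test, sketched here with arbitrary Gaussian choices:

      import numpy as np
      from scipy.stats import norm

      def cusum_detector(data, pre_dist, post_dist, threshold):
          """Declare a change when the cumulative log-likelihood ratio statistic exceeds threshold.
          pre_dist / post_dist are frozen scipy distributions for the damage-sensitive feature."""
          s = 0.0
          for k, x in enumerate(data):
              llr = post_dist.logpdf(x) - pre_dist.logpdf(x)
              s = max(0.0, s + llr)              # CUSUM recursion
              if s > threshold:
                  return k                       # index at which damage is declared
          return None

      # Toy example: the feature mean shifts from 0 to 1 at sample 200.
      rng = np.random.default_rng(0)
      feature = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
      alarm = cusum_detector(feature, norm(0, 1), norm(1, 1), threshold=10.0)
      print("damage declared at sample:", alarm)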

  1. Stitching algorithm of the images acquired from different points of fixation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Pismenskova, M. M.

    2015-02-01

    Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible from the original images. Most existing algorithms are computationally complex and do not always produce good stitching results when the input images differ in scale, lighting, and viewpoint. In this paper we consider an algorithm that increases the processing speed when stitching high-resolution images. We reduce the computational complexity by using edge image analysis and a saliency map on highly detailed areas. On the detected areas, the rotation angles, scaling factors, color-correction coefficients, and the transformation matrix are determined. We define key points using the SURF detector and reject false correspondences based on correlation analysis. The proposed algorithm makes it possible to combine images from free points of view with different color balances, shutter times, and scales. We perform a comparative study and show that, statistically, the new algorithm delivers good quality results compared to existing algorithms.
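
    A compact version of a keypoint-based stitching pipeline can be assembled with OpenCV, shown here with the freely available ORB detector in place of SURF and RANSAC-based homography fitting in place of the correlation-based rejection described above; the canvas sizing and parameters are simplistic placeholders.

      import cv2
      import numpy as np

      def stitch_pair(img_a, img_b):
          """Estimate a homography from img_b to img_a using keypoint matches and warp img_b."""
          orb = cv2.ORB_create(4000)
          kp_a, des_a = orb.detectAndCompute(img_a, None)
          kp_b, des_b = orb.detectAndCompute(img_b, None)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:500]

          src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # reject false correspondences

          h, w = img_a.shape[:2]
          warped = cv2.warpPerspective(img_b, H, (w * 2, h))     # crude canvas size choice
          warped[:h, :w] = img_a                                  # overlay the reference image
          return warped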

  2. CASH algorithm versus 3-point checklist and its modified version in evaluation of melanocytic pigmented skin lesions: The 4-point checklist.

    PubMed

    di Meo, Nicola; Stinco, Giuseppe; Bonin, Serena; Gatti, Alessandro; Trevisini, Sara; Damiani, Giovanni; Vichi, Silvia; Trevisan, Giusto

    2016-06-01

    Dermoscopy, in expert hands, increases a single operator's accuracy, sensitivity and specificity in the diagnosis of pigmented skin lesions, compared with clinical examination. Simplified algorithmic methods have been developed to help less expert dermoscopists in the diagnosis of melanocytic lesions. This study included 125 melanocytic skin lesions divided into melanocytic nevi, dysplastic nevi and thin melanomas (<1 mm). We compared the 3-point checklist and the CASH algorithm in the analysis of different pigmented skin lesions. Based on the preliminary results, we propose a new modified algorithm, called the 4-point checklist, whose accuracy is similar to the CASH algorithm and whose simplicity is similar to the 3-point checklist. PMID:26589251

  3. Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogenous localization error.

    PubMed

    Maier-Hein, Lena; Franz, Alfred M; dos Santos, Thiago R; Schmidt, Mirko; Fangerau, Markus; Meinzer, Hans-Peter; Fitzpatrick, J Michael

    2012-08-01

    Since its introduction in the early 1990s, the Iterative Closest Point (ICP) algorithm has become one of the most well-known methods for geometric alignment of 3D models. Given two roughly aligned shapes represented by two point sets, the algorithm iteratively establishes point correspondences given the current alignment of the data and computes a rigid transformation accordingly. From a statistical point of view, however, it implicitly assumes that the points are observed with isotropic Gaussian noise. In this paper, we show that this assumption may lead to errors and generalize the ICP such that it can account for anisotropic and inhomogenous localization errors. We 1) provide a formal description of the algorithm, 2) extend it to registration of partially overlapping surfaces, 3) prove its convergence, 4) derive the required covariance matrices for a set of selected applications, and 5) present means for optimizing the runtime. An evaluation on publicly available surface meshes as well as on a set of meshes extracted from medical imaging data shows a dramatic increase in accuracy compared to the original ICP, especially in the case of partial surface registration. As point-based surface registration is a central component in various applications, the potential impact of the proposed method is high. PMID:22184256

  4. A hybrid algorithm for multiple change-point detection in continuous measurements

    NASA Astrophysics Data System (ADS)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both the sequential techniques and the Cross-Entropy method to estimate the number of change points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.

  5. Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction

    NASA Technical Reports Server (NTRS)

    Velusamy, T.; Marsh, K. A.; Ware, B.

    2005-01-01

    TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.

  6. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, which is well known in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. A quantitative analysis of the proposed algorithm based on this new framework shows at least a 20% improvement in the output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
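
    The kind of baseline this framework is compared against can be reproduced in a few lines; the sketch below is a generic Laplacian-of-Gaussian detection and SCR measurement with our own threshold rule, not the PSF-matched parametric model proposed in the paper.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def log_detect(image, sigma=2.0, k=4.0):
          """Return a binary detection map: strong negative LoG responses mark bright blobs."""
          response = -gaussian_laplace(image.astype(float), sigma)   # bright spots -> positive peaks
          return response > response.mean() + k * response.std()

      def scr(image, target_mask):
          """Signal-to-clutter ratio: target peak versus background mean and spread."""
          background = image[~target_mask]
          return (image[target_mask].max() - background.mean()) / background.std()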

  7. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    The terrestrial laser scanning (TLS) technique is becoming a common tool in the geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of varying points of view on the Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments

  8. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.
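
    A stripped-down evolutionary search over a two-dimensional tune space might look like the following; the figure-of-merit function is a pure placeholder for the beam-dynamics simulations that would actually score a candidate working point, and all numbers are arbitrary assumptions.

      import numpy as np

      def figure_of_merit(tunes):
          """Placeholder objective: in practice this would run a tracking simulation and
          return a luminosity/stability score for the candidate betatron tunes."""
          qx, qy = tunes
          return -((qx - 0.31) ** 2 + (qy - 0.27) ** 2)   # toy peak near an assumed good point

      def evolve_working_point(n_gen=50, pop_size=40, sigma=0.02, seed=0):
          rng = np.random.default_rng(seed)
          pop = rng.uniform(0.05, 0.45, size=(pop_size, 2))         # fractional tunes (qx, qy)
          for _ in range(n_gen):
              scores = np.array([figure_of_merit(p) for p in pop])
              parents = pop[np.argsort(scores)[-pop_size // 2:]]    # keep the fitter half
              children = parents + sigma * rng.standard_normal(parents.shape)  # mutate
              pop = np.vstack([parents, np.clip(children, 0.05, 0.45)])
          return pop[np.argmax([figure_of_merit(p) for p in pop])]

      print("best working point found:", evolve_working_point())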

  9. An affine point-set and line invariant algorithm for photo-identification of gray whales

    NASA Astrophysics Data System (ADS)

    Chandan, Chandan; Kehtarnavaz, Nasser; Hillman, Gilbert; Wursig, Bernd

    2004-05-01

    This paper presents an affine point-set and line invariant algorithm within a statistical framework, and its application to photo-identification of gray whales (Eschrichtius robustus). White patches (blotches) appearing on a gray whale's left and right flukes (the flattened broad paddle-like tail) constitute unique identifying features and have been used here for individual identification. The fluke area is extracted from a fluke image via the live-wire edge detection algorithm, followed by optimal thresholding of the fluke area to obtain the blotches. Affine point-set and line invariants of the blotch points are extracted based on three reference points, namely the left and right tips and the middle notch-like point on the fluke. A set of statistics is derived from the invariant values and used as the feature vector representing a database image. The database images are then ranked depending on the degree of similarity between a query and database feature vectors. The results show that the use of this algorithm leads to a reduction in the amount of manual search that is normally done by marine biologists.

  10. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information

    PubMed Central

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  11. A target location and pointing algorithm for a three-axis stabilized line scanner (AMIDARS)

    NASA Astrophysics Data System (ADS)

    Algrain, Marcelo C.

    1990-09-01

    An algorithm is presented for calculating the location of a target and for pointing other imaging sensors at it, given the position, attitude, and altitude of an aircraft and the gimbal angles of the stabilized platform. The algorithm uses geometric relationships to define the line of sight (LOS) direction in inertial space and to determine the position of the center of a scan line where the LOS intersects the ground. The direction of a scan line passing through that point is also calculated, completely defining the location of any target on the scan line. The ground dimensions obtained from this procedure are then related to a point of known latitude and longitude to define the overall target location.

  12. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information.

    PubMed

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  13. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  14. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.

  15. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  16. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  17. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  18. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  19. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  20. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.

  1. An Error Analysis of the Phased Array Antenna Pointing Algorithm for STARS Flight Demonstration No. 2

    NASA Technical Reports Server (NTRS)

    Carney, Michael P.; Simpson, James C.

    2005-01-01

    STARS is a multicenter NASA project to determine the feasibility of using space-based assets, such as the Tracking and Data Relay Satellite System (TDRSS) and Global Positioning System (GPS), to increase flexibility (e.g. increase the number of possible launch locations and manage simultaneous operations) and to reduce operational costs by decreasing the need for ground-based range assets and infrastructure. The STARS project includes two major systems: the Range Safety and Range User systems. The latter system uses broadband communications (125 kbps to 500 kbps) for voice, video, and vehicle/payload data. Flight Demonstration #1 revealed the need to increase the data rate of the Range User system. During Flight Demo #2, a Ku-band antenna will generate a higher data rate and will be designed with an embedded pointing algorithm to guarantee that the antenna is pointed directly at TDRS. This algorithm will utilize the onboard position and attitude data to point the antenna to TDRS within a 2-degree full-angle beamwidth. This report investigates how errors in aircraft position and attitude, along with errors in satellite position, propagate into the overall pointing vector.

  2. Floating-Point Units and Algorithms for field-programmable gate arrays

    Energy Science and Technology Software Center (ESTSC)

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used

  3. Floating-Point Units and Algorithms for field-programmable gate arrays

    SciTech Connect

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and

  4. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  5. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix, with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  6. Artifact Removal from Biosignal using Fixed Point ICA Algorithm for Pre-processing in Biometric Recognition

    NASA Astrophysics Data System (ADS)

    Mishra, Puneet; Singla, Sunil Kumar

    2013-01-01

    In the modern world of automation, biological signals, especially the Electroencephalogram (EEG) and Electrocardiogram (ECG), are gaining wide attention as a source of biometric information. Earlier studies have shown that EEG and ECG vary across individuals and that every individual has a distinct EEG and ECG spectrum. EEG (which can be recorded from the scalp due to the effect of millions of neurons) may contain noise signals such as eye blinks, eye movement, muscular movement, line noise, etc. Similarly, ECG may contain artifacts such as line noise, tremor artifacts, baseline wandering, etc. These noise signals must be separated from the EEG and ECG signals to obtain accurate results. This paper proposes a technique for the removal of the eye blink artifact from EEG and ECG signals using the fixed-point or FastICA algorithm of Independent Component Analysis (ICA). For validation, the FastICA algorithm has been applied to a synthetic signal prepared by adding random noise to an Electrocardiogram (ECG) signal. The FastICA algorithm separates the signal into two independent components, i.e. the pure ECG and the artifact signal. Similarly, the same algorithm has been applied to remove the artifacts (Electrooculogram or eye blink) from the EEG signal.
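
    As a minimal, hedged sketch of the separation step (not the authors' code), the snippet below mixes a crude ECG-like trace with a crude blink-like artifact into two synthetic channels and unmixes them with scikit-learn's FastICA; the signal shapes and mixing weights are invented for illustration.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 2000)
      ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 15                 # crude stand-in for an ECG trace
      blink_like = (np.abs(t % 2.5 - 1.25) < 0.1).astype(float)    # crude stand-in for eye-blink bursts

      # Two "electrode" channels, each a different mixture of signal and artifact
      X = np.c_[ecg_like + 0.5 * blink_like, 0.3 * ecg_like + blink_like]
      X += 0.01 * rng.standard_normal(X.shape)

      ica = FastICA(n_components=2, random_state=0)                # fixed-point ICA
      sources = ica.fit_transform(X)                               # columns: estimated independent components
      # One column approximates the clean signal, the other the artifact; the artifact
      # component can then be zeroed and the mixture rebuilt with ica.inverse_transform.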

  7. Steering quantum dynamics via bang-bang control: Implementing optimal fixed-point quantum search algorithm

    NASA Astrophysics Data System (ADS)

    Bhole, Gaurav; Anjusha, V. S.; Mahesh, T. S.

    2016-04-01

    A robust control over quantum dynamics is of paramount importance for quantum technologies. Many of the existing control techniques are based on smooth Hamiltonian modulations involving repeated calculations of basic unitaries resulting in time complexities scaling rapidly with the length of the control sequence. Here we show that bang-bang controls need one-time calculation of basic unitaries and hence scale much more efficiently. By employing a global optimization routine such as the genetic algorithm, it is possible to synthesize not only highly intricate unitaries, but also certain nonunitary operations. We demonstrate the unitary control through the implementation of the optimal fixed-point quantum search algorithm in a three-qubit nuclear magnetic resonance (NMR) system. Moreover, by combining the bang-bang pulses with the crusher gradients, we also demonstrate nonunitary transformations of thermal equilibrium states into effective pure states in three- as well as five-qubit NMR systems.

  8. TU-F-18A-04: Use of An Image-Based Material-Decomposition Algorithm for Multi-Energy CT to Determine Basis Material Densities

    SciTech Connect

    Li, Z; Leng, S; Yu, L; McCollough, C

    2014-06-15

    Purpose: Published methods for image-based material decomposition with multi-energy CT images have required the assumption of volume conservation or accurate knowledge of the x-ray spectra and detector response. The purpose of this work was to develop an image-based material-decomposition algorithm that can overcome these limitations. Methods: An image-based material-decomposition algorithm was developed that requires only mass conservation (rather than volume conservation). With this method, using multi-energy CT measurements made with n=4 energy bins, the mass density of each basis material and of the mixture can be determined without knowledge of the tube spectra and detector response. A digital phantom containing 12 samples of mixtures of water, calcium, iron, and iodine was used in the simulation (Siemens DRASIM). The calibration was performed using pure materials at each energy bin. The accuracy of the technique was evaluated on noise-free and noisy data under the assumption of an ideal photon-counting detector. Results: Basis material densities can be estimated accurately by either theoretical calculation or calibration with known pure materials. The calibration approach requires no prior information about the spectra and detector response. Regression analysis of theoretical versus estimated values shows excellent agreement for both noise-free and noisy data. For the calibration approach, the R-square values are 0.9960±0.0025 and 0.9476±0.0363 for noise-free and noisy data, respectively. Conclusion: From multi-energy CT images with n=4 energy bins, the developed image-based material-decomposition method accurately estimated the densities of 4 basis materials (3 without a k-edge and 1 with a k-edge in the range of the simulated energy bins) even without any prior information about the spectra and detector response. This method is applicable to mixtures of solutions and dissolvable materials, where volume conservation assumptions do not apply. CHM receives
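
    The per-pixel estimation step can be illustrated, under simplifying assumptions, as a small linear least-squares solve: a hypothetical 4x4 calibration matrix (signal per unit mass density in each energy bin, as would be obtained from pure-material scans) maps the basis densities to the four bin measurements, and the densities are recovered by inverting that relation. This is only a sketch of the general idea, not the published algorithm.

      import numpy as np

      n_bins, n_materials = 4, 4
      rng = np.random.default_rng(1)

      A = rng.uniform(0.1, 1.0, size=(n_bins, n_materials))    # hypothetical calibration: signal per unit mass density
      rho_true = np.array([1.0, 0.05, 0.02, 0.01])             # water, calcium, iron, iodine densities (g/cm^3, illustrative)

      m = A @ rho_true + 0.001 * rng.standard_normal(n_bins)   # simulated multi-energy measurements for one pixel
      rho_est, *_ = np.linalg.lstsq(A, m, rcond=None)          # estimated basis material densities
      print(rho_est)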

  9. Dynamic connectivity detection: an algorithm for determining functional connectivity change points in fMRI data

    PubMed Central

    Xu, Yuting; Lindquist, Martin A.

    2015-01-01

    Recently there has been an increased interest in using fMRI data to study the dynamic nature of brain connectivity. In this setting, the activity in a set of regions of interest (ROIs) is often modeled using a multivariate Gaussian distribution, with a mean vector and covariance matrix that are allowed to vary as the experiment progresses, representing changing brain states. In this work, we introduce the Dynamic Connectivity Detection (DCD) algorithm, which is a data-driven technique to detect temporal change points in functional connectivity, and estimate a graph between ROIs for data within each segment defined by the change points. DCD builds upon the framework of the recently developed Dynamic Connectivity Regression (DCR) algorithm, which has proven efficient at detecting changes in connectivity for problems consisting of a small to medium (< 50) number of regions, but which runs into computational problems as the number of regions becomes large (>100). The newly proposed DCD method is faster, requires less user input, and is better able to handle high-dimensional data. It overcomes the shortcomings of DCR by adopting a simplified sparse matrix estimation approach and a different hypothesis testing procedure to determine change points. The application of DCD to simulated data, as well as fMRI data, illustrates the efficacy of the proposed method. PMID:26388711

  10. Bayesian inference of decomposition rate of soil organic carbon using a turnover model and a hybrid method of particle filter and MH algorithm

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Jomura, M.; Yonemura, S.; Iizumi, T.; Shirato, Y.; Yokozawa, M.

    2010-12-01

    The soils of terrestrial ecosystems accumulate large amounts of carbon, and the response of soil organic carbon (SOC) to global warming is of great concern in projections of future carbon cycling. While many theoretical and experimental studies have suggested that the decomposition rates of soil organic matter depend upon physical and chemical conditions, land management, and so on, there is not yet consensus on these dependencies. Most soil carbon turnover models describing SOC dynamics do not account for such differences in decomposition rates. The purpose of this study is to evaluate the decomposition rates of SOC based on a soil carbon turnover model, RothC, which describes SOC dynamics by dividing it into compartments with different decomposition rates. In this study, reflecting that the decomposition rate could change with time due to fertility management in arable land, we used time-dependent Bayesian inference methods that allow the parameters to vary with time. Specifically, we used a hybrid of particle filtering and the Metropolis-Hastings (MH) algorithm. We applied this method to datasets obtained from three long-term experiments on time changes in total SOC at five sites across the Japanese mainland. For each dataset, three treatments were examined: no N applied, chemical fertilizer applied, and chemical fertilizer plus farmyard manure applied. We estimated the parameters of the temperature- and water-dependence functions as well as the intrinsic decomposition rate for each compartment of RothC and for each treatment. As a result, the temperature dependencies tended to decrease with the decomposability of the compartment, i.e. lower temperature dependency for the more recalcitrant compartments of the model. On the other hand, the water dependencies could not be related to the SOC turnover rates of the compartments. Additionally, the intrinsic decomposition rates tended to increase with time, especially in the no-N-applied treatment. This result reflects

  11. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm that can be coupled with any flow and particle tracking model. The computational time is reduced here thanks to the use of a limited number of non-equally spaced particles. The particle starting positions are determined by coupling forward particle tracking from the stagnation point with backward particle tracking from the pumping well. The pathlines are post-processed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and non-homogeneous aquifers, under steady-state flow conditions, with single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, even in complex scenarios.

  12. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires a shorter inspection time than X-ray computed tomography and, in some cases, works better with planar geometries. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (detectability zero) of smaller defects around some other slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  13. USER'S GUIDE FOR PAL 2.0: A GAUSSIAN-PLUME ALGORITHM FOR POINT, AREA, AND LINE SOURCES

    EPA Science Inventory

    PAL is an acronym for the Point, Area, and Line source algorithm. PAL is a method of estimating short-term dispersion using Gaussian-plume steady state assumptions. The algorithm can be used for estimating concentrations of non-reactive pollutants at 99 receptors for averaging ti...

  14. Using SDO and GONG as Calibration References for a New Telescope Pointing Algorithm

    NASA Astrophysics Data System (ADS)

    Staiger, J.

    2013-12-01

    Long-duration observations are a basic requirement for most types of helioseismic measurements. Pointing stability and the quality of guiding are thus important issues with respect to the spatio-temporal analysis of any velocity dataset. Existing pointing tools and correlation-tracking devices will help to remove most of the spatial deviations building up during an observation with time. Yet most ground- and space-based high-resolution solar telescopes may be subject to slow image-plane drift that cannot be compensated for by guiding and which may accumulate to displacements of 10″ or more during a 10-hour recording. We have developed a new pointing model for solar telescopes that may overcome these inherent guiding limitations. We have tested the model at the Vacuum Tower Telescope (VTT), Tenerife, using SDO and GONG full-disk imaging as a calibration reference. We describe the algorithms developed and used during the tests and present our first results. We also describe possible future applications to be implemented at the VTT. So far, improvements over classical limb-guider systems by a factor of 10 or more seem possible.

  15. The Advantage of Implementing Martin's Noise Reduction Algorithm in Critical Bands Using Wavelet Packet Decomposition and Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Omidi, Milad; Derakhshan, Nima; Hassan Savoji, Mohammad

    In this paper we address the problem of enhancing single channel speech signal corrupted with additive background noise. We present a new scheme which utilizes a different time frequency representation along with the psychoacoustic features of human ear and combines these features with the well-known noise estimation method of minimum tracking. Instead of Fourier transform, we use a perceptual wavelet packet decomposition of speech, and perform spectral tracking and filtering on the envelope of the analytic signal.

  16. An algorithm for automatic detection of pole-like street furniture objects from Mobile Laser Scanner point clouds

    NASA Astrophysics Data System (ADS)

    Cabo, C.; Ordoñez, C.; García-Cortés, S.; Martínez, J.

    2014-01-01

    An algorithm for the automatic extraction of pole-like street furniture objects from Mobile Laser Scanner data was developed and tested. The method consists of an initial simplification of the point cloud based on a regular voxelization of space. The original point cloud is spatially discretized, and a version of the point cloud whose amount of data represents 20-30% of the total is created. All processing is carried out on the reduced version of the data, but the original point cloud is always accessible without any information loss, as each point is linked to its voxel. All the horizontal sections of the voxelized point cloud are analyzed and segmented separately. The two-dimensional fragments compatible with a section of a target pole are selected and grouped. Finally, the three-dimensional voxel representation of the detected pole-like objects is identified and the points from the original point cloud belonging to each pole-like object are extracted. The algorithm can be used with data from any Mobile Laser Scanning system, as it transforms the original point cloud and fits it into a regular grid, thus avoiding irregularities produced by point density differences within the point cloud. The algorithm was tested on four test sites with different slopes and street shapes and features. All the target pole-like objects were detected, with the only exception of those severely occluded by large objects and some others that were either attached or too close to certain features.
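
    A minimal sketch of the voxelization step described above (with an assumed voxel size and NumPy only): each point is binned into a regular voxel, the occupied voxels give the reduced cloud, and the returned index array keeps the link from every original point back to its voxel.

      import numpy as np

      def voxelize(points, voxel_size=0.25):
          """Reduce a point cloud to one centroid per occupied voxel, keeping point-to-voxel links."""
          ijk = np.floor(points / voxel_size).astype(np.int64)           # voxel index per point
          keys, inverse = np.unique(ijk, axis=0, return_inverse=True)    # occupied voxels
          inverse = inverse.ravel()                                      # guard against NumPy version differences
          reduced = np.zeros((len(keys), 3))
          np.add.at(reduced, inverse, points)
          reduced /= np.bincount(inverse)[:, None]                       # centroid per voxel
          return reduced, inverse                                        # inverse maps original points to voxels

      pts = np.random.default_rng(2).uniform(0, 10, size=(100_000, 3))   # placeholder for MLS data
      reduced, inverse = voxelize(pts, voxel_size=0.25)
      print(len(reduced) / len(pts))                                     # fraction of data kept after simplification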

  17. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    SciTech Connect

    Poynee, L A

    2003-05-06

    Shack-Hartmann-based Adaptive Optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
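
    A small sketch of the correlation idea, assuming a noiseless Gaussian reference spot and integer-pixel shift recovery; the paper's sub-pixel estimator and noise analysis are not reproduced here.

      import numpy as np

      def gaussian_spot(n, cx, cy, sigma=1.5):
          y, x = np.mgrid[0:n, 0:n]
          return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

      ref = gaussian_spot(16, 7.5, 7.5)                                      # ideal reference spot
      meas = gaussian_spot(16, 9.5, 6.5) + 0.05 * np.random.default_rng(3).standard_normal((16, 16))

      # Cross-correlation via FFT; the peak location gives the spot shift (wave-front slope)
      xcorr = np.fft.ifft2(np.fft.fft2(meas) * np.conj(np.fft.fft2(ref))).real
      dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
      dy, dx = ((dy + 8) % 16) - 8, ((dx + 8) % 16) - 8                      # wrap to signed shifts
      print(dx, dy)                                                          # approximately (2, -1) for this example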

  18. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined in GOSIM to represent the dissimilarity between a realization and the training image (TI), which is minimized by a multi-scale EM-like iterative method that contains an E-step and an M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in the E-step. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. During categorical data simulation, k-means clustering is used to transform the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison of GOSIM, MS-CCSIM and SNESIM suggests that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain good simulation quality. The study shows that large iteration numbers at coarser scales increase simulation quality, while small iteration numbers at finer scales significantly save simulation time.

  19. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
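
    G-OPT itself is flight software written in C; purely to illustrate the problem class, the sketch below solves a tiny fuel-minimizing thrust-allocation problem with an off-the-shelf SciPy solver. The thruster geometry, bounds and commanded force are invented.

      import numpy as np
      from scipy.optimize import linprog

      B = np.array([[1.0, 0.0, -1.0, 0.0],        # hypothetical thruster geometry: net force in x
                    [0.0, 1.0, 0.0, -1.0]])       # net force in y
      f_cmd = np.array([0.4, -0.2])               # commanded net force

      # minimize total thrust (a fuel proxy) subject to B u = f_cmd and 0 <= u <= u_max
      res = linprog(c=np.ones(4), A_eq=B, b_eq=f_cmd,
                    bounds=[(0.0, 1.0)] * 4, method="highs")
      print(res.x)                                # per-thruster commands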

  20. Sunspots and Coronal Bright Points Tracking using a Hybrid Algorithm of PSO and Active Contour Model

    NASA Astrophysics Data System (ADS)

    Dorotovic, I.; Shahamatnia, E.; Lorenc, M.; Rybansky, M.; Ribeiro, R. A.; Fonseca, J. M.

    2014-02-01

    In the last decades there has been a steady increase of high-resolution data, from ground-based and space-borne solar instruments, and also of solar data volume. These huge image archives require efficient automatic image processing software tools capable of detecting and tracking various features in the solar atmosphere. Results of application of such tools are essential for studies of solar activity evolution, climate change understanding and space weather prediction. The follow up of interplanetary and near-Earth phenomena requires, among others, automatic tracking algorithms that can determine where a feature is located, on successive images taken along the period of observation. Full-disc solar images, obtained both with the ground-based solar telescopes and the instruments onboard the satellites, provide essential observational material for solar physicists and space weather researchers for better understanding the Sun, studying the evolution of various features in the solar atmosphere, and also investigating solar differential rotation by tracking such features along time. Here we demonstrate and discuss the suitability of applying a hybrid Particle Swarm Optimization (PSO) algorithm and Active Contour model for tracking and determining the differential rotation of sunspots and coronal bright points (CBPs) on a set of selected solar images. The results obtained confirm that the proposed approach constitutes a promising tool for investigating the evolution of solar activity and also for automating tracking features on massive solar image archives.

  1. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    and negative swinging angle and the computation of the time window are analyzed and discussed. Many strategies to improve the efficiency of this model are also put forward. In order to solve the model, we introduce the concept of an activity sequence map. By using the activity sequence map, the activity choice and the start time of the activity can be decoupled. We also put forward three neighborhood operators to search the result space. The front movement remaining time and the back movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Lastly, an algorithm to solve the problem and model is put forward based on a genetic algorithm. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision decrease operator, a selection operator, and a collision elimination operator are designed in the paper. Finally, the scheduling result and the simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15, and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.

  2. Verification of dynamic initial pointing algorithm on two-dimensional rotating platform based on GPS/INS

    NASA Astrophysics Data System (ADS)

    Yang, Baohua; Wang, Juanjuan; Wang, Jian

    2015-10-01

    In order to achieve the rapid establishment of long-distance laser communication links, adopting a GPS/INS integrated navigation system (GINS) is an effective approach for completing the initial pointing of dynamic laser communication. Firstly, we present a dynamic initial pointing algorithm (DIPA), which obtains the pointing angle (PA) by processing the real-time data received from the GINS. Next, the feasibility of the pointing system is analyzed, and the hardware system as well as the PC software is designed. Then, outdoor experiments are carried out to validate the DIPA. Finally, the correctness and reliability of the pointing system are analyzed.

  3. Sensitivity of passive microwave sea ice concentration algorithms to the selection of locally and seasonally adjusted tie points

    NASA Technical Reports Server (NTRS)

    Steffen, Konrad; Schweiger, Axel

    1989-01-01

    The sensitivity of passive microwave sea-ice concentration (SIC) algorithms to the selection of tie points was analyzed. SICs were derived with the NASA Team ice algorithm for global tie points and for locally and seasonally adjusted tie points. The SSM/I SIC was then compared to Landsat-MSS-derived SICs. Preliminary results show a mean difference of SSM/I- and Landsat-derived SICs for 50 x 50 km grid cells of 2.7 percent along the ice edge of the Beaufort Sea during fall with local tie points. The accuracy decreased to 9.7 percent when global tie points were used. During freeze-up in the Beaufort Sea, with grey ice and nilas as dominant ice cover, the mean difference was 4.3 percent for local tie points and 13.9 percent for global tie points. For the spring ice cover in the Bering Sea a mean difference of 4.4 percent for local tie points and 15.7 percent for global tie points was found. This large difference reveals some limitations of the NASA-Team algorithm under freeze-up and spring conditions (thin ice areas).
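
    A deliberately simplified, one-channel illustration of why the choice of tie points matters (the NASA Team algorithm itself works on polarization and gradient ratios of several channels, which is not reproduced here); all brightness temperatures and tie-point values below are invented.

      def sea_ice_concentration(tb, tb_open_water, tb_ice):
          """Linear mixing between the open-water and ice tie points, clipped to [0, 1]."""
          sic = (tb - tb_open_water) / (tb_ice - tb_open_water)
          return min(max(sic, 0.0), 1.0)

      tb_obs = 230.0                       # observed brightness temperature (K), illustrative
      global_tiepoints = (180.0, 250.0)    # (open water, consolidated ice), illustrative
      local_tiepoints = (175.0, 245.0)     # seasonally/locally adjusted, illustrative

      print(sea_ice_concentration(tb_obs, *global_tiepoints))   # about 0.71
      print(sea_ice_concentration(tb_obs, *local_tiepoints))    # about 0.79 -- the tie-point choice shifts the SIC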

  4. Joint inversion of T1-T2 spectrum combining the iterative truncated singular value decomposition and the parallel particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui

    2016-01-01

    Offering more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transverse relaxation time (T2) spectra, the two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) has been developed to discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by the existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines an iterative truncated singular value decomposition (TSVD) with a parallel particle swarm optimization (PSO) algorithm to obtain fast computational speed and stable solutions. We recast the Fredholm integral equation of the first kind with two kernels as a nonlinear optimization problem with non-negativity constraints, and then solve the ill-conditioned problem by the iterative TSVD. The truncation positions of the two diagonal matrices are obtained by the Akaike information criterion (AIC). With the initial values obtained by TSVD, we use a PSO with a parallel structure to obtain global optimal solutions at high computational speed. We use synthetic data with different signal-to-noise ratios (SNR) to test the performance of the proposed method. The results show that the new inversion algorithm achieves favorable solutions for signals with SNR larger than 10, and that the inversion precision increases as the number of components in the porous rock decreases.
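
    One ingredient of the method, the truncated-SVD solve of an ill-conditioned linear system, can be sketched as follows; the PSO refinement, the non-negativity handling and the AIC-based choice of the truncation rank are not reproduced, and the kernel matrix below is synthetic.

      import numpy as np

      def tsvd_solve(K, d, rank):
          """Solve K x = d keeping only the leading `rank` singular values of K."""
          U, s, Vt = np.linalg.svd(K, full_matrices=False)
          s_inv = np.where(np.arange(len(s)) < rank, 1.0 / s, 0.0)   # discard small singular values
          return Vt.T @ (s_inv * (U.T @ d))

      rng = np.random.default_rng(4)
      K = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -6, 50)) @ rng.standard_normal((50, 50))
      x_true = np.abs(rng.standard_normal(50))
      d = K @ x_true + 1e-4 * rng.standard_normal(200)
      x_est = tsvd_solve(K, d, rank=10)                              # regularized estimate of the spectrum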

  5. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  6. [Simultaneous resolution and determination of tyrosine, tryptophan and phenylalanine by alternating penalty trilinear decomposition algorithm coupled with 3D emission-excitation matrix fluorometry].

    PubMed

    Xiao, Jin; Ren, Feng-lian; Song, Ge; Liao, Lü; Yu, Wen-feng; Zeng, Tao

    2007-10-01

    A new method using the alternating penalty trilinear decomposition algorithm coupled with excitation-emission matrix fluorometry has been developed for the simultaneous resolution and determination of tyrosine, phenylalanine and tryptophan. Their correlation coefficients were 0.9987, 0.9995 and 0.9993, respectively. The contents of tyrosine, phenylalanine and tryptophan in Hibiscus syriacus L. leaves were also determined by this method after ultrasonic extraction. The coefficients of variation and the recoveries of the three amino acids were 0.84%, 0.36%, 1.59% and 101.0%-92.7%, 106.5%-93.0%, 103.0%-95.0%, respectively. All of this shows that this is a simple, fast and credible method. PMID:18306802

  7. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high-contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separations over the algorithm previously used.
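
    The core least-squares step can be sketched as follows: within one image subsection, coefficients are found that make a linear combination of the reference PSFs closest, in the least-squares sense, to the science frame, and that combination is subtracted. The data below are random placeholders, not real ADI frames.

      import numpy as np

      rng = np.random.default_rng(5)
      n_pix, n_ref = 500, 20                        # pixels in the subsection, number of reference frames
      R = rng.standard_normal((n_pix, n_ref))       # columns: reference PSF images (flattened subsection)
      science = R @ rng.uniform(0, 1, n_ref) + 0.01 * rng.standard_normal(n_pix)

      c, *_ = np.linalg.lstsq(R, science, rcond=None)    # optimal combination coefficients
      residual = science - R @ c                         # speckle-subtracted subsection
      print(np.std(residual))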

  8. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of the vector sum of the RF field in 32 superconducting cavities. Dedicated, high-performance photonics, electronics and software were built. To provide high system availability, an appropriate test environment as well as diagnostics was designed. A real-time simulation subsystem was designed, based on dedicated electronics using FPGA technology and robust simulation models implemented in VHDL. The paper presents the architecture of the system framework, which allows for easy and flexible conversion of MATLAB language structures directly into an FPGA-implementable grid of parameterized, simple DSP processors. The decomposition of the MATLAB grammar is described, as well as the optimization process and FPGA implementation issues.

  9. Parallel feedback active noise control of MRI acoustic noise with signal decomposition using hybrid RLS-NLMS adaptive algorithms.

    PubMed

    Ganguly, Anshuman; Krishna Vemuri, Sri Hari; Panahi, Issa

    2014-01-01

    This paper presents a cost-effective adaptive feedback Active Noise Control (FANC) method for controlling functional Magnetic Resonance Imaging (fMRI) acoustic noise by decomposing it into dominant periodic components and residual random components. The periodicity of fMRI acoustic noise is exploited by using linear prediction (LP) filtering to achieve the signal decomposition. A hybrid combination of adaptive filters, Recursive Least Squares (RLS) and Normalized Least Mean Squares (NLMS), is then used to control each component separately and effectively. The performance of the proposed FANC system is analyzed, and noise attenuation levels (NAL) of up to 32.27 dB obtained by simulation are presented, confirming the effectiveness of the proposed FANC method. PMID:25570676
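
    As a bare-bones sketch of one of the two adaptive stages named above, here is an NLMS filter adapting to an unknown noise path; the LP-based decomposition and the RLS branch are omitted, and the reference signal and path coefficients are invented.

      import numpy as np

      def nlms(x, d, n_taps=32, mu=0.5, eps=1e-6):
          """Adapt w so that the filtered reference x tracks d; returns the error signal (residual noise)."""
          w = np.zeros(n_taps)
          e = np.zeros(len(x))
          for n in range(n_taps, len(x)):
              x_win = x[n - n_taps + 1:n + 1][::-1]            # most recent sample first
              y = w @ x_win
              e[n] = d[n] - y
              w += mu * e[n] * x_win / (eps + x_win @ x_win)   # normalized LMS step
          return e

      rng = np.random.default_rng(10)
      noise = rng.standard_normal(5000)                         # reference noise (placeholder for the acoustic noise)
      d = np.convolve(noise, [0.6, 0.3, 0.1])[: len(noise)]     # noise as heard at the error microphone (unknown path)
      residual = nlms(noise, d)
      print(np.mean(d[:500] ** 2), np.mean(residual[-500:] ** 2))   # residual power drops as the filter converges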

  10. Melting point prediction employing k-nearest neighbor algorithms and genetic parameter optimization.

    PubMed

    Nigsch, Florian; Bender, Andreas; van Buuren, Bernd; Tissen, Jos; Nigsch, Eduard; Mitchell, John B O

    2006-01-01

    We have applied the k-nearest neighbor (kNN) modeling technique to the prediction of melting points. A data set of 4119 diverse organic molecules (data set 1) and an additional set of 277 drugs (data set 2) were used to compare performance in different regions of chemical space, and we investigated the influence of the number of nearest neighbors using different types of molecular descriptors. To compute the prediction on the basis of the melting temperatures of the nearest neighbors, we used four different methods (arithmetic and geometric average, inverse distance weighting, and exponential weighting), of which the exponential weighting scheme yielded the best results. We assessed our model via a 25-fold Monte Carlo cross-validation (with approximately 30% of the total data as a test set) and optimized it using a genetic algorithm. Predictions for drugs based on drugs (separate training and test sets each taken from data set 2) were found to be considerably better [root-mean-squared error (RMSE)=46.3 degrees C, r2=0.30] than those based on nondrugs (prediction of data set 2 based on the training set from data set 1, RMSE=50.3 degrees C, r2=0.20). The optimized model yields an average RMSE as low as 46.2 degrees C (r2=0.49) for data set 1, and an average RMSE of 42.2 degrees C (r2=0.42) for data set 2. It is shown that the kNN method inherently introduces a systematic error in melting point prediction. Much of the remaining error can be attributed to the lack of information about interactions in the liquid state, which are not well-captured by molecular descriptors. PMID:17125183
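
    The exponential weighting scheme that performed best in the study can be sketched on top of a plain k-nearest-neighbour lookup as below; the descriptors and melting points are random placeholders and the weighting parameter is assumed.

      import numpy as np

      def knn_predict(X_train, y_train, x_query, k=5, alpha=1.0):
          """Predict a property as the exponentially distance-weighted mean over the k nearest neighbours."""
          dist = np.linalg.norm(X_train - x_query, axis=1)
          idx = np.argsort(dist)[:k]                       # the k nearest neighbours
          w = np.exp(-alpha * dist[idx])                   # exponential distance weighting
          return np.sum(w * y_train[idx]) / np.sum(w)

      rng = np.random.default_rng(6)
      X_train = rng.standard_normal((1000, 50))            # molecular descriptors (placeholder)
      y_train = rng.uniform(250, 550, 1000)                # melting points in K (placeholder)
      print(knn_predict(X_train, y_train, X_train[0]))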

  11. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs are following the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data shows the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives. During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.

  12. Nested Taylor decomposition in multivariate function decomposition

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2014-12-01

    The Fluctuationlessness approximation applied to the remainder term of a Taylor decomposition expressed in integral form has already been used in many articles. Some forms of multi-point Taylor expansion have also been considered in several articles. This work is, in a sense, a combination of these: the Taylor decomposition of a function is taken with the remainder expressed in integral form. The integrand is then expanded in a Taylor series again, not necessarily around the same point as the first decomposition, and a second remainder is obtained. After the necessary change of variables and conversion of the integration limits to the universal [0,1] interval, a multiple-integration system formed by a multivariate function is obtained. The Fluctuationlessness approximation is then applied to each of these integrals one by one, yielding better results than the single-node Taylor decomposition to which Fluctuationlessness is applied.
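
    For reference, the single-node Taylor decomposition with the remainder in integral form, which the nested scheme above applies a second time to the integrand, is the standard identity (written here in LaTeX notation):

      f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^{k} + \frac{1}{n!} \int_{a}^{x} (x-t)^{n} f^{(n+1)}(t)\, dt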

  13. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710

  14. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations that are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  15. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis

    NASA Astrophysics Data System (ADS)

    Portes, Leonardo L.; Aguirre, Luis A.

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011), 10.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.
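
    For orientation, Kaiser's unstructured varimax (the UVR baseline mentioned above), phrased as successive SVD-based updates of a single orthogonal rotation, can be sketched as follows; the structured variant additionally constrains the rotation to act blockwise on the components of each subsystem, which is not shown here, and the loading matrix is a random placeholder.

      import numpy as np

      def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-8):
          """Plain (unstructured) varimax rotation of a loading matrix Phi via SVD updates."""
          p, k = Phi.shape
          R = np.eye(k)
          d = 0.0
          for _ in range(max_iter):
              L = Phi @ R
              grad = Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
              u, s, vt = np.linalg.svd(grad)
              R = u @ vt                                   # closest orthogonal matrix to the criterion gradient
              d_old, d = d, np.sum(s)
              if d_old != 0 and d / d_old < 1 + tol:       # stop when the criterion stalls
                  break
          return Phi @ R

      rotated = varimax(np.random.default_rng(11).standard_normal((50, 4)))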

  16. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis.

    PubMed

    Portes, Leonardo L; Aguirre, Luis A

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011)PLEEE81539-375510.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA. PMID:27300889

  17. Building optimal regression tree by ant colony system-genetic algorithm: application to modeling of melting points.

    PubMed

    Hemmateenejad, Bahram; Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Akhond, Morteza

    2011-10-17

    Classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. A conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), which is a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for the modeling of melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation operators) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was carried out by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of approximately 4173 structures and their melting points were used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure. PMID:21907021

  18. a New Control Points Based Geometric Correction Algorithm for Airborne Push Broom Scanner Images Without On-Board Data

    NASA Astrophysics Data System (ADS)

    Strakhov, P.; Badasen, E.; Shurygin, B.; Kondranin, T.

    2016-06-01

    Push-broom scanners, such as video spectrometers (also called hyperspectral sensors), are widely used at present. Use of the scanned images requires accurate geometric correction, which becomes complicated when the imaging platform is airborne. This work contains a detailed description of a new algorithm developed for the processing of such images. The algorithm requires only user-provided control points and is able to correct distortions caused by yaw, flight speed and height changes. It was tested on two series of airborne images and yielded RMS error values on the order of 7 meters (3-6 source image pixels), compared to 13 meters for polynomial-based correction.

  19. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantages of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  20. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  1. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    PubMed

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  2. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
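
    A small illustration of the quasi-random versus pseudorandom comparison mentioned above: pricing a European call under Black-Scholes by Monte Carlo, once with NumPy's pseudorandom generator and once with a scrambled Sobol sequence from SciPy, against the analytic reference. All market parameters are arbitrary.

      import numpy as np
      from scipy.stats import norm, qmc

      S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
      n = 2 ** 14

      def price(u):
          """Discounted mean call payoff from uniform samples u in (0, 1)."""
          z = norm.ppf(u)                                   # uniforms -> standard normals
          ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
          return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

      u_pseudo = np.random.default_rng(7).uniform(size=n)
      u_sobol = qmc.Sobol(d=1, scramble=True, seed=7).random(n).ravel()
      print(price(u_pseudo), price(u_sobol))                # the Sobol estimate is typically closer to the analytic value

      d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
      print(S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T)))   # analytic reference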

  3. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D. E-mail: emanouil@parallel.bas.bg Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  4. A uniform energy consumption algorithm for wireless sensor and actuator networks based on dynamic polling point selection.

    PubMed

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2013-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  5. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    NASA Astrophysics Data System (ADS)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching and partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SPs) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires the information of the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar4-Ar30) in which argon atoms interact via a Lennard-Jones potential, and (iii) ArmX (m=12) clusters where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Arn clusters.
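
    The following toy sketch illustrates the general idea of a real-coded genetic algorithm driving a Lennard-Jones cluster towards its minimum-energy geometry; it omits the coordinate stretching, string partitioning and saddle-point search described above, and the population size, generation count and mutation width are invented values.

        import numpy as np

        rng = np.random.default_rng(0)

        def lj_energy(x):
            """Lennard-Jones energy (reduced units) of a flat array of 3D coordinates."""
            pts = x.reshape(-1, 3)
            d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
            r = d[np.triu_indices(len(pts), k=1)]
            return float(np.sum(4.0 * (r**-12 - r**-6)))

        def ga_minimize(n_atoms=4, pop=60, gens=300, box=2.0, sigma=0.05):
            """Real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
            dim = 3 * n_atoms
            population = rng.uniform(-box, box, size=(pop, dim))
            for _ in range(gens):
                fitness = np.array([lj_energy(ind) for ind in population])
                new_pop = [population[fitness.argmin()].copy()]      # elitism
                while len(new_pop) < pop:
                    i, j = rng.integers(pop, size=2)
                    a = population[i] if fitness[i] < fitness[j] else population[j]
                    i, j = rng.integers(pop, size=2)
                    b = population[i] if fitness[i] < fitness[j] else population[j]
                    w = rng.random(dim)
                    child = w * a + (1.0 - w) * b                     # blend crossover
                    child += rng.normal(0.0, sigma, size=dim)         # mutation
                    new_pop.append(child)
                population = np.array(new_pop)
            best = min(population, key=lj_energy)
            return best.reshape(-1, 3), lj_energy(best)

        if __name__ == "__main__":
            coords, energy = ga_minimize()
            # The global minimum for 4 atoms is -6.0 in reduced units (regular tetrahedron).
            print("best Ar4 energy found (reduced units):", round(energy, 3))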

  6. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  7. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a single dataset whose particularities could influence the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. Datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
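
    Plane segmentation of indoor point clouds is often realized with a RANSAC-style fit; the sketch below is one hypothetical variant run on synthetic data and is not the authors' implementation.

        import numpy as np

        def ransac_plane(points, n_iter=500, tol=0.02, rng=np.random.default_rng(1)):
            """Return (unit normal, d, inlier mask) of the dominant plane n.x + d = 0."""
            best_inliers = np.zeros(len(points), dtype=bool)
            best_model = None
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                       # degenerate (collinear) sample
                    continue
                normal /= norm
                d = -normal.dot(sample[0])
                inliers = np.abs(points @ normal + d) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (normal, d)
            return best_model[0], best_model[1], best_inliers

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            floor = np.column_stack([rng.uniform(0, 5, 2000), rng.uniform(0, 5, 2000),
                                     rng.normal(0.0, 0.005, 2000)])     # noisy z = 0 plane
            clutter = rng.uniform(0, 5, size=(300, 3))                  # furniture-like noise
            n, d, mask = ransac_plane(np.vstack([floor, clutter]))
            print("plane normal:", np.round(n, 3), "inliers:", int(mask.sum()))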

  8. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Due to the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. Firstly, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, deploying input impedance conversion to achieve working-voltage adjustment. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulation of the PV model, the boost converter control strategy and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
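
    A minimal perturb-and-observe loop with an adaptive duty-ratio step is sketched below to make the control idea concrete; the toy power curve, gain, step bounds and iteration count are invented placeholders rather than the PV model or tuning from the thesis.

        def adaptive_p_and_o(pv_power, d0=0.5, base_step=0.01, k=0.05,
                             max_step=0.05, d_min=0.05, d_max=0.95, iterations=200):
            """Perturb-and-observe on the duty ratio with a power-slope-scaled step."""
            d, prev_d = d0, d0 - base_step
            prev_p = pv_power(prev_d)
            for _ in range(iterations):
                p = pv_power(d)
                dp, dd = p - prev_p, d - prev_d
                # Step size grows with |dP/dD| far from the MPP and shrinks near it.
                step = min(base_step + k * abs(dp / dd) if dd != 0 else base_step, max_step)
                direction = 1.0 if dp * dd > 0 else -1.0   # classic P&O direction rule
                prev_p, prev_d = p, d
                d = min(max(d + direction * step, d_min), d_max)
            return d

        if __name__ == "__main__":
            # Toy power curve with a single maximum around d = 0.62 (placeholder, not a PV model).
            toy_curve = lambda d: 100.0 - 400.0 * (d - 0.62) ** 2
            print("duty ratio found:", round(adaptive_p_and_o(toy_curve), 3))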

  9. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  10. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  11. An Algorithm for Correcting CTE Loss in Spectrophotometry of Point Sources with the STIS CCD

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph; Goudfrooij, Paul

    2003-08-01

    The correction for the change in sensitivity with time for the STIS CCD modes is complicated by the gradual loss of charge transfer efficiency (CTE) of the CCD. The amount of this CTE loss depends on time in orbit, the location on the CCD chip with respect to the readout amplifier, the stellar signal strength, and the background level. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (tungsten lamp images taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual stellar spectra in the first order CCD modes. The main complication is the quantification of the roll-off of the CTE losses for weak stellar signals on non-zero backgrounds. This roll-off term is determined by relatively short exposures of primary standard stars along with the G750L series of properly exposed AGK+81D266 monitoring data, where the observed changes in response over time are primarily CTE losses and not sensitivity degradations. After accounting for CTE losses and after an iterative determination of the optical system throughput losses, the CTE correction algorithm is verified by comparing G230L MAMA fluxes of faint standard stars with G430L fluxes in the overlap region around 3000Å. For spectra at the standard reference position at the CCD center, CTE losses as big as 20% are corrected to within 1% at high signal levels and with a precision of ~2% at ~100 electrons after application of the algorithm presented here.

  12. An Evaluation of Vegetation Filtering Algorithms for Improved Snow Depth Estimation from Point Cloud Observations in Mountain Environments

    NASA Astrophysics Data System (ADS)

    Vanderjagt, B. J.; Durand, M. T.; Lucieer, A.; Wallace, L.

    2014-12-01

    High-resolution snow depth measurements are possible through bare-earth (BE) differencing of point cloud datasets obtained using LiDAR and photogrammetry during snow-free and snow-covered conditions. The accuracy and resolution of these snow depth measurements are desirable in mountain environments in which ground measurements are dangerous and difficult to perform, and other remote sensing techniques are often characterized by large errors and uncertainties due to variable topography, vegetation, and snow properties. BE ground filtering algorithms make different assumptions about ground characteristics to differentiate between ground and non-ground features. Because of this, ground surfaces may have unique characteristics that confound ground filters depending on the location and terrain conditions. These include low-lying shrubs (<1 m), areas with high topographic relief, and areas with high surface roughness. We evaluate several different algorithms, including lowest point, kriging, and more sophisticated splining techniques such as the Multiscale Curvature Classification (MCC), to resolve snow depths. Understanding how these factors affect BE surface models and thus snow depth measurements is a valuable contribution towards improving the processing protocols associated with these relatively new snow observation techniques. We test the different BE filtering algorithms using LiDAR and photogrammetric measurements taken from an Unmanned Aerial Vehicle (UAV) in Southwest Tasmania, Australia during the winter and spring of 2013. The study area is characterized by sloping, uneven terrain and different types of vegetation, including eucalyptus and conifer trees, as well as dense shrubs varying in height from 0.3-1.5 meters. Initial snow depth measurements using the unfiltered point cloud measurements are characterized by large errors (~20-90 cm) due to the dense vegetation. Using filtering techniques instead of raw differencing improves the estimation of snow depth in
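
    The simplest filter in the comparison, a lowest-point grid filter followed by bare-earth differencing, can be sketched as follows; the cell size, vegetation heights and snow depth in this synthetic example are invented.

        import numpy as np

        def lowest_point_surface(points, cell=1.0):
            """Grid the XY plane and keep the lowest Z per cell as a crude bare-earth model."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            surface = {}
            for key, z in zip(map(tuple, ij), points[:, 2]):
                if key not in surface or z < surface[key]:
                    surface[key] = z
            return surface

        def snow_depth(snow_on, snow_off, cell=1.0):
            """Difference two lowest-point surfaces over their common cells."""
            on, off = lowest_point_surface(snow_on, cell), lowest_point_surface(snow_off, cell)
            return {c: on[c] - off[c] for c in set(on) & set(off)}

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            xy = rng.uniform(0, 20, size=(5000, 2))
            ground = np.zeros(len(xy))
            shrubs = rng.uniform(0.0, 1.2, len(xy)) * (rng.random(len(xy)) < 0.3)  # sparse shrubs
            snow_off = np.column_stack([xy, ground + shrubs])
            snow_on = np.column_stack([xy, ground + 0.5 + shrubs * 0.2])           # ~0.5 m of snow
            depths = np.array(list(snow_depth(snow_on, snow_off).values()))
            print("median depth estimate (m):", round(float(np.median(depths)), 2))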

  13. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    SciTech Connect

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill; Chand, Kyle

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
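
    A schematic version of error-controlled snapshot selection is shown below, assuming the selection criterion is the relative projection error of each new state onto the current POD basis; this recomputes a full SVD instead of the single-pass incremental update referenced above, and the travelling-wave data is synthetic.

        import numpy as np

        def adaptive_snapshots(states, tol=1e-2):
            """Keep a snapshot only if the current POD basis reconstructs it poorly."""
            kept = [states[0]]
            basis, _, _ = np.linalg.svd(np.column_stack(kept), full_matrices=False)
            for u in states[1:]:
                residual = u - basis @ (basis.T @ u)          # projection error onto span(basis)
                if np.linalg.norm(residual) > tol * np.linalg.norm(u):
                    kept.append(u)
                    basis, _, _ = np.linalg.svd(np.column_stack(kept), full_matrices=False)
            return np.column_stack(kept), basis

        if __name__ == "__main__":
            # Synthetic travelling-wave states standing in for Burgers' solution snapshots.
            x = np.linspace(0.0, 1.0, 200)
            states = [np.tanh((x - 0.2 - 0.6 * t) / 0.05) for t in np.linspace(0, 1, 400)]
            snapshots, basis = adaptive_snapshots(states, tol=5e-2)
            print("kept", snapshots.shape[1], "of", len(states), "snapshots;",
                  "basis rank", basis.shape[1])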

  14. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs

  15. An Automatic Algorithm for Minimizing Anomalies and Discrepancies in Point Clouds Acquired by Laser Scanning Technique

    NASA Astrophysics Data System (ADS)

    Bordin, Fabiane; Gonzaga, Luiz, Jr.; Galhardo Muller, Fabricio; Veronez, Mauricio Roberto; Scaioni, Marco

    2016-06-01

    Laser scanning technique from airborne and land platforms has been largely used for collecting 3D data in large volumes in the field of geosciences. Furthermore, the laser pulse intensity has been widely exploited to analyze and classify rocks and biomass, and for carbon storage estimation. In general, a laser beam is emitted, collides with targets and only a percentage of the emitted beam returns, according to intrinsic properties of each target. Also, due to interferences and partial collisions, the laser return intensity can be incorrect, introducing serious errors in classification and/or estimation processes. To address this problem and avoid misclassification and estimation errors, we have proposed a new algorithm to correct return intensity for laser scanning sensors. Different case studies have been used to evaluate and validate the proposed approach.

  16. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  17. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES Beta

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  18. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
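
    A toy version of the probabilistic idea is sketched below, assuming independent Gaussian likelihoods per feature and returning class probabilities rather than hard labels; the feature names and training numbers are fabricated placeholders, not M-PACE observations.

        import numpy as np

        class GaussianBayesPhase:
            """Naive Bayes with Gaussian per-feature likelihoods and per-class priors."""

            def fit(self, X, y):
                self.classes = np.unique(y)
                self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
                self.sd = np.array([X[y == c].std(axis=0) + 1e-6 for c in self.classes])
                self.prior = np.array([(y == c).mean() for c in self.classes])
                return self

            def predict_proba(self, X):
                # log P(class | x) up to a constant, then normalize per sample.
                logp = np.stack([
                    np.log(p) - 0.5 * np.sum(((X - m) / s) ** 2 + 2 * np.log(s), axis=1)
                    for m, s, p in zip(self.mu, self.sd, self.prior)], axis=1)
                w = np.exp(logp - logp.max(axis=1, keepdims=True))
                return w / w.sum(axis=1, keepdims=True)

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            # Fabricated two-feature training set: (reflectivity dBZ, Doppler spectrum width m/s).
            ice = rng.normal([-5.0, 0.2], [4.0, 0.1], size=(300, 2))
            liquid = rng.normal([-25.0, 0.4], [4.0, 0.1], size=(300, 2))
            X = np.vstack([ice, liquid]); y = np.array([0] * 300 + [1] * 300)
            model = GaussianBayesPhase().fit(X, y)
            print(model.predict_proba(np.array([[-10.0, 0.25]])).round(3))   # [P(ice), P(liquid)]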

  19. ParaStream: A parallel streaming Delaunay triangulation algorithm for LiDAR points on multicore architectures

    NASA Astrophysics Data System (ADS)

    Wu, Huayi; Guan, Xuefeng; Gong, Jianya

    2011-09-01

    This paper presents a robust parallel Delaunay triangulation algorithm called ParaStream for processing billions of points from nonoverlapped block LiDAR files. The algorithm targets ubiquitous multicore architectures. ParaStream integrates streaming computation with a traditional divide-and-conquer scheme, in which additional erase steps are implemented to reduce the runtime memory footprint. Furthermore, a kd-tree-based dynamic schedule strategy is also proposed to distribute triangulation and merging work onto the processor cores for improved load balance. ParaStream exploits most of the computing power of multicore platforms through parallel computing, demonstrating qualities of high data throughput as well as a low memory footprint. Experiments on a 2-Way-Quad-Core Intel Xeon platform show that ParaStream can triangulate approximately one billion LiDAR points (16.4 GB) in about 16 min with only 600 MB physical memory. The total speedup (including I/O time) is about 6.62 with 8 concurrent threads.

  20. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.

  1. Industrial experience of process identification and set-point decision algorithm in a full-scale treatment plant.

    PubMed

    Yoo, Changkyoo; Kim, Min Han

    2009-06-01

    This paper presents industrial experience of process identification, monitoring, and control in a full-scale wastewater treatment plant. The objectives of this study were (1) to apply and compare different process-identification methods of proportional-integral-derivative (PID) autotuning for stable dissolved oxygen (DO) control, (2) to implement a process monitoring method that estimates the respiration rate simultaneously during the process-identification step, and (3) to propose a simple set-point decision algorithm for determining the appropriate set point of the DO controller for optimal operation of the aeration basin. The proposed method was evaluated in the industrial wastewater treatment facility of an iron- and steel-making plant. Among the process-identification methods, the control signal of the controller's set-point change was best for identifying low-frequency information and enhancing the robustness to low-frequency disturbances. Combined automatic control and set-point decision method reduced the total electricity consumption by 5% and the electricity cost by 15% compared to the fixed gain PID controller, when considering only the surface aerators. Moreover, as a result of improved control performance, the fluctuation of effluent quality decreased and overall effluent water quality was better. PMID:19428173
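
    For readers unfamiliar with the loop being tuned, a generic discrete PID controller is sketched below; the gains, sampling time and first-order aeration model are placeholders and do not reflect the plant or autotuning method of the study.

        class PID:
            """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_error = 0.0, 0.0

            def step(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        if __name__ == "__main__":
            # Toy first-order DO response to aeration input u (placeholder dynamics).
            do, setpoint, dt = 1.0, 2.0, 60.0          # mg/L, mg/L, seconds
            controller = PID(kp=0.8, ki=0.002, kd=0.0, dt=dt)
            for _ in range(60):                        # simulate one hour
                u = controller.step(setpoint, do)
                do += dt * (-0.01 * do + 0.01 * max(u, 0.0))   # crude aeration model
            print("DO after 1 h:", round(do, 2), "mg/L")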

  2. Therapy Algorithm for Portal Vein Thrombosis in Liver Cirrhosis: The Internist's Point of View

    PubMed Central

    Rössle, Martin; Bausch, Birke; Klinger, Christoph

    2014-01-01

    Background Treatment of non-malignant portal vein thrombosis (PVT) in patients with cirrhosis has been neglected in the past because of the fear of bleeding complications when using anticoagulation and due to the technical difficulties associated with the implantation of the transjugular intrahepatic portosystemic shunt (TIPS). However, PVT has a negative impact on outcome and compromises liver transplantation, warranting treatment by using anticoagulation and TIPS. Methods This review considers studies on the treatment of PVT in cirrhosis published in the last 10 years. Unfortunately, many of these studies are limited by their retrospective design and a small sample size. Results Anticoagulation using low-molecular-weight heparin (LMWH) or vitamin K antagonists is effective in the treatment of patients with limited and recent PVT, resulting in a recanalization in up to 50% of the patients. TIPS (plus local measures) results in a recanalization of up to 100% and reduces the rebleeding rate considerably in patients with recent or chronic PVT. Conclusion Based on the presently limited knowledge, a therapy algorithm is suggested favouring the TIPS as a first-line treatment for PVT in patients with symptomatic portal hypertension. Patients with thus far asymptomatic portal hypertension may first receive anticoagulation, preferably using LMWH. If these patients have a condition where anticoagulation is not promising (complete, extended, chronic PVT) or ineffective, or if they are candidates for liver transplantation, the TIPS may be implanted without delay. PMID:26288607

  3. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquake are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.

  4. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current increased widespread interest in the development and applications of micro/nanosatellites, it was found that we needed to design a small high accuracy satellite attitude determination system, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A Sun sensor + magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy, and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small high accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixels sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. Then the sun angles can be derived by analyzing the aperture image location on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  5. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current increased widespread interest in the development and applications of micro/nanosatellites, it was found that we needed to design a small high accuracy satellite attitude determination system, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A Sun sensor + magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy, and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small high accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixels sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. Then the sun angles can be derived by analyzing the aperture image location on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  6. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  7. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data

    PubMed Central

    Banda, Jorge A.; Haydel, K. Farish; Davila, Tania; Desai, Manisha; Haskell, William L.; Matheson, Donna; Robinson, Thomas N.

    2016-01-01

    Objective To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). Methods 268 7–11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4–7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. Results WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). Conclusions The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy. PMID:26938240
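
    The interaction between epoch length and cut-points can be demonstrated in a few lines: the sketch below re-integrates 1-second counts to longer epochs and applies a single illustrative MVPA cut-point scaled to the epoch length; the cut-point value and the synthetic counts are placeholders, not the published thresholds or study data.

        import numpy as np

        def reintegrate(counts_1s, epoch_s):
            """Sum 1-second accelerometer counts into epochs of the given length."""
            n = len(counts_1s) // epoch_s * epoch_s
            return counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)

        def mvpa_minutes(counts_1s, epoch_s, cutpoint_per_min=2296):
            """Minutes classified as MVPA after scaling the cut-point to the epoch length."""
            epochs = reintegrate(counts_1s, epoch_s)
            threshold = cutpoint_per_min * epoch_s / 60.0
            return (epochs >= threshold).sum() * epoch_s / 60.0

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            # One synthetic hour: mostly sedentary with two short activity bursts.
            counts = rng.poisson(5, 3600).astype(float)
            counts[600:660] += 80; counts[1800:1830] += 120
            for epoch in (1, 5, 15, 60):
                print(f"epoch {epoch:2d}s -> MVPA minutes: {mvpa_minutes(counts, epoch):.2f}")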

  8. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  9. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  10. [Comparative Study on the Three Algorithms of T-wave End Detection: Wavelet Method, Cumulative Points Area Method and Trapezium Area Method].

    PubMed

    Li, Chengtao; Zhang, Yongliang; He, Zijun; Ye, Jun; Hu, Fusong; Ma, Zuchang; Wang, Jingzhi

    2015-12-01

    In order to find the most suitable algorithm of T-wave end point detection for clinical detection, we tested three methods, which are not just dependent on the threshold value of T-wave end point detection, i.e. wavelet method, cumulative point area method and trapezium area method, in PhysioNet QT database (20 records with 3 569 beats each). We analyzed and compared their detection performance. First, we used the wavelet method to locate the QRS complex and T-wave. Then we divided the T-wave into four morphologies, and we used the three algorithms mentioned above to detect T-wave end point. Finally, we proposed an adaptive selection T-wave end point detection algorithm based on T-wave morphology and tested it with experiments. The results showed that this adaptive selection method had better detection performance than that of the single T-wave end point detection algorithm. The sensitivity, positive predictive value and the average time errors were 98.93%, 99.11% and (-2.33 ± 19.70) ms, respectively. Consequently, it can be concluded that the adaptive selection algorithm based on T-wave morphology improves the efficiency of T-wave end point detection. PMID:27079084

  11. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  12. DHARMA - Discriminant hyperplane abstracting residuals minimization algorithm for separating clusters with fuzzy boundaries. [data points pattern recognition technique

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    Learning of discriminant hyperplanes in imperfectly supervised or unsupervised training sample sets with unreliably labeled samples along the fuzzy joint boundaries between sample clusters is discussed, with the discriminant hyperplane designed to be a least-squares fit to the unreliably labeled data points. (Samples along the fuzzy boundary jump back and forth from one cluster to the other in recursive cluster stabilization and are considered unreliably labeled.) Minimization of the distances of these unreliably labeled samples from the hyperplanes does not sacrifice the ability to discriminate between classes represented by reliably labeled subsets of samples. An equivalent unconstrained linear inequality problem is formulated and algorithms for its solution are indicated. Landsat earth sensing data were used in confirming the validity and computational feasibility of the approach, which should be useful in deriving discriminant hyperplanes separating clusters with fuzzy boundaries, given supervised training sample sets with unreliably labeled boundary samples.
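
    A generic least-squares linear discriminant (fit to ±1 labels) conveys the flavour of the approach, though it is not the DHARMA residual-minimization formulation itself; the two overlapping clusters below are synthetic.

        import numpy as np

        def least_squares_hyperplane(X, labels):
            """Fit w, b so that sign(X @ w + b) approximates the +/-1 labels (least squares)."""
            A = np.column_stack([X, np.ones(len(X))])           # augment with bias column
            coef, *_ = np.linalg.lstsq(A, labels.astype(float), rcond=None)
            return coef[:-1], coef[-1]                          # (w, b)

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            a = rng.normal([0.0, 0.0], 0.7, size=(200, 2))      # cluster A
            b = rng.normal([3.0, 3.0], 0.7, size=(200, 2))      # cluster B, fuzzy overlap with A
            X = np.vstack([a, b]); y = np.array([-1] * 200 + [1] * 200)
            w, bias = least_squares_hyperplane(X, y)
            accuracy = (np.sign(X @ w + bias) == y).mean()
            print("training accuracy:", round(float(accuracy), 3))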

  13. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods discussed. © 1992.

  14. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or different morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. In view of these practical problems, the objective is to extract tree crown projection area and crown volume automatically by a computer program. This paper proposes an automatic, non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120) using a plane-scattered-data-point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments are carried out on 22 common tree species of Beijing, China. The results show that the correlation coefficient of the crown projection between Av calculated by the new method and the conventional method A4 reaches 0.964 (p<0.01); and the correlation coefficient of tree crown volume between V(VC) derived from the new method and V(C) by the formula of a regular body is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) at the rate of 8.03%, and the average of A4 is larger than that of A(V) at the rate of 25.5%. Assuming Av and V(VC) as true values, the deviations of the new method could be attributed to irregularity of the crowns' silhouettes. Different morphological characteristics of tree crown led to measurement error in forest simple plot survey. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
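
    A minimal numerical sketch of the two measures is given below, assuming scipy's ConvexHull for the projected crown area and a horizontal slice-accumulation for the crown volume; the synthetic ellipsoidal "crown" merely stands in for real scanner data.

        import numpy as np
        from scipy.spatial import ConvexHull

        def crown_projection_area(points):
            """Area of the convex hull of the points projected onto the XY plane."""
            hull = ConvexHull(points[:, :2])
            return hull.volume            # for a 2D hull, .volume is the enclosed area

        def crown_volume_by_slices(points, slice_height=0.25):
            """Sum (slice hull area) * (slice thickness) over horizontal slices."""
            z = points[:, 2]
            volume = 0.0
            for z0 in np.arange(z.min(), z.max(), slice_height):
                sl = points[(z >= z0) & (z < z0 + slice_height)]
                if len(sl) >= 4:
                    volume += ConvexHull(sl[:, :2]).volume * slice_height
            return volume

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            # Synthetic crown: points filling an ellipsoid with semi-axes 2, 2, 3 (metres).
            p = rng.normal(size=(8000, 3)); p /= np.linalg.norm(p, axis=1, keepdims=True)
            p *= rng.random((8000, 1)) ** (1 / 3); p *= [2.0, 2.0, 3.0]
            print("projection area (m^2):", round(crown_projection_area(p), 2))        # ~ pi*2*2
            print("slice-accumulated volume (m^3):", round(crown_volume_by_slices(p), 2))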

  15. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  16. CD4 Count Outperforms World Health Organization Clinical Algorithm for Point-of-Care HIV Diagnosis among Hospitalized HIV-exposed Malawian Infants

    PubMed Central

    Maliwichi, Madalitso; Rosenberg, Nora E.; Macfie, Rebekah; Olson, Dan; Hoffman, Irving; van der Horst, Charles M.; Kazembe, Peter N.; Hosseinipour, Mina C.; McCollum, Eric D.

    2014-01-01

    Objective To determine, for the WHO algorithm for point-of-care diagnosis of HIV infection, the agreement levels between pediatricians and non-physician clinicians, and to compare sensitivity and specificity profiles of the WHO algorithm and different CD4 thresholds against HIV PCR testing in hospitalized Malawian infants. Methods In 2011, hospitalized HIV-exposed infants <12 months in Lilongwe, Malawi were evaluated independently with the WHO algorithm by both a pediatrician and clinical officer. Blood was collected for CD4 and molecular HIV testing (DNA or RNA PCR). Using molecular testing as the reference, sensitivity, specificity, and positive predictive value (PPV) were determined for the WHO algorithm and CD4 count thresholds of 1500 and 2000 cells/mm3 by pediatricians and clinical officers. Results We enrolled 166 infants (50% female, 34% <2 months, 37% HIV-infected). Sensitivity was higher using CD4 thresholds (<1500, 80%; <2000, 95%) than with the algorithm (physicians, 57%; clinical officers, 71%). Specificity was comparable for CD4 thresholds (<1500, 68%, <2000, 50%) and the algorithm (pediatricians, 55%, clinical officers, 50%). The positive predictive values were slightly better using CD4 thresholds (<1500, 59%, <2000, 52%) than the algorithm (pediatricians, 43%, clinical officers 45%) at this prevalence. Conclusion Performance by the WHO algorithm and CD4 thresholds resulted in many misclassifications. Point-of-care CD4 thresholds of <1500 cells/mm3 or <2000 cells/mm3 could identify more HIV-infected infants with fewer false positives than the algorithm. However, a point-of-care option with better performance characteristics is needed for accurate, timely HIV diagnosis. PMID:24754543
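
    The reported performance measures follow directly from a 2x2 contingency table; the helper below computes sensitivity, specificity and positive predictive value from predicted and reference labels, using made-up counts rather than the study data.

        def diagnostic_metrics(predicted, reference):
            """Sensitivity, specificity and PPV from boolean predicted/reference labels."""
            tp = sum(p and r for p, r in zip(predicted, reference))
            tn = sum(not p and not r for p, r in zip(predicted, reference))
            fp = sum(p and not r for p, r in zip(predicted, reference))
            fn = sum(not p and r for p, r in zip(predicted, reference))
            return {"sensitivity": tp / (tp + fn),
                    "specificity": tn / (tn + fp),
                    "ppv": tp / (tp + fp)}

        if __name__ == "__main__":
            # Fabricated toy labels (True = infected / screen-positive), not study data.
            reference = [True] * 30 + [False] * 70
            predicted = [True] * 27 + [False] * 3 + [True] * 28 + [False] * 42
            print(diagnostic_metrics(predicted, reference))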

  17. Free Shape Context Descriptors Optimized with Genetic Algorithm for the Detection of Dead Tree Trunks in ALS Point Clouds

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-08-01

    In this paper, a new family of shape descriptors called Free Shape Contexts (FSC) is introduced to generalize the existing 3D Shape Contexts. The FSC introduces more degrees of freedom than its predecessor by allowing the level of complexity to vary between its parts. Also, each part of the FSC has an associated activity state which controls whether the part can contribute a feature value. We describe a method of evolving the FSC parameters for the purpose of creating highly discriminative features suitable for detecting specific objects in sparse point clouds. The evolutionary process is built on a genetic algorithm (GA) which optimizes the parameters with respect to cross-validated overall classification accuracy. The GA manipulates both the structure of the FSC and the activity flags, allowing it to perform an implicit feature selection alongside the structure optimization by turning off segments which do not augment the discriminative capabilities. We apply the proposed descriptor to the problem of detecting single standing dead tree trunks from ALS point clouds. The experiment, carried out on a set of 285 objects, reveals that an FSC optimized through a GA with manually tuned recombination parameters is able to attain a classification accuracy of 84.2%, yielding an increase of 4.2 pp compared to features derived from eigenvalues of the 3D covariance matrix. Also, we address the issue of automatically tuning the GA recombination metaparameters. For this purpose, a fuzzy logic controller (FLC) which dynamically adjusts the magnitude of the recombination effects is co-evolved with the FSC parameters in a two-tier evolution scheme. We find that it is possible to obtain an FLC which retains the classification accuracy of the manually tuned variant, thereby limiting the need for guessing the appropriate meta-parameter values.

  18. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    Energy Science and Technology Software Center (ESTSC)

    2012-05-31

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.

  19. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search the critical points in electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the time elapsed by both implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other used for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We found that the GPU dedicated to video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes against the HPC GPU, in particular when single precision is used. PMID:25345784
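
    On a grid, candidate critical points can be flagged where the numerical gradient norm attains a local minimum; the NumPy sketch below applies this idea to an analytic two-Gaussian stand-in for an electron density and is far simpler (and CPU-only) than the GPU implementation described above.

        import numpy as np

        def grid_critical_points(f, lo=-3.0, hi=3.0, n=200):
            """Flag grid nodes where |grad f| is a strict local minimum over 8 neighbours."""
            xs = np.linspace(lo, hi, n)
            X, Y = np.meshgrid(xs, xs, indexing="ij")
            F = f(X, Y)
            gx, gy = np.gradient(F, xs, xs)
            g = np.hypot(gx, gy)
            hits = []
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    patch = g[i - 1:i + 2, j - 1:j + 2]
                    if g[i, j] == patch.min() and np.count_nonzero(patch == patch.min()) == 1:
                        hits.append((xs[i], xs[j]))
            return hits

        if __name__ == "__main__":
            # Two Gaussian "atoms": expect two maxima plus a saddle point between them.
            rho = lambda x, y: np.exp(-((x - 1) ** 2 + y ** 2)) + np.exp(-((x + 1) ** 2 + y ** 2))
            for x, y in grid_critical_points(rho):
                print(f"candidate critical point near ({x:+.2f}, {y:+.2f})")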

  20. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P.; Tits, Andre

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  1. Bridging Proper Orthogonal Decomposition methods and augmented Newton-Krylov algorithms: an adaptive model order reduction for highly nonlinear mechanical problems

    PubMed Central

    Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.

    2013-01-01

    This article describes a bridge between POD-based model order reduction techniques and classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688

  2. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is applied in the rigid initialization step to obtain a robust rigid transformation and label configurations. The labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are then used to resolve the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieves an average registration error of less than 0.2 mm with no failure case, which is superior to state-of-the-art feature-based methods. PMID:26881433

  3. Fixed-point single-precision estimation. [Kalman filtering for NASA Standard Spacecraft Computer orbit determination algorithm

    NASA Technical Reports Server (NTRS)

    Thompson, E. H.; Farrell, J. L.

    1976-01-01

    Monte Carlo simulation of autonomous orbit determination has validated the use of an 18-bit NASA Standard Spacecraft Computer (NSSC) for the extended Kalman filter. Dimensionally consistent scales are chosen for all variables in the algorithm, such that nearly all of the onboard computation can be performed in single precision without matrix square root formulations. Allowable simplifications in algorithm implementation and practical means of ensuring convergence are verified for accuracies of a few km provided by star/vertical observations.

  4. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small-scale surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources to capture actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this regard, the Iterative Closest Point (ICP) algorithm is an alignment tool which iteratively minimizes the distances between corresponding points in two datasets. Even though the tool is widely used, it is often applied as a black box within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. To this end, an agriculturally utilized field was surveyed simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors two times (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that data merging would provide a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration and therefore
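    For orientation, a bare-bones rigid ICP loop is sketched below: nearest-neighbour matching with a KD-tree and an SVD-based (Kabsch) rigid fit, iterated to convergence. This is a generic textbook skeleton on synthetic data, not the specific variants evaluated in the study; the "TLS-like" and "UAV-like" clouds are hypothetical stand-ins.

```python
# Bare-bones rigid ICP: nearest-neighbour matching with a KD-tree and an
# SVD-based (Kabsch) rigid fit, iterated. Generic textbook skeleton only --
# the study above compares refinements (point selection, weighting, rejection,
# alternative error metrics) built on top of this.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(cur)   # closest-point correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
dst = rng.normal(size=(500, 3))                        # "TLS-like" reference cloud
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = dst @ R_true.T + np.array([0.3, -0.1, 0.2])      # offset "UAV-like" cloud
aligned = icp(src, dst)
print("mean residual:", np.linalg.norm(aligned - dst, axis=1).mean())
```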

  5. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because the environment stiffness affects the contact between the robot end-effector and the environment, an adaptive generalized predictive control (GPC) algorithm based on quantitative feedback theory (QFT) is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to handle the uncertainty of the system, while an adaptive GPC algorithm is used to design the external loop controller to improve the performance and robustness of the system. The two closed loops used in the design approach realize the desired performance and improve the robustness. Simulation results show that the proposed control algorithm for the robot end-effector contact force system is effective. PMID:24973336

  6. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION

    EPA Science Inventory

    The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...

  7. Study of the decomposition of SF6 under dc negative polarity corona discharges (point-to-plane geometry): Influence of the metal constituting the plane electrode

    NASA Astrophysics Data System (ADS)

    Casanovas, A. M.; Casanovas, J.; Lagarde, F.; Belarbi, A.

    1992-10-01

    SF6 samples (PSF6=100 or 200 kPa) were submitted to point-to-plane dc negative polarity corona discharges in the presence of water [concentration=2000 ppmv (parts per million by volume)] or without the addition of water. The stable gaseous byproducts formed (SO2F2, SOF2, and S2F10) were assayed by gas-phase chromatography. The variation of their yields against the charge transported (up to 10 C) was studied for two metals (aluminum and stainless steel) constituting the plane electrode, at various values of the SF6 pressure, the water content, the gap spacing (2.5 and 8 mm), and the discharge current [12≤Ī (μA)≤25]. The results indicate an important effect of the metal constituting the plane electrode and of the moisture conditions, particularly on the production of SOF2 and S2F10.

  8. Convergence Analysis of a Domain Decomposition Paradigm

    SciTech Connect

    Bank, R E; Vassilevski, P S

    2006-06-12

    We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.

  9. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

    A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently follows first-order kinetics. Recommended values for the Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases and for decomposition of RDX in solution in TNT are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
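    For orientation, the first-order rate law and the Arrhenius form of the rate constant referred to above are the standard expressions (generic definitions only, not values recommended in the review):

```latex
% First-order decomposition with an Arrhenius rate constant
\frac{d[\mathrm{HMX}]}{dt} = -k(T)\,[\mathrm{HMX}], \qquad
k(T) = A \exp\!\left(-\frac{E_a}{RT}\right), \qquad
\ln k = \ln A - \frac{E_a}{RT}
% where A is the frequency factor and E_a the activation energy.
```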

  10. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  11. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
    • It provides easy management of connectivity levels in the resulting voxels.
    • It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate into any application.
    • One of the algorithms is implemented in C++ and C for platform independence and efficiency. PMID:27408832
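    A minimal illustration of the point-cloud branch of the idea (not the authors' C++/C library): points are binned into a boolean occupancy grid at a chosen voxel size.

```python
# Naive occupancy voxelization of a point cloud: map each point to an integer
# voxel index at a chosen resolution and mark that voxel as occupied.
# Illustrative sketch only -- it does not reproduce the connectivity-level
# handling of the topological voxelization method, nor the curve/surface cases.
import numpy as np

def voxelize_points(points, voxel_size):
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 10, size=(10_000, 3))     # stand-in for a city point cloud
grid, origin = voxelize_points(cloud, voxel_size=0.5)
print(grid.shape, int(grid.sum()), "occupied voxels")
```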

  12. An algorithm for approximating the L* invariant coordinate from the real-time tracing of one magnetic field line between mirror points

    NASA Astrophysics Data System (ADS)

    Lejosne, Solène

    2014-08-01

    The L* invariant coordinate depends on the global electromagnetic field topology at a given instance, and the standard method for its determination requires a computationally expensive drift contour tracing. This fact makes L* a cumbersome parameter to handle. In this paper, we provide new insights on the L* parameter, and we introduce an algorithm for an L* approximation that only requires the real-time tracing of one magnetic field line between mirror points. This approximation is based on the description of the variation of the magnetic field mirror intensity after an adiabatic dipolarization, i.e., after the nondipolar components of a magnetic field have been turned off with a characteristic time very long in comparison with the particles' drift periods. The corresponding magnetic field topological variations are deduced, assuming that the field line foot points remain rooted in the Earth's surface, and the drift average operator is replaced with a computationally cheaper circular average operator. The algorithm results in a relative difference of at most 12% between the approximate L* and the output obtained using the International Radiation Belt Environment Modeling library, in the case of the Tsyganenko 89 model for the external magnetic field (T89). This margin of error is similar to the margin of error due to small deviations between different magnetic field models at geostationary orbit. This approximate L* algorithm therefore represents a reasonable compromise between computational speed and accuracy, of particular interest for real-time space weather forecast purposes.

  13. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  14. Adaptive neuro-fuzzy inference system multi-objective optimization using the genetic algorithm/singular value decomposition method for modelling the discharge coefficient in rectangular sharp-crested side weirs

    NASA Astrophysics Data System (ADS)

    Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed

    2016-06-01

    In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS results section (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined in five different models to conduct sensitivity analysis by applying the above-mentioned dimensionless parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs.

  15. Quantitative analysis of triazine herbicides in environmental samples by using high performance liquid chromatography and diode array detection combined with second-order calibration based on an alternating penalty trilinear decomposition algorithm.

    PubMed

    Li, Yuan-Na; Wu, Hai-Long; Qing, Xiang-Dong; Li, Quan; Li, Shu-Fang; Fu, Hai-Yan; Yu, Yong-Jie; Yu, Ru-Qin

    2010-09-23

    A novel application of a second-order calibration method based on an alternating penalty trilinear decomposition (APTLD) algorithm is presented to treat data from high performance liquid chromatography with diode array detection (HPLC-DAD). The method makes it possible to accurately and reliably analyze atrazine (ATR), ametryn (AME) and prometryne (PRO) contents in soil, river sediment and wastewater samples. Satisfactory results are obtained although the elution and spectral profiles of the analytes are heavily overlapped with the background in environmental samples. The obtained average recoveries for ATR, AME and PRO are 99.7±1.5, 98.4±4.7 and 97.0±4.4% in soil samples, 100.1±3.2, 100.7±3.4 and 96.4±3.8% in river sediment samples, and 100.1±3.5, 101.8±4.2 and 101.4±3.6% in wastewater samples, respectively. Furthermore, the accuracy and precision of the proposed method are evaluated with the elliptical joint confidence region (EJCR) test. The method opens a new avenue for the quantitative determination of herbicides in environmental samples with a simple pretreatment procedure and provides a scientific basis for improved environmental management through a better understanding of the wastewater-soil-river sediment system as a whole. PMID:20869500

  16. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The linear systems resulting from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  17. Algorithms for Collision Detection Between a Point and a Moving Polygon, with Applications to Aircraft Weather Avoidance

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Hagen, George

    2016-01-01

    This paper proposes mathematical definitions of functions that can be used to detect future collisions between a point and a moving polygon. The intended application is weather avoidance, where the given point represents an aircraft and bounding polygons are chosen to model regions with bad weather. Other applications could possibly include avoiding other moving obstacles. The motivation for the functions presented here is safety, and therefore they have been proved to be mathematically correct. The functions are being developed for inclusion in NASA's Stratway software tool, which allows low-fidelity air traffic management concepts to be easily prototyped and quickly tested.
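    One standard way to make such a detection function concrete, for the special case of a convex polygon translating at constant velocity, is to work in the polygon's frame and intersect the time intervals during which the relatively moving point satisfies each edge's half-plane constraint. The sketch below illustrates that idea only; it is not the formally verified Stratway functions described in the paper.

```python
# Time window during which a fixed point lies inside a convex, counter-clockwise
# polygon translating at constant velocity, found by intersecting per-edge
# half-plane time intervals. Generic sketch of one common approach, not the
# verified algorithms of the paper.
def collision_interval(point, polygon, velocity, horizon):
    px, py = point
    vx, vy = velocity
    t_lo, t_hi = 0.0, horizon
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        dx, dy = bx - ax, by - ay                 # edge direction (CCW)
        # Inside-test value at relative position p - v*t is linear in t:
        #   f(t) = cross(d, (p - a) - v*t) = c - m*t, and we need f(t) >= 0.
        c = dx * (py - ay) - dy * (px - ax)
        m = dx * vy - dy * vx
        if m > 0:
            t_hi = min(t_hi, c / m)
        elif m < 0:
            t_lo = max(t_lo, c / m)
        elif c < 0:                               # never inside this half-plane
            return None
        if t_lo > t_hi:                           # empty intersection: no conflict
            return None
    return t_lo, t_hi

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # CCW unit square
# Square drifts right at speed 1; the point sits at (3, 0.5): conflict for t in [2, 3].
print(collision_interval((3.0, 0.5), square, (1.0, 0.0), horizon=10.0))
```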

  18. Domain decomposition for the SPN solver MINOS

    SciTech Connect

    Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques

    2012-07-01

    In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nedelec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3 (R) code. (authors)

  19. Binary matrices, decomposition and multiply-add architectures

    NASA Astrophysics Data System (ADS)

    Sarukhanian, Hakob; Agaian, Sos S.; Astola, Jaakko T.; Egiazarian, Karen O.

    2003-05-01

    Binary matrices or (+/-1)-matrices have found numerous applications in coding, signal processing, and communications. In this paper, a general and efficient algorithm for the decomposition of binary matrices is developed. As a special case, Hadamard matrices are considered. The proposed scheme requires no zero padding of the input data. The problem of constructing the 4n-point Hadamard transform is related to the Hadamard problem: the question of the existence of Hadamard matrices. (It has not been proved that for every integer n there exists an orthogonal 4n×4n matrix with elements +/-1.) The number of real operations in the developed algorithms is reduced from O(N^2) to O(N log2 N). Comparisons revealing the efficiency of the proposed algorithms with respect to known ones are given. In particular, it is demonstrated that, in typical applications, the proposed algorithm is more efficient than the conventional Walsh-Hadamard transform. Note that for Hadamard matrices of order >=96 the general algorithm is more efficient than the classical Walsh-Hadamard transform, whose order is a power of two. The algorithm has a simple and symmetric structure. The results of numerical examples are presented.
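    For orders that are a power of two, the O(N log2 N) operation count mentioned above is realized by the familiar fast Walsh-Hadamard butterfly, sketched below for reference; the 4n-point construction that is the subject of the paper is not reproduced here.

```python
# Classical in-place fast Walsh-Hadamard transform for length N = 2^k, i.e. the
# O(N log2 N) baseline against which more general 4n-point constructions are
# compared. Reference sketch only.
import numpy as np

def fwht(signal):
    a = np.asarray(signal, dtype=float).copy()
    n = a.size
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):          # butterfly over blocks of 2h
            x = a[start:start + h].copy()
            y = a[start + h:start + 2 * h].copy()
            a[start:start + h] = x + y
            a[start + h:start + 2 * h] = x - y
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
print(fwht(x))        # unnormalized Walsh-Hadamard spectrum of x
```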

  20. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradients. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, namely the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.

  1. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    SciTech Connect

    Inoue, Minoru; Yoshimura, Michio; Sato, Sayaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Hirata, Kimiko; Ogura, Masakazu; Hiraoka, Masahiro; Sasaki, Makoto; Fujimoto, Takahiro

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, the manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.

  2. Gas leak localization and detection method based on a multi-point ultrasonic sensor array with TDOA algorithm

    NASA Astrophysics Data System (ADS)

    Tao, Wang; Dongying, Wang; Yu, Pei; Wei, Fan

    2015-09-01

    To address the difficulty that current ultrasonic gas leak detection and localization systems have in determining and locating the leak position, this paper presents an improved multi-array ultrasonic gas leak TDOA (time difference of arrival) localization and detection method. This method involves arranging ultrasonic transducers at equal intervals in a high-sensitivity detector array, using small differences in ultrasonic sound intensity to determine the scope of the leak and generate a rough localization, and then using an array TDOA localization algorithm to determine the precise leak location. This method is then implemented in an ultrasonic leak detection and localization system. Experimental results showed that the TDOA localization method, which uses auxiliary sound intensity factors to avoid dependence on a single sound intensity measurement for determining the leak size and location, achieved a localization error of less than 2 mm. The validity and correctness of this approach were thus verified.
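    The TDOA step itself can be illustrated with a generic nonlinear least-squares solver on the range-difference equations; the sensor layout, speed of sound, and reference sensor below are hypothetical stand-ins, not the paper's detector array.

```python
# Generic TDOA localization: each sensor's arrival-time difference relative to
# a reference sensor constrains the difference of ranges to the source; the
# source position is recovered by nonlinear least squares. Hypothetical 2-D
# layout for illustration, not the ultrasonic array of the paper.
import numpy as np
from scipy.optimize import least_squares

c = 343.0                                            # speed of sound in air, m/s
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])
source_true = np.array([0.31, 0.12])

ranges = np.linalg.norm(sensors - source_true, axis=1)
tdoa = (ranges - ranges[0]) / c                      # measured w.r.t. sensor 0

def residuals(p):
    r = np.linalg.norm(sensors - p, axis=1)
    return (r - r[0]) / c - tdoa

solution = least_squares(residuals, x0=np.array([0.25, 0.25]))
print("estimated source position:", solution.x)      # close to (0.31, 0.12)
```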

  3. Proper orthogonal decomposition of flow-field in non-stationary geometry

    NASA Astrophysics Data System (ADS)

    Troshin, Victor; Seifert, Avi; Sidilkover, David; Tadmor, Gilead

    2016-04-01

    The current paper outlines a proper orthogonal decomposition (POD) methodology for a flow field in a domain with moving boundaries. In the standard POD approach, the properties of the region of the domain which is alternately occupied by fluid and solid are not defined. Here, prior to the decomposition, the domain with moving or deforming boundaries is mapped to a stationary domain using a volume preserving mapping. This mapping was created by combining a transfinite interpolation and a volume adjustment algorithm. The algorithm is based on an iterative solution of the Laplace equation with respect to the displacement potential of the grid points. Finally, the method is demonstrated on CFD results of a pitching and plunging ellipse in still fluid.

  4. Proper Orthogonal Decomposition of Flow-Field in Non-Stationary Geometry

    NASA Astrophysics Data System (ADS)

    Troshin, Victor; Seifert, Avraham; Sidilkover, David; Tadmor, Gilead

    2015-11-01

    This work presents a proper orthogonal decomposition (POD) methodology for a flow field in a domain with moving boundaries. A relatively simple volume preserving mapping which transforms a deforming domain to a stationary one is described. This mapping was created by combining a transfinite interpolation and a volume adjustment algorithm. The algorithm is based on an iterative solution of the Laplace equation with respect to the displacement potential of the grid points. The transformed domain is suitable for the proper orthogonal decomposition procedure. The presented mapping can be applied to a wide variety of flow problems which contain single or, in some cases, multiple deforming boundaries. Currently, this method is presented for 2D geometries; however, it can be expanded to 3D cases. This approach can assist in the creation of low order models for complex aero-elastic systems which to date could not be analysed by existing POD approaches. Finally, the method is demonstrated on CFD results of a pitching and plunging ellipse in still fluid.

  5. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytical Cracking Process and Large Scale Geophysical Fluids.

    NASA Astrophysics Data System (ADS)

    Martin, R.; Gonzalez Ortiz, A.

    momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. To further reduce the computational cost, a decomposition of the global domain into N subdomains is introduced, and all the previous algorithms applied to one block are performed in each block. At the interface between subdomains, an overlapping procedure is used. Another advantage is that different sets of equations can be solved in each block, such as fluid/structure interactions for instance. We show here the hydrodynamics of a two-phase flow in a vertical conduit as in industrial plants of fluid catalytical cracking processes with a complex geometry. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls and gravity effects begin to dominate, inducing an increase of particulate volume fractions near the walls. We show here the hydrodynamics for 13 s.

  6. Feature-Based Quality Evaluation of 3d Point Clouds - Study of the Performance of 3d Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Ridene, T.; Goulette, F.; Chendeb, S.

    2013-08-01

    The production of realistic 3D map databases is continuously growing. We studied an approach to producing 3D mapping databases based on the fusion of heterogeneous 3D data, for which a rigid registration process was performed. Before starting the modeling process, we need to validate the quality of the registration results, which is one of the most difficult and open research problems. In this paper, we suggest a new method for the evaluation of 3D point clouds based on feature extraction and comparison with a 2D reference model. This method is based on two metrics: binary and fuzzy.

  7. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three dimensional coordinates of three fiducial markers inside the prostate were calculated. The three dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error was improved for real-time calculation of tumor displacement from a mean of 0.97 mm with the stand alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real

  8. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three dimensional coordinates of three fiducial markers inside the prostate were calculated. The three dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error was improved for real-time calculation of tumor displacement from a mean of 0.97 mm with the stand alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real

  9. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  10. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we are seeking a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency, while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1,...,4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.

  11. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  12. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  13. A new eddy-covariance method using empirical mode decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We introduce a new eddy-covariance method that uses a spectral decomposition algorithm called empirical mode decomposition. The technique is able to calculate contributions to near-surface fluxes from different periodic components. Unlike traditional Fourier methods, this method allows for non-ortho...

  14. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  15. Protein Domain Decomposition Using a Graph-Theoretic Approach

    SciTech Connect

    Xu, Y.; Xu, D.; Gabow, H.N.

    2000-08-20

    This paper presents a new algorithm for the decomposition of a multi-domain protein into individual structural domains. The underlying principle used is that residue-residue contacts are denser within a domain than between domains.

  16. Autonomous Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-04-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  17. Target Decomposition Techniques & Role of Classification Methods for Landcover Classification

    NASA Astrophysics Data System (ADS)

    Singh, Dharmendra; Mittal, Gunjan

    Target decomposition techniques aim at analyzing the received scattering matrix from polarimetric data to extract information about the scattering processes. Incoherent techniques have been modeled in recent years to provide a more general approach for the decomposition of natural targets. Therefore, there is a need to study and critically analyze the developing models for their suitability in the classification of land covers. Moreover, the classification methods used for the segmentation of various land covers from the decomposition techniques need to be examined, as the appropriate selection of these methods affects the performance of the decomposition techniques for land cover classification. Therefore, in the present paper, an attempt is made to check the performance of various model-based decomposition techniques and an eigenvector-based decomposition technique for the decomposition of polarimetric PALSAR (Phased Array type L-band SAR) data. A few generic supervised classifiers were used for the classification of the decomposed images into three broad classes: water, urban and agricultural land. For this purpose, the algorithms were applied twice to pre-processed PALSAR raw data: once to spatially averaged data (mean filtering with a 3×3 window), and once to data multilooked in the azimuth direction by six looks and then filtered using the Wishart Gamma MAP filter with a 5×5 window. Classification of the decomposed images from each of the methods was carried out using four supervised classifiers (parallelepiped, minimum distance, Mahalanobis and maximum likelihood). Ground truth data generated with the help of ground survey points, a topographic sheet and Google Earth were used for the computation of classification accuracy. The parallelepiped classifier gave better classification accuracy for the water class for all models excluding H/A/Alpha. The minimum distance classifier gave better classification results for the urban class. The maximum likelihood classifier performed well compared to the other classifiers for the classification of the vegetation class.

  18. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. PMID:23702438

  19. Image encryption using P-Fibonacci transform and decomposition

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Agaian, Sos; Chen, C. L. Philip

    2012-03-01

    Image encryption is an effective method to protect images or videos by transferring them into unrecognizable formats for different security purposes. To improve the security level of bit-plane decomposition based encryption approaches, this paper introduces a new image encryption algorithm by using a combination of parametric bit-plane decomposition along with bit-plane shuffling and resizing, pixel scrambling and data mapping. The algorithm utilizes the Fibonacci P-code for image bit-plane decomposition and the 2D P-Fibonacci transform for image encryption because they are parameter dependent. Any new or existing method can be used for shuffling the order of the bit-planes. Simulation analysis and comparisons are provided to demonstrate the algorithm's performance for image encryption. Security analysis shows the algorithm's ability against several common attacks. The algorithm can be used to encrypt images, biometrics and videos.
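    For orientation, the conventional binary bit-plane split of an 8-bit image looks as follows; the scheme above replaces these planes with parameter-dependent Fibonacci P-code planes and adds the 2D P-Fibonacci transform, neither of which is reproduced in this sketch.

```python
# Conventional binary bit-plane decomposition of an 8-bit image, shown only to
# fix ideas; it is not the parametric Fibonacci P-code decomposition used by
# the encryption algorithm above.
import numpy as np

img = np.random.default_rng(0).integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = [(img >> b) & 1 for b in range(8)]       # planes[0] = least significant bit

reconstructed = sum(p.astype(np.uint16) << b for b, p in enumerate(planes))
assert np.array_equal(reconstructed, img)         # the decomposition is lossless
print(np.stack(planes).shape)                     # (8, 4, 4)
```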

  20. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  1. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download. PMID:26465549

  2. Anisotropic finite strain viscoelasticity based on the Sidoroff multiplicative decomposition and logarithmic strains

    NASA Astrophysics Data System (ADS)

    Latorre, Marcos; Montáns, Francisco Javier

    2015-09-01

    In this paper a purely phenomenological formulation and finite element numerical implementation for quasi-incompressible transversely isotropic and orthotropic materials is presented. The stored energy is composed of distinct anisotropic equilibrated and non-equilibrated parts. The nonequilibrated strains are obtained from the multiplicative decomposition of the deformation gradient. The procedure can be considered as an extension of the Reese and Govindjee framework to anisotropic materials and reduces to such formulation for isotropic materials. The stress-point algorithmic implementation is based on an elastic-predictor viscous-corrector algorithm similar to that employed in plasticity. The consistent tangent moduli for the general anisotropic case are also derived. Numerical examples explain the procedure to obtain the material parameters, show the quadratic convergence of the algorithm and usefulness in multiaxial loading. One example also highlights the importance of prescribing a complete set of stress-strain curves in orthotropic materials.

  3. Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Dawson, Scott; Hemati, Maziar; Williams, Matthew; Rowley, Clarence

    2014-11-01

    Dynamic mode decomposition (DMD) provides a powerful means of extracting insightful dynamical information from fluids datasets. Like any data processing technique, DMD's usefulness relies on its ability to extract real and accurate dynamical features from noise-corrupted data. Here we show analytically that sensor noise can bias the results (eigenvalues and modes) of the DMD algorithm. This bias can be accurately predicted, to the point that we may derive an analytic correction factor that facilitates its removal. We propose a number of additional modifications to the DMD algorithm that reduce or eliminate this bias, even when the noise characteristics are unknown. We demonstrate the performance of these modifications on a range of synthetic, numerical, and experimental datasets, and also compare and integrate our modified algorithms with other DMD variants proposed in recent literature. This work was supported by the Air Force Office of Scientific Research, under Award No. FA9550-12-1-0075.
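    For reference, the uncorrected exact-DMD computation that such noise analyses start from fits in a few lines; the sketch below is the textbook algorithm on toy data, not the authors' bias-corrected variants.

```python
# Standard "exact DMD": SVD of the first snapshot matrix, projected linear
# operator, eigen-decomposition. This is the baseline whose noise-induced
# eigenvalue bias the work above characterizes; the correction itself is not
# implemented here.
import numpy as np

def dmd(X, r):
    """X: (n_states, n_snapshots) data matrix; r: truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
    Atilde = U.conj().T @ X2 @ V / s              # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ V / s @ W                        # exact DMD modes
    return eigvals, modes

dt = 0.1
t = np.arange(100) * dt
x = np.linspace(0, 1, 200)[:, None]
# toy data: two decaying travelling waves, effective rank 4
X = np.real(np.exp(2j * np.pi * x) * np.exp((3j - 0.1) * t)
            + 0.5 * np.exp(5j * x) * np.exp((7j - 0.3) * t))
eigvals, modes = dmd(X, r=4)
print(np.log(eigvals) / dt)    # continuous-time rates, approx. -0.1±3i and -0.3±7i
```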

  4. Analysis and Application of LIDAR Waveform Data Using a Progressive Waveform Decomposition Method

    NASA Astrophysics Data System (ADS)

    Zhu, J.; Zhang, Z.; Hu, X.; Li, Z.

    2011-09-01

    Due to the rich information contained in full-waveform airborne LiDAR (light detection and ranging) data, the analysis of full waveforms has become an active area of LiDAR research. It is possible to digitally sample and store the entire reflected waveform of small-footprint systems instead of only discrete point clouds. Decomposition of waveform data, a key step in waveform data analysis, falls into two typical categories: 1) Gaussian modelling methods, such as non-linear least-squares (NLS) fitting and maximum likelihood estimation using the Expectation-Maximization (EM) algorithm; and 2) pulse detection methods, such as the Average Square Difference Function (ASDF). However, the Gaussian modelling methods rely strongly on initial parameters, whereas the ASDF overlooks the parametric information in the waveform. In this paper, we propose a fast algorithm, the Progressive Waveform Decomposition (PWD) method, which extracts local maxima, fits each echo with a Gaussian function, and calculates the remaining parameters from the raw waveform data. On the one hand, experiments are implemented to evaluate the PWD method and the results demonstrate its robustness and efficiency. On the other hand, with the PWD parametric analysis of the full waveform instead of a 3D point cloud, some special applications are investigated afterward.
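
    A minimal sketch of the general echo-fitting idea follows: seed one Gaussian per detected local maximum and then refine all parameters jointly. This is not the authors' PWD algorithm; the peak threshold, initial width and helper names are assumptions chosen for the synthetic example.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def gaussian_mixture(t, *params):
    """Sum of Gaussians; params = (A1, mu1, sigma1, A2, mu2, sigma2, ...)."""
    y = np.zeros_like(t, dtype=float)
    for A, mu, sigma in zip(params[0::3], params[1::3], params[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

def decompose_waveform(t, w, min_amplitude=5.0, init_width=2.0):
    """Seed one Gaussian per detected local maximum, then refine all
    amplitudes, positions and widths jointly by nonlinear least squares."""
    peaks, _ = find_peaks(w, height=min_amplitude)
    p0 = []
    for i in peaks:                          # initial guesses from the raw echo
        p0 += [w[i], t[i], init_width]
    popt, _ = curve_fit(gaussian_mixture, t, w, p0=p0, maxfev=10000)
    return popt.reshape(-1, 3)               # rows: (amplitude, center, width)

# Synthetic two-echo waveform with additive noise.
t = np.linspace(0, 100, 512)
w = 60 * np.exp(-0.5 * ((t - 30) / 3) ** 2) + 25 * np.exp(-0.5 * ((t - 55) / 4) ** 2)
w += np.random.default_rng(1).normal(0.0, 1.0, t.size)
print(decompose_waveform(t, w))              # two rows, close to (60,30,3), (25,55,4)
```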

  5. Dynamic reconstruction of sub-sampled data using Optimal Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Krol, Jakub; Wynn, Andrew

    2015-11-01

    The Nyquist-Shannon criterion indicates the sample rate necessary to identify information with particular frequency content from a dynamical system. However, in experimental applications such as the interrogation of a flow field using Particle Image Velocimetry (PIV), it may be expensive to obtain data at the desired temporal resolution. To address this problem, we propose a new approach to identify temporal information from undersampled data, using ideas from modal decomposition algorithms such as Dynamic Mode Decomposition (DMD) and Optimal Mode Decomposition (OMD). The novel method takes a vector-valued signal sampled at random time instances (but at a sub-Nyquist rate) and projects it onto a low-order subspace. Subsequently, dynamical characteristics are identified by iteratively approximating the flow evolution with a low-order model and solving a convex optimization problem. Furthermore, it is shown that constraints may be added to the optimization problem to improve the spatial resolution of missing data points. The methodology is demonstrated on two dynamical systems, a cylinder flow at Re = 60 and the Kuramoto-Sivashinsky equation. In both cases the algorithm correctly identifies the characteristic frequencies and oscillatory structures present in the flow.

  6. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
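
    As a concrete, minimal illustration of the domain decomposition idea (not taken from the reference above), the sketch below runs a damped overlapping additive Schwarz iteration for the 1D Poisson problem; the discretization, overlap width and damping factor are arbitrary choices for the example. In practice such an iteration is more often used as a preconditioner for a Krylov method, as the abstract notes.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 3-point finite-difference Laplacian with Dirichlet BCs."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def additive_schwarz(f, n=99, overlap=5, iters=200):
    """Damped overlapping additive Schwarz iteration for -u'' = f on (0,1)
    with u(0) = u(1) = 0, using two subdomains."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    A, b = poisson_matrix(n, h), f(x)
    mid = n // 2
    subdomains = [np.arange(0, mid + overlap), np.arange(mid - overlap, n)]
    u = np.zeros(n)
    for _ in range(iters):
        r = b - A @ u
        du = np.zeros(n)
        for idx in subdomains:                # independent local solves
            du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        u += 0.5 * du                         # damping keeps the iteration convergent
    return x, u

x, u = additive_schwarz(lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small, discretization-level error
```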

  7. Image super-resolution based on image adaptive decomposition

    NASA Astrophysics Data System (ADS)

    Xie, Qiwei; Wang, Haiyan; Shen, Lijun; Chen, Xi; Han, Hua

    2011-11-01

    In this paper we propose an image super-resolution algorithm based on a Gaussian Mixture Model (GMM) and a new adaptive image decomposition algorithm. The new decomposition algorithm uses the local extrema of the image to extract its cartoon and oscillating parts. We first decompose an image into oscillating and piecewise smooth (cartoon) parts, then enlarge the cartoon part with interpolation. Because the GMM accurately characterizes the oscillating part, we specify it as the prior distribution and formulate the image super-resolution problem as a constrained optimization problem to recover the enlarged texture part, and finally we obtain a fine result.

  8. The comparison of algorithms for key points extraction in simplification of hybrid digital terrain models. (Polish Title: Porównanie algorytmów ekstrakcji punktów istotnych w upraszczaniu numerycznych modeli terenu o strukturze hybrydowej)

    NASA Astrophysics Data System (ADS)

    Bakuła, K.

    2014-12-01

    The presented research concerns methods related to the reduction of elevation data contained in a digital terrain model (DTM) from airborne laser scanning (ALS) in hydraulic modelling. The reduction is necessary in the preparation of large datasets of geospatial data describing terrain relief. It should not be carried out as regular data filtering, which often occurs in practice, since such an approach misses a number of forms that are important for hydraulic modelling. One of the proposed solutions for the reduction of elevation data contained in a DTM is to change the regular grid into a hybrid structure with regularly distributed points and irregularly located critical points. The purpose of this paper is to compare algorithms for extracting these key points from the DTM. They are used in hybrid model generation as a part of the elevation data reduction process that retains DTM accuracy and reduces the size of output files. In the experiments, the following algorithms were tested: Topographic Position Index (TPI), Very Important Points (VIP) and Z-tolerance. Their effectiveness in reduction (maintaining accuracy while reducing the datasets) was evaluated with respect to the input DTM from ALS. The best results were obtained for the Z-tolerance algorithm, but they do not diminish the capabilities of the other two algorithms, VIP and TPI, which can also generalize the DTM quite well. The results confirm the possibility of obtaining a high degree of reduction, reaching only a few percent of the input data, with a relatively low decrease of vertical DTM accuracy of a few centimetres.
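
    The sketch below illustrates the Z-tolerance idea on a 1D terrain profile: greedily keep the samples whose elevation the simplified model cannot reproduce within a tolerance. It is a simplified stand-in, assuming a 1D profile and linear interpolation rather than the TIN-based 2.5D procedure used for real DTMs.

```python
import numpy as np

def z_tolerance_profile(x, z, tol):
    """Greedy key-point selection on a 1D terrain profile: repeatedly add the
    sample with the largest vertical deviation from the piecewise-linear model
    built on the points kept so far, until every deviation is below tol."""
    keep = {0, len(x) - 1}                    # always keep the end points
    while True:
        idx = sorted(keep)
        model = np.interp(x, x[idx], z[idx])  # current simplified surface
        err = np.abs(z - model)
        worst = int(np.argmax(err))
        if err[worst] < tol:
            return np.array(idx)
        keep.add(worst)

# Synthetic profile: a smooth hill plus a sharp break line at x = 60.
x = np.linspace(0, 100, 1001)
z = 5.0 * np.sin(x / 15.0) + np.where(x > 60, 3.0, 0.0)
idx = z_tolerance_profile(x, z, tol=0.25)
print(f"kept {idx.size} of {x.size} points")  # strong reduction, break line retained
```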

  9. Analyzing algorithms for nonlinear and spatially nonuniform phase shifts in the liquid crystal point diffraction interferometer. 1998 summer research program for high school juniors at the University of Rochester`s Laboratory for Laser Energetics: Student research reports

    SciTech Connect

    Jain, N.

    1999-03-01

    Phase-shifting interferometry has many advantages, and the phase-shifting nature of the Liquid Crystal Point Diffraction Interferometer (LCPDI) promises to provide significant improvement over other current OMEGA wavefront sensors. However, while phase-shifting capabilities improve its accuracy as an interferometer, phase shifting itself introduces errors. Phase-shifting algorithms are designed to eliminate certain types of phase-shift errors, and it is important to choose an algorithm that is best suited for use with the LCPDI. Using polarization microscopy, the authors have observed a correlation between LC alignment around the microsphere and fringe behavior. After designing a procedure to compare phase-shifting algorithms, they were able to predict the accuracy of two particular algorithms through computer modeling of device-specific phase-shift errors.
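
    The report does not spell out the two algorithms that were compared, but the following sketch of the standard four-step phase-shifting reconstruction shows the kind of computation such algorithms perform; the synthetic fringe pattern and the perfect pi/2 shifts are assumptions, and real comparisons focus precisely on how the estimate degrades when the shifts are nonlinear or spatially nonuniform.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Standard four-step phase-shifting reconstruction for frames recorded at
    nominal phase shifts of 0, pi/2, pi and 3*pi/2."""
    return np.arctan2(I4 - I2, I1 - I3)       # wrapped phase in (-pi, pi]

# Synthetic fringes: phi is a tilted wavefront, gamma is the fringe contrast.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
phi = 6 * np.pi * x + 2 * np.pi * y**2
I0, gamma = 1.0, 0.8
frames = [I0 * (1 + gamma * np.cos(phi + d)) for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
phi_hat = four_step_phase(*frames)
print(np.allclose(np.angle(np.exp(1j * (phi_hat - phi))), 0.0))   # True for ideal shifts
```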

  10. Orthogonal tensor decompositions

    SciTech Connect

    Tamara G. Kolda

    2000-03-01

    The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].

  11. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418
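
    As a generic illustration of the Sobol-Hoeffding machinery invoked above (not the reaction-channel construction of the paper), the sketch below estimates first-order Sobol indices with a pick-freeze Monte Carlo estimator on a simple additive test model; the sample size and test function are arbitrary choices for the example.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, rng=None):
    """Monte Carlo estimate of first-order Sobol indices of f on [0,1]^d using
    the pick-freeze estimator  S_i ~ mean(f(B) * (f(A_B^i) - f(A))) / Var(f),
    where A_B^i equals A except that column i is taken from B."""
    rng = rng or np.random.default_rng(0)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive test model with a known split of the variance between two inputs.
f = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1]
print(first_order_sobol(f, d=2))              # roughly [16/17, 1/17] ~ [0.94, 0.06]
```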

  12. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008

  13. Tomographic resolution without singular value decomposition

    SciTech Connect

    Berryman, J.G.

    1994-06-01

    An explicit procedure is presented for computing both model and data resolution matrices within a Paige-Saunders LSQR algorithm for iterative inversion in seismic tomography. These methods are designed to avoid the need for an additional singular value decomposition of the ray-path matrix. The techniques discussed are completely general since they are based on the multiplicity of equivalent exact formulas that may be used to define the resolution matrices. Thus, resolution matrices may also be computed for a wide variety of iterative inversion algorithms using the same ideas.

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. LU and Cholesky decomposition on an optical systolic array processor

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1983-01-01

    Direct solutions of matrix-vector equations on an optical systolic array processor are considered. The solutions are discussed and a parallel algorithm for LU matrix decomposition that is very attractive for an optical realization is formulated. It is noted that when direct techniques are used, it is preferable to realize the matrix decomposition on an optical system and to utilize a digital processor for the solution of the simplified resultant matrix-vector problem. One method of realizing LU matrix decomposition on a new frequency-multiplexed optical systolic array matrix-matrix processor is described. A simple method for extending the process of LU decomposition to Cholesky decomposition on the optical processor is discussed.
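
    For reference, the following is a conventional (digital) Doolittle LU factorization without pivoting, the computation whose data flow the optical systolic array is designed to carry out; it is a generic textbook sketch, not the optical implementation. For symmetric positive definite matrices a closely related recurrence yields the Cholesky factorization A = L L^T, which is the extension mentioned in the abstract.

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L @ U with a unit
    diagonal in L. Skipping pivoting mirrors the regular data flow a systolic
    array favors, but requires nonsingular leading principal minors."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros_like(A, dtype=float)
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [8.0, 5.0, 7.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))   # True
```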

  16. Edge-preserving smoothing for image decomposition via a hybrid approach

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Liu, Hongzhi; Wu, Zhonghai

    2014-01-01

    Edge-preserving smoothing is crucial for image decomposition to extract the base layer. However, current methods fail to smooth high-contrast details or to preserve thin edges because they rely on a single criterion for distinguishing edges from details. In this paper, we present a hybrid definition of salient edges using two properties: intensity amplitude and oscillation density. Based on this definition, we propose an edge-preserving image smoothing algorithm. First, the local extrema of the input image are located. Then these extrema points are classified as edge or detail points according to the two properties. Third, max and min envelopes are obtained by an optimization process with the edge points as constraints. Lastly, the smoothing result is obtained by an averaging operation. Experimental results show that the proposed method can preserve salient step edges while smoothing high-contrast details and is useful in many applications such as image enhancement and tone mapping.

  17. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  18. NTO decomposition studies

    SciTech Connect

    Oxley, J.C.; Smith, J.L.; Yeager, K.E.; Rogers, E.; Dong, X.X.

    1996-07-01

    To examine the thermal decomposition of 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (NTO) in detail, isotopic labeling studies were undertaken. NTO samples labeled with ¹⁵N in three different locations [N(1) and N(2), N(4), and N(6)] were prepared. Upon thermolysis, the majority of the NTO condensed-phase product was a brown, insoluble residue, but small quantities of 2,4-dihydro-3H-1,2,4-triazol-3-one (TO) and triazole were detected. Gases comprised the remainder of the NTO decomposition products. The analysis of these gases is reported along with mechanistic implications of these observations.

  19. Feature based volume decomposition for automatic hexahedral mesh generation

    SciTech Connect

    LU,YONG; GADH,RAJIT; TAUTGES,TIMOTHY J.

    2000-02-21

    Much progress has been made over the years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry do not yet exist, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped to appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on several complicated manufactured parts.

  20. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in keeping pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a minimum geometric distance as a constraint and simply decomposes polygons within that distance onto different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm [1] relies on simplified assumptions about the optical simulation model and therefore its usage on real layouts is limited. Recently, AMSL [2] also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. The approach [2] also potentially generates too many stitches. In this
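
    The rule-based baseline described above reduces, for DPL, to checking whether the conflict graph is 2-colorable. A minimal sketch of that baseline is given below (BFS bipartite coloring); the model-based criteria advocated in the abstract would replace the purely geometric conflict edges with simulation-driven ones. The feature indexing and the toy edge list are assumptions for the example.

```python
from collections import deque

def two_color(num_features, conflict_edges):
    """Rule-based DPL decomposition: assign each feature to one of two masks so
    that no two features closer than d_min share a mask. Returns the mask list,
    or None if the conflict graph is not 2-colorable (odd conflict cycle)."""
    adj = [[] for _ in range(num_features)]
    for a, b in conflict_edges:
        adj[a].append(b)
        adj[b].append(a)
    mask = [None] * num_features
    for start in range(num_features):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if mask[w] is None:
                    mask[w] = 1 - mask[v]
                    queue.append(w)
                elif mask[w] == mask[v]:
                    return None               # needs TPL, a stitch, or a redesign
    return mask

# Features 0-1-2 form a chain of sub-d_min spacings; feature 3 is isolated.
print(two_color(4, [(0, 1), (1, 2)]))         # e.g. [0, 1, 0, 0]
```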

  1. Linear array for covariance differencing via hyperbolic singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bojanczyk, A. W.; Steinhardt, A. O.

    1989-11-01

    We consider a problem pertaining to bearing estimation in unknown noise using the covariance differencing approach, and propose a linear array of processors which exhibits a linear speed-up with respect to a uniprocessor system. Our solution hinges on a new canonic matrix factorization which we term the hyperbolic singular value decomposition. The parallel algorithm for hyperbolic SVD based bearing estimation is an adaptation of a well known biorthogonalization technique developed by Hestenes. Parallel implementations of the algorithm are based on earlier works on one-sided Jacobi methods. It turns out that strategies for parallelization of Jacobi methods are equally well applicable for computing the hyperbolic singular value decomposition.

  2. A shape decomposition technique in electrical impedance tomography

    SciTech Connect

    Han, D.K.; Prosperetti, A.

    1999-10-10

    Consider a two-dimensional domain containing a medium with unit electrical conductivity and one or more non-conducting objects. The problem considered here is that of identifying the shape and position of the objects on the sole basis of measurements on the external boundary of the domain. An iterative technique is presented in which a sequence of solutions of the direct problem is generated by a boundary element method on the basis of assumed positions and shapes of the objects. The key new aspect of the approach is that the boundary of each object is represented in terms of Fourier coefficients rather than a point-wise discretization. These Fourier coefficients generate the fundamental shapes mentioned in the title, in terms of which the object shape is decomposed. The iterative procedure consists in the successive updating of the Fourier coefficients at every step by means of the Levenberg-Marquardt algorithm. It is shown that the Fourier decomposition, which essentially amounts to a form of image compression, enables the algorithm to image the embedded objects with unprecedented accuracy and clarity. In a separate paper, the method has also been extended to three dimensions with equally good results.

  3. Finite-precision arithmetic in singular-value decomposition architectures

    SciTech Connect

    Duryea, R.A.

    1987-01-01

    The singular-value decomposition (SVD) is an important matrix algorithm that has many applications in signal processing. However, its use has been limited by its computational complexity. Several architectures have been proposed to compute the SVD using arrays of parallel processors. This thesis derives requirements for the precision of arithmetic units (AUs) used in SVD arrays and compares the resource requirements of several architectures. The author's results are based on the assumption of operating on matrices of quantized data. Since the matrices have quantization errors, he shows that their singular values will have quantization errors as large as the data errors. To compute the number of bits needed in SVD AUs, it is required that the AUs have enough bits to keep the round-off errors of the SVD computation smaller than the quantization errors. The analysis shows that essentially the same number of bits is needed for either the Hestenes or the Jacobi SVD algorithm. Five SVD architectures, two linear structures and three quadratic arrays, are described and their resource requirements are compared with floating-point and CORDIC AUs. The comparison shows the total resource requirements of the linear designs to be lower than those of the quadratic arrays for all matrix sizes.

  4. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
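
    As a down-to-earth illustration of the matrix step that tensor ID reduces to, the sketch below computes a randomized interpolative decomposition of a low-rank matrix with SciPy's interpolative module (using, as I recall its interface, interp_decomp, reconstruct_skel_matrix and reconstruct_interp_matrix); the test matrix and tolerance are arbitrary, and this is not the CTD-ID algorithm itself.

```python
import numpy as np
import scipy.linalg.interpolative as sli

# Low-rank test matrix: 200 x 200 with numerical rank about 15.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 200))

# Randomized interpolative decomposition of the matrix A: pick k columns of A
# (the "skeleton" B) and an interpolation matrix P such that A ~ B @ P.
k, idx, proj = sli.interp_decomp(A, 1e-8)        # tolerance-driven rank choice
B = sli.reconstruct_skel_matrix(A, k, idx)       # the k selected columns of A
P = sli.reconstruct_interp_matrix(idx, proj)     # interpolation coefficients
print(k, np.linalg.norm(A - B @ P) / np.linalg.norm(A))   # k ~ 15, tiny error
```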

  5. Hydrazine decomposition and other reactions

    NASA Technical Reports Server (NTRS)

    Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor)

    1978-01-01

    This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts.

  6. Generalized spectral decomposition for stochastic nonlinear problems

    SciTech Connect

    Nouy, Anthony Le Maitre, Olivier P.

    2009-01-10

    We present an extension of the generalized spectral decomposition method for the resolution of nonlinear stochastic problems. The method consists in the construction of a reduced basis approximation of the Galerkin solution and is independent of the stochastic discretization selected (polynomial chaos, stochastic multi-element or multi-wavelets). Two algorithms are proposed for the sequential construction of the successive generalized spectral modes. They involve decoupled resolutions of a series of deterministic and low-dimensional stochastic problems. Compared to the classical Galerkin method, the algorithms allow for significant computational savings and require minor adaptations of the deterministic codes. The methodology is detailed and tested on two model problems, the one-dimensional steady viscous Burgers equation and a two-dimensional nonlinear diffusion problem. These examples demonstrate the effectiveness of the proposed algorithms which exhibit convergence rates with the number of modes essentially dependent on the spectrum of the stochastic solution but independent of the dimension of the stochastic approximation space.

  7. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    SciTech Connect

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces matched with appropriate meshing algorithms, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is a part of the feature-based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination, to extract decomposition features; (2) Cutting Surfaces Generation, to form the "tailored" cutting surfaces; (3) Body Decomposition, to get the imprinted volumes; and (4) Meshing Algorithm Assignment, to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended to be more general. Results are demonstrated on several parts with complicated topology and geometry.

  8. Sparse decomposition learning based dynamic MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Zhu, Peifei; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Dynamic MRI is widely used for many clinical exams, but slow data acquisition remains a serious problem. The application of Compressed Sensing (CS) has demonstrated great potential to increase imaging speed. However, the performance of CS largely depends on the sparsity of the image sequence in the transform domain, where there is still much room for improvement. In this work, the sparsity is exploited by the proposed Sparse Decomposition Learning (SDL) algorithm, which combines a low-rank-plus-sparsity decomposition with Blind Compressed Sensing (BCS). With this decomposition, only the sparse component is modeled as a sparse linear combination of temporal basis functions. This makes the coefficients sparser and retains more details of the dynamic components compared with learning the whole images. Reconstruction is performed on the undersampled data, where joint multicoil data consistency is enforced by combining Parallel Imaging (PI). The experimental results show that the proposed method decreases the mean square error (MSE) by about 15-20% compared to other existing methods.
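
    The sketch below illustrates only the low-rank-plus-sparsity ingredient with a toy alternating proximal scheme (singular-value thresholding for the low-rank part, soft thresholding for the sparse part); it is not the authors' SDL algorithm, and the penalty weights, iteration count and synthetic data are assumptions.

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_plus_sparse(M, tau_nuc=1.0, tau_l1=0.05, iters=100):
    """Alternating proximal minimization of
       0.5*||M - L - S||_F^2 + tau_nuc*||L||_* + tau_l1*||S||_1 :
    the L-step is singular-value thresholding, the S-step is soft thresholding."""
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        U, s, Vh = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft(s, tau_nuc)) @ Vh
        S = soft(M - L, tau_l1)
    return L, S

# Toy "dynamic series": rank-1 static background plus one bright pixel per frame.
rng = np.random.default_rng(0)
background = np.outer(rng.random(64), np.ones(30))
dynamic = np.zeros((64, 30))
dynamic[rng.integers(0, 64, 30), np.arange(30)] = 1.0
L, S = low_rank_plus_sparse(background + dynamic)
print(np.linalg.norm(L - background) / np.linalg.norm(background),  # small
      np.count_nonzero(S > 0.5))                                    # ~30 spikes found
```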

  9. Hierarchical decomposition model for reconfigurable architecture

    NASA Astrophysics Data System (ADS)

    Erdogan, Simsek; Wahab, Abdul

    1996-10-01

    This paper introduces a systematic approach for abstract modeling of VLSI digital systems using a hierarchical decomposition process and an HDL. In particular, the modeling of a back-propagation neural network on massively parallel reconfigurable hardware is used to illustrate the design process, rather than toy examples. Based on the design specification of the algorithm, a functional model is developed through successive refinement and decomposition for execution on the reconfigurable machine. First, a top-level block diagram of the system is derived. Then, a schematic sheet of the corresponding structural model is developed to show the interconnections of the main functional building blocks. Next, the functional blocks are decomposed iteratively as required. Finally, the blocks are modeled using the HDL and verified against the block specifications.

  10. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.

  11. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    SciTech Connect

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-15

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart–Thomas–Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous point of view to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code.

  12. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arise very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  13. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. This is the first time that Schur decomposition has been used to embed a color image watermark in a color host image, which differs from using a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find that there is a strong correlation between the element in the second row, first column and the element in the third row, first column. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step of SVD, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks, including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.

  14. Low complexity interference alignment algorithms for desired signal power maximization problem of MIMO channels

    NASA Astrophysics Data System (ADS)

    Sun, Cong; Yang, Yunchuan; Yuan, Yaxiang

    2012-12-01

    In this article, we investigate the interference alignment (IA) solution for a K-user MIMO interference channel. The users' precoders and decoders are designed through a desired-signal-power maximization model with IA conditions as constraints, which forms a complex matrix optimization problem. We propose two low-complexity algorithms, both of which apply the Courant penalty function technique to combine the leakage interference and the desired signal power into a new objective function. The first proposed algorithm is the modified alternating minimization algorithm (MAMA), where each subproblem has a closed-form solution based on an eigenvalue decomposition. To further reduce algorithm complexity, we propose a hybrid algorithm that consists of two parts. In the first part, the algorithm iterates with Householder transformations to preserve the orthogonality of precoders and decoders. In each iteration, the matrix optimization problem is considered in a sequence of 2D subspaces, which leads to one-dimensional optimization subproblems. From any initial point, this algorithm obtains precoders and decoders with low leakage interference in a short time. In the second part, to exploit the advantage of MAMA, it continues to iterate to perfectly align the interference from the output point of the first part. Analysis shows that, per iteration, both proposed algorithms generally have lower computational complexity than the existing maximum signal power (MSP) algorithm, and the hybrid algorithm enjoys lower complexity than MAMA. Simulations reveal that both proposed algorithms achieve similar performance to the MSP algorithm with less execution time, and show better performance than the existing alternating minimization algorithm in terms of sum rate. Besides, in terms of convergence rate, simulation results show that MAMA reaches a given sum-rate value fastest, while the hybrid algorithm converges fastest in eliminating interference.

  15. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  16. Composite structured mesh generation with automatic domain decomposition in complex geometries

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents a novel automatic domain decomposition method to generate quality composite structured meshes in complex domains with arbitrary shapes, in which quality structured mesh generation still remains a challenge. The proposed decomposition algorithm is based on the analysis of an initi...

  17. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
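
    The paper's specific schemes are not reproduced here, but the following generic sketch shows how central finite-difference weights of arbitrary (even) order can be obtained by solving the Taylor moment conditions, which is the kind of construction such algorithm families build on; the stencil half-width m and derivative order d are the only inputs.

```python
import numpy as np
from math import factorial

def central_diff_weights(m, d=1):
    """Weights c_j for approximating the d-th derivative on the symmetric
    stencil j = -m..m:  f^(d)(x) ~ (1/h**d) * sum_j c_j * f(x + j*h).
    Solves the Taylor moment conditions  sum_j c_j * j**p = d! * delta_{p,d}."""
    offsets = np.arange(-m, m + 1, dtype=float)
    V = np.vander(offsets, increasing=True).T   # V[p, j] = offsets[j]**p
    b = np.zeros(2 * m + 1)
    b[d] = factorial(d)
    return np.linalg.solve(V, b)

print(central_diff_weights(1))   # [-0.5, 0.0, 0.5]  (second-order first derivative)
print(central_diff_weights(3))   # 7-point, sixth-order first-derivative weights
```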

  18. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  19. Detailed Chemical Kinetic Modeling of Hydrazine Decomposition

    NASA Technical Reports Server (NTRS)

    Meagher, Nancy E.; Bates, Kami R.

    2000-01-01

    The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996, however, many of the elementary steps included had outdated rate expressions and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.

  20. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. PMID:24211008

  1. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the
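
    A much-simplified stand-in for the estimator described above is sketched below: an edge-weighted quadratic smoothing problem solved with conjugate gradients on a 1D signal, with uniform data weights in place of the inverse variance-covariance matrix and pre-detected edge locations down-weighting the penalty. All parameter values and helper names are assumptions for the toy example, not the paper's implementation.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def penalized_wls(y, data_weights, edge_weights, lam=5.0):
    """Solve  min_x (x-y)^T W (x-y) + lam * sum_i e_i (x_{i+1}-x_i)^2  with
    conjugate gradients. Down-weighting e_i near detected edges keeps the
    boundaries sharp while smoothing everywhere else (1D stand-in for 2D)."""
    n = y.size
    W = sparse.diags(data_weights)
    D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    E = sparse.diags(edge_weights)
    A = (W + lam * D.T @ E @ D).tocsr()
    x, info = cg(A, W @ y, maxiter=5000)
    assert info == 0
    return x

# Noisy piecewise-constant "decomposed" signal with one sharp boundary.
rng = np.random.default_rng(0)
truth = np.where(np.arange(200) < 100, 1.0, 3.0)
y = truth + 0.3 * rng.standard_normal(200)
edge_w = np.ones(199)
edge_w[95:105] = 1e-3                     # pre-detected edge: barely penalized there
x = penalized_wls(y, np.ones(200), edge_w)
print(np.abs(x - truth).mean())           # noticeably below the 0.3 noise level
```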

  2. Decomposition of Variance for Spatial Cox Processes

    PubMed Central

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2012-01-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558

  3. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: "NIT-picking" (!!!), to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science"/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES (!!!), which ONLY IMPEDE latter-days new-insights!!!

  4. Fulvenallene decomposition kinetics.

    PubMed

    Polino, Daniela; Cavallotti, Carlo

    2011-09-22

    While the decomposition kinetics of the benzyl radical has been studied in depth both from the experimental and the theoretical standpoint, much less is known about the reactivity of what is likely to be its main decomposition product, fulvenallene. In this work the high-temperature reactivity of fulvenallene was investigated on a potential energy surface (PES) consisting of 10 wells interconnected through 11 transition states, using a 1D master equation (ME). Rate constants were calculated using RRKM theory and the ME was integrated using a stochastic kinetic Monte Carlo code. It was found that two main decomposition channels are possible: the first is active on the singlet PES and leads to the formation of the fulvenallenyl radical and atomic hydrogen; the second requires intersystem crossing to the triplet PES and leads to acetylene and cyclopentadienylidene. ME simulations were performed calculating the microcanonical intersystem crossing frequency using Landau-Zener theory, convolving the crossing probability with RRKM rates evaluated at the conical intersection. It was found that the reaction channel leading to the cyclopentadienylidene diradical is only slightly faster than that leading to the fulvenallenyl radical, so that it can be concluded that both reactions are likely to be active in the investigated temperature (1500-2000 K) and pressure (0.05-50 bar) ranges. However, the simulations show that intersystem crossing is rate limiting for the first reaction channel, as the removal of this barrier leads to an increase of the rate constant by a factor of 2-3. Channel-specific rate constants are reported as a function of temperature and pressure. PMID:21819060

  5. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problems of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. To better manage the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure that the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  6. Hydrogen peroxide catalytic decomposition

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor)

    2010-01-01

    Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.

  7. Mode decomposition evolution equations

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one could devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis have been limited to PDE based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be

  8. 3D building reconstruction from ALS data using unambiguous decomposition into elementary structures

    NASA Astrophysics Data System (ADS)

    Jarząbek-Rychard, M.; Borkowski, A.

    2016-08-01

    The objective of the paper is to develop an automated method that enables for the recognition and semantic interpretation of topological building structures. The novelty of the proposed modeling approach is an unambiguous decomposition of complex objects into predefined simple parametric structures, resulting in the reconstruction of one topological unit without independent overlapping elements. The aim of a data processing chain is to generate complete polyhedral models at LOD2 with an explicit topological structure and semantic information. The algorithms are performed on 3D point clouds acquired by airborne laser scanning. The presented methodology combines data-based information reflected in an attributed roof topology graph with common knowledge about buildings stored in a library of elementary structures. In order to achieve an appropriate balance between reconstruction precision and visualization aspects, the implemented library contains a set of structure-depended soft modeling rules instead of strictly defined geometric primitives. The proposed modeling algorithm starts with roof plane extraction performed by the segmentation of building point clouds, followed by topology identification and recognition of predefined structures. We evaluate the performance of the novel procedure by the analysis of the modeling accuracy and the degree of modeling detail. The assessment according to the validation methods standardized by the International Society for Photogrammetry and Remote Sensing shows that the completeness of the algorithm is above 80%, whereas the correctness exceeds 98%.

  9. An Automated Three-Dimensional Detection and Segmentation Method for Touching Cells by Integrating Concave Points Clustering and Random Walker Algorithm

    PubMed Central

    Gong, Hui; Chen, Shangbin; Zhang, Bin; Ding, Wenxiang; Luo, Qingming; Li, Anan

    2014-01-01

    Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single-cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: 1) concave points clustering to determine the seed points of touching cells; and 2) random walker segmentation to obtain cell contours. We have also evaluated the performance of our proposed method on several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system and include closely touching cells. Compared with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness. PMID:25111442
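
    The 2-D sketch below illustrates only the seeded-segmentation idea of step 2: it substitutes distance-transform maxima for the concave-point clustering of step 1 (an assumption made for brevity) and then applies scikit-image's random walker to separate two synthetic touching objects. It is not the authors' 3-D implementation.

      # Simplified 2-D stand-in: seeds from distance-transform maxima, then random walker.
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import peak_local_max
      from skimage.segmentation import random_walker

      # Two overlapping discs as a toy "touching cells" image
      yy, xx = np.mgrid[0:100, 0:100]
      mask = ((xx - 40) ** 2 + (yy - 50) ** 2 < 20 ** 2) | ((xx - 65) ** 2 + (yy - 50) ** 2 < 20 ** 2)

      distance = ndi.distance_transform_edt(mask)
      peaks = peak_local_max(distance, min_distance=10, labels=mask)   # seed candidates

      markers = np.zeros(mask.shape, dtype=int)
      markers[~mask] = 1                                   # background label
      for i, (r, c) in enumerate(peaks, start=2):
          markers[r, c] = i                                # one label per detected seed

      labels = random_walker(mask.astype(float), markers)  # separated touching objects
      print(np.unique(labels))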

  10. Hydrogen iodide decomposition

    DOEpatents

    O'Keefe, Dennis R.; Norman, John H.

    1983-01-01

    Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H2, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I2. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I2 phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I2 phase is finally distilled to separate the HI and I2. The HI is recycled to the reactor; the I2 is returned to a reactor operating in accordance with the Bunsen equation to create more HI.

  11. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    A general computer algorithm was developed for the construction of contour plots. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. It is based on an interpolation scheme in which the points in the plane are connected by straight-line segments to form a set of triangles. The program is written in FORTRAN IV.
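
    The triangulation-and-interpolation scheme described above is essentially what modern plotting libraries provide; a short Python sketch using matplotlib's triangulation contouring (illustrative sample data, not the original FORTRAN IV program):

      # Contouring irregularly distributed data via triangulation, in the spirit of
      # the algorithm described above (not the original code).
      import numpy as np
      import matplotlib.pyplot as plt
      import matplotlib.tri as tri

      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 1.0, 200)            # irregularly distributed sample points
      y = rng.uniform(0.0, 1.0, 200)
      z = np.sin(3.0 * x) * np.cos(4.0 * y)     # values measured at those points

      triangulation = tri.Triangulation(x, y)   # connect points into triangles
      fig, ax = plt.subplots()
      cs = ax.tricontour(triangulation, z, levels=10)   # linear interpolation per triangle
      ax.clabel(cs, inline=True, fontsize=8)
      plt.show()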

  12. Coxeter decompositions of hyperbolic simplexes

    SciTech Connect

    Felikson, A A

    2002-12-31

    A Coxeter decomposition of a polyhedron in a hyperbolic space H^n is a decomposition of it into finitely many Coxeter polyhedra such that any two tiles having a common facet are symmetric with respect to it. The classification of Coxeter decompositions is closely related to the problem of the classification of finite-index subgroups generated by reflections in discrete hyperbolic groups generated by reflections. All Coxeter decompositions of simplexes in the hyperbolic spaces H^n with n>3 are described in this paper.

  13. A stable elemental decomposition for dynamic process optimization

    NASA Astrophysics Data System (ADS)

    Cervantes, Arturo M.; Biegler, Lorenz T.

    2000-08-01

    In Cervantes and Biegler (A.I.Ch.E.J. 44 (1998) 1038), we presented a simultaneous nonlinear programming (NLP) formulation for the solution of DAE optimization problems. Here, by applying collocation on finite elements, the DAE system is transformed into a nonlinear system. The resulting optimization problem, in which the element placement is fixed, is solved using a reduced-space successive quadratic programming (rSQP) algorithm. The space is partitioned into range and null spaces. This partitioning is performed by choosing a pivot sequence for an LU factorization with partial pivoting, which allows us to detect unstable modes in the DAE system. The system is stabilized without imposing new boundary conditions. The decomposition of the range space can be performed in a single step by exploiting the overall sparsity of the collocation matrix but not its almost block diagonal structure. In order to solve larger problems, a new decomposition approach and a new method for constructing the quadratic programming (QP) subproblem are presented in this work. The decomposition of the collocation matrix is now performed element by element, thus reducing the storage requirements and the computational effort. Under this scheme, the unstable modes are considered in each element and a range-space move is constructed sequentially based on the decomposition in each element. This new decomposition improves the efficiency of our previous approach and at the same time preserves its stability. The performance of the algorithm is tested on several examples. Finally, some future directions for research are discussed.

  14. Overlapping Community Detection based on Network Decomposition

    NASA Astrophysics Data System (ADS)

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-04-01

    Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the more recently proposed link clustering methods have inherent drawbacks for discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and the elimination of noise links improves the quality of the obtained communities. In addition, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.

  15. Overlapping Community Detection based on Network Decomposition

    PubMed Central

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-01-01

    Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the more recently proposed link clustering methods have inherent drawbacks for discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and the elimination of noise links improves the quality of the obtained communities. In addition, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms. PMID:27066904

  16. Overlapping Community Detection based on Network Decomposition.

    PubMed

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-01-01

    Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the more recently proposed link clustering methods have inherent drawbacks for discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and the elimination of noise links improves the quality of the obtained communities. In addition, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms. PMID:27066904

  17. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2015-12-01

    The vibration signal contains a wealth of sensitive information which reflects the running status of the equipment. Decomposing the signal and properly extracting the effective information is one of the most important steps toward precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems of mode mixing, low decomposition accuracy, etc. To address these problems, the extreme average envelope decomposition (EAED) method is presented based on EMD. EAED has three advantages. First, it is carried out with a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Second, in order to reduce envelope errors during the signal decomposition, a strategy of replacing the two envelopes with a single envelope is presented. Third, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single-frequency components from a complex signal. EAED not only isolates three kinds of typical bearing fault characteristic frequency components from the vibration signal but also requires fewer decomposition layers. By replacing the two envelopes with a single envelope, EAED ensures that the fault characteristic frequencies can be isolated with fewer decomposition layers, and the precision of the signal decomposition is therefore improved.

  18. Algorithmic sensor failure detection on passive antenna arrays

    NASA Astrophysics Data System (ADS)

    Chun, Joohwan; Luk, Franklin T.

    1991-12-01

    We present an algorithm that can detect and isolate a single passive antenna failure under the assumption of slowly time varying signal sources. Our failure detection algorithm recursively computes an eigenvalue decomposition of the covariance of the "syndrome" vector. The sensor failure is detected using the largest eigenvalue, and the faulty sensor is located using the corresponding eigenvector. The algorithm can also be used in conjunction with existing singular value decomposition or orthogonal triangularization based recursive antenna array processing methods.
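
    The abstract does not detail how the syndrome vector is formed, so the sketch below (Python, illustrative) assumes syndrome snapshots are already available and shows only the detection/isolation step: flag a failure when the largest eigenvalue of the syndrome covariance exceeds a threshold and pick the faulty sensor from the dominant eigenvector.

      # Illustrative detection/isolation step; the construction of the "syndrome"
      # vector and the choice of threshold are assumptions, not from the paper.
      import numpy as np

      def detect_faulty_sensor(syndrome_samples, threshold):
          """syndrome_samples: (num_snapshots, num_sensors) array of syndrome vectors."""
          cov = np.cov(syndrome_samples, rowvar=False)      # sample covariance
          eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
          if eigvals[-1] < threshold:
              return None                                   # no failure detected
          dominant_vec = eigvecs[:, -1]
          return int(np.argmax(np.abs(dominant_vec)))       # index of suspected sensor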

  19. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    SciTech Connect

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.

  20. Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.

    2001-01-01

    An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low values of source frequencies, but at higher values of source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
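
    As a generic illustration of the assembly/solve split discussed above, the sketch below accumulates element-style contributions in triplet (COO) form, converts to CSR, and calls a sparse direct solver; the operator, sizes, and shift are illustrative assumptions unrelated to the aeroacoustic application.

      # Generic sparse assembly (COO triplets summed into CSR) followed by a direct solve.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000
      i = np.arange(n)
      rows = np.concatenate([i, i, i])
      cols = np.concatenate([i, (i + 1) % n, (i - 1) % n])
      vals = np.concatenate([2.0 * np.ones(n), -np.ones(n), -np.ones(n)])
      A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()   # duplicates are summed
      A = A + 1e-3 * sp.identity(n, format="csr")    # shift: the periodic operator alone is singular

      b = np.ones(n)
      x = spla.spsolve(A, b)
      print(np.linalg.norm(A @ x - b))               # residual of the sparse direct solve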

  1. Erbium hydride decomposition kinetics.

    SciTech Connect

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
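
    Redhead's peak-maximum analysis mentioned above relates the activation energy of first-order desorption to the peak temperature via E_A = R T_p [ln(ν T_p/β) − 3.64]. The short sketch below evaluates this formula with an assumed attempt frequency ν and example numbers; the report's actual parameter values are not reproduced here.

      # Redhead first-order approximation, valid roughly for nu/beta in 1e8..1e13 K^-1.
      # The pre-exponential factor and example inputs are illustrative assumptions.
      import math

      R = 1.987e-3          # gas constant in kcal/(mol K)

      def redhead_activation_energy(Tp, beta, nu=1.0e13):
          """Tp: peak temperature [K], beta: heating rate [K/s], nu: attempt frequency [1/s]."""
          return R * Tp * (math.log(nu * Tp / beta) - 3.64)

      # Example: a desorption peak at 950 K with a 1 K/s ramp
      print(redhead_activation_energy(Tp=950.0, beta=1.0))   # activation energy in kcal/mol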

  2. Art of spin decomposition

    SciTech Connect

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-04-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  3. Resolving the sign ambiguity in the singular value decomposition.

    SciTech Connect

    Bro, Rasmus; Acar, Evrim; Kolda, Tamara Gibson

    2007-10-01

    Many modern data analysis methods involve computing a matrix singular value decomposition (SVD) or eigenvalue decomposition (EVD). Principal components analysis is the time-honored example, but more recent applications include latent semantic indexing, hypertext induced topic selection (HITS), clustering, classification, etc. Though the SVD and EVD are well-established and can be computed via state-of-the-art algorithms, it is not commonly mentioned that there is an intrinsic sign indeterminacy that can significantly impact the conclusions and interpretations drawn from their results. Here we provide a solution to the sign ambiguity problem and show how it leads to more sensible solutions.
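
    A simplified numerical illustration of the sign issue follows: flipping u_k and v_k together leaves U S V^T unchanged, so an extra convention is needed to fix the signs. The orientation rule sketched here follows the spirit of the method above (point each singular vector toward the bulk of the data), but it is a simplification, not the authors' exact procedure.

      # Sign-corrected SVD: choose each singular-vector sign from a data-orientation score.
      import numpy as np

      def sign_corrected_svd(X):
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          for k in range(len(s)):
              proj = U[:, k] @ X                              # projections of data columns on u_k
              score = np.sum(np.sign(proj) * proj ** 2)       # which sign agrees with the data
              if score < 0:
                  U[:, k] *= -1.0
                  Vt[k, :] *= -1.0                            # flip the pair jointly
          return U, s, Vt

      rng = np.random.default_rng(1)
      X = rng.standard_normal((6, 4))
      U, s, Vt = sign_corrected_svd(X)
      assert np.allclose(U @ np.diag(s) @ Vt, X)   # reconstruction is unaffected by the flips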

  4. Direct Sum Decomposition of Groups

    ERIC Educational Resources Information Center

    Thaheem, A. B.

    2005-01-01

    Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…
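
    A concrete worked instance of the projection-based decomposition mentioned above, using R^2 (an Abelian group under addition) and the projection onto the x-axis; this illustrates the general fact G = ker(P) ⊕ im(P) and is not material from the note itself.

      # Every element g splits uniquely as g = (g - P g) + P g, with the first term in
      # ker(P) and the second in the image of P.
      import numpy as np

      P = np.array([[1.0, 0.0],
                    [0.0, 0.0]])                 # projection onto the x-axis (P @ P == P)
      g = np.array([3.0, 5.0])

      image_part = P @ g                          # (3, 0), lies in range(P)
      kernel_part = g - image_part                # (0, 5), lies in ker(P)

      assert np.allclose(P @ P, P)
      assert np.allclose(P @ kernel_part, 0.0)
      assert np.allclose(image_part + kernel_part, g)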

  5. Robust Face Clustering Via Tensor Decomposition.

    PubMed

    Cao, Xiaochun; Wei, Xingxing; Han, Yahong; Lin, Dongdai

    2015-11-01

    Face clustering is a key component either in image managements or video analysis. Wild human faces vary with the poses, expressions, and illumination changes. All kinds of noises, like block occlusions, random pixel corruptions, and various disguises may also destroy the consistency of faces referring to the same person. This motivates us to develop a robust face clustering algorithm that is less sensitive to these noises. To retain the underlying structured information within facial images, we use tensors to represent faces, and then accomplish the clustering task based on the tensor data. The proposed algorithm is called robust tensor clustering (RTC), which firstly finds a lower-rank approximation of the original tensor data using a L1 norm optimization function. Because L1 norm does not exaggerate the effect of noises compared with L2 norm, the minimization of the L1 norm approximation function makes RTC robust. Then, we compute high-order singular value decomposition of this approximate tensor to obtain the final clustering results. Different from traditional algorithms solving the approximation function with a greedy strategy, we utilize a nongreedy strategy to obtain a better solution. Experiments conducted on the benchmark facial datasets and gait sequences demonstrate that RTC has better performance than the state-of-the-art clustering algorithms and is more robust to noises. PMID:25546869

  6. MAMAP - a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: retrieval algorithm and first inversions for point source emission rates

    NASA Astrophysics Data System (ADS)

    Krings, T.; Gerilowski, K.; Buchwitz, M.; Reuter, M.; Tretner, A.; Erzinger, J.; Heinze, D.; Burrows, J. P.; Bovensmann, H.

    2011-04-01

    MAMAP is an airborne passive remote sensing instrument designed for measuring columns of methane (CH4) and carbon dioxide (CO2). The MAMAP instrument consists of two optical grating spectrometers: One in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO2 and CH4 absorptions and another one in the near infrared (NIR) at 757-768 nm to measure O2 absorptions for reference purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an airplane MAMAP can effectively survey areas on regional to local scales with a ground pixel resolution of about 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h^-1. The retrieval precision of the measured column relative to background is typically ≲ 1% (1σ). MAMAP can be used to close the gap between satellite data exhibiting global coverage but with a rather coarse resolution on the one hand and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007 test flights were performed over two coal-fired powerplants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO2 yr^-1) and Schwarze Pumpe (11.9 Mt CO2 yr^-1), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions as stated by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO2 and XCH4) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of delivering reliable estimates for strong point source emission rates, given appropriate flight patterns and detailed knowledge of wind conditions.
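
    As an illustration of the integral-style inversion mentioned above, the sketch below computes an emission rate from a cross-plume integral of the column enhancement multiplied by the wind speed; the synthetic plume, units, and conversion factor are illustrative assumptions, not MAMAP calibration values or the authors' exact formulation.

      # Mass-balance style estimate: wind speed times the cross-plume integral of the
      # column enhancement.  All numbers below are synthetic.
      import numpy as np
      from scipy.integrate import trapezoid

      def emission_rate(cross_track_m, column_enhancement_kg_m2, wind_speed_m_s):
          """Integrate enhancement (kg/m^2) across the plume (m), scale by wind (m/s) -> kg/s."""
          return wind_speed_m_s * trapezoid(column_enhancement_kg_m2, cross_track_m)

      y = np.linspace(-2000.0, 2000.0, 401)                   # cross-plume coordinate [m]
      enhancement = 5e-4 * np.exp(-0.5 * (y / 400.0) ** 2)    # synthetic Gaussian plume [kg/m^2]
      rate_kg_s = emission_rate(y, enhancement, wind_speed_m_s=4.0)
      print(rate_kg_s * 3.15576e7 / 1e9)                      # rough conversion to Mt per year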

  7. Modified K-factor image decomposition for three-dimensional super resolution microscopy

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Weiss, Aryeh; Meiri, Amihai; Ebeling, Carl G.; Amiel, Aliza; Katz, Hila; Mannasse-Green, Batya; Zalevsky, Zeev

    2016-03-01

    The ability to track single fluorescent particles within a three-dimensional (3D) cellular environment can provide valuable insights into cellular processes. In this paper, we present a modified nonlinear image decomposition technique called K-factor that reshapes the 3D point spread function (PSF) of an XYZ image stack into a narrow Gaussian profile. The method increases localization accuracy by ~60% compared with regular Gaussian fitting, and improves the minimal resolvable distance between overlapping PSFs by ~50%. The algorithm was tested both on simulated data and experimentally. The experimentally obtained PSF of Z-stack raw data acquired by a widefield microscope has a more elaborate shape, given by the Gibson and Lanni model. This shape increases the computational complexity associated with the localization routine when used in localization microscopy techniques. Furthermore, due to its nature, this PSF spreads over a larger volume, making the problem of detecting overlapping emitters more pronounced. The ability to use Gaussian fitting with high accuracy on 3D data reduces the computational complexity, and hence the processing time, required for the generation of the 3D super-resolved image. In addition, it allows the detection of overlapping PSFs and reduces the penetration of out-of-focus PSFs into in-focus PSFs, thereby enabling an increase in the activated fluorophore density by ~50%. Tested both on simulated data and experimentally, the algorithm yielded an increase in localization accuracy of ~60% compared with regular Gaussian fitting and improved the minimal resolvable distance between overlapping PSFs by ~50%, making it extremely applicable to the field of 3D biomedical

  8. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits the intrinsically rich structures in real-world multiarray or tensor datasets. In dealing with those datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization. PMID:27046492

  9. Decomposition in northern Minnesota peatlands

    SciTech Connect

    Farrish, K.W.

    1985-01-01

    Decomposition in peatlands was investigated in northern Minnesota. Four sites, an ombrotrophic raised bog, an ombrotrophic perched bog and two groundwater minerotrophic fens, were studied. Decomposition rates of peat and paper were estimated using mass-loss techniques. Environmental and substrate factors that were most likely to be responsible for limiting decomposition were monitored. Laboratory incubation experiments complemented the field work. Mass-loss over one year in one of the bogs ranged from 11 percent in the upper 10 cm of hummocks to 1 percent at 60 to 100 cm depth in hollows. Regression analysis of the data for that bog predicted no mass-loss below 87 cm. Decomposition estimates on an area basis were 2720 and 6460 kg/ha yr for the two bogs, and 17,000 and 5900 kg/ha yr for the two fens. Environmental factors found to limit decomposition in these peatlands were reducing/anaerobic conditions below the water table and cool peat temperatures. Substrate factors found to limit decomposition were low pH, high content of resistant organics such as lignin, and shortages of available N and K. Greater groundwater influence was found to favor decomposition by raising the pH and perhaps by introducing limited amounts of dissolved oxygen.

  10. COMPUTER SIMULATIONS WITH EXPLICIT SOLVENT: Recent Progress in the Thermodynamic Decomposition of Free Energies and in Modeling Electrostatic Effects

    NASA Astrophysics Data System (ADS)

    Levy, Ronald M.; Gallicchio, Emilio

    1998-10-01

    This review focuses on recent progress in two areas in which computer simulations with explicit solvent are being applied: the thermodynamic decomposition of free energies, and modeling electrostatic effects. The computationally intensive nature of these simulations has been an obstacle to the systematic study of many problems in solvation thermodynamics, such as the decomposition of solvation and ligand binding free energies into component enthalpies and entropies. With the revolution in computer power continuing, these problems are ripe for study but require the judicious choice of algorithms and approximations. We provide a critical evaluation of several numerical approaches to the thermodynamic decomposition of free energies and summarize applications in the current literature. Progress in computer simulations with explicit solvent of charge perturbations in biomolecules was slow in the early 1990s because of the widespread use of truncated Coulomb potentials in these simulations, among other factors. Development of the sophisticated technology described in this review to handle the long-range electrostatic interactions has increased the predictive power of these simulations to the point where comparisons between explicit and continuum solvent models can reveal differences that have their true physical origin in the inherent molecularity of the surrounding medium.

  11. Terrestrial laser scanning and a degenerated cylinder model to determine gross morphological change of cadavers under conditions of natural decomposition.

    PubMed

    Zhang, Xiao; Glennie, Craig L; Bucheli, Sibyl R; Lindgren, Natalie K; Lynne, Aaron M

    2014-08-01

    Decomposition can be a highly variable process with stages that are difficult to quantify. Using high-accuracy terrestrial laser scanning, repeated three-dimensional (3D) documentation of the volumetric changes of a human body during early decomposition was recorded. To determine temporal volumetric variations as well as the 3D distribution of the changed locations in the body over time, this paper introduces the use of multiple degenerated cylinder models to provide a reasonable approximation of body parts against which 3D change can be measured and visualized. An iterative closest point algorithm is used for 3D registration, and a method for determining volumetric change is presented. Comparison of the laser scanning estimates of volumetric change shows good agreement with repeated in-situ measurements of abdomen and limb circumference that were taken diurnally. The 3D visualizations of volumetric changes demonstrate that bloat is a process with a beginning, middle, and end rather than a state of presence or absence. Additionally, the 3D visualizations show conclusively that cadaver bloat is not isolated to the abdominal cavity, but also occurs in the limbs. Detailed quantification of the bloat stage of decay has the potential to alter how the beginning and end of bloat are determined by researchers and can provide further insight into the effects of the ecosystem on decomposition. PMID:24866865

  12. Soil Moisture Estimation under Vegetation Applying Polarimetric Decomposition Techniques

    NASA Astrophysics Data System (ADS)

    Jagdhuber, T.; Schön, H.; Hajnsek, I.; Papathanassiou, K. P.

    2009-04-01

    Polarimetric decomposition techniques and inversion algorithms are developed and applied on the OPAQUE data set acquired in spring 2007 to investigate their potential and limitations for soil moisture estimation. A three component model-based decomposition is used together with an eigenvalue decomposition in a combined approach to invert for soil moisture over bare and vegetated soils at L-band. The applied approach indicates a feasible capability to invert soil moisture after decomposing volume and ground scattering components over agricultural land surfaces. But there are still deficiencies in modeling the volume disturbance. The results show a root mean square error below 8.5 vol.-% for the winter crop fields (winter wheat, winter triticale and winter barley) and below 11.5 vol.-% for the summer crop field (summer barley), whereas all fields have a distinct volume layer of 55-85 cm height.

  13. Monte Carlo Simulations for Spinodal Decomposition

    NASA Astrophysics Data System (ADS)

    Sander, Evelyn; Wanner, Thomas

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, we are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, we numerically compare these two mathematical approaches. In fact, we are able to synthesize the understanding we gain from our numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, we can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. Our approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. We give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. We observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. We explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. We also describe the dynamics of these exceptional solutions.

  14. Monte Carlo simulations for spinodal decomposition

    SciTech Connect

    Sander, E.; Wanner, T.

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.

  15. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test, and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

  16. A structure-preserving method for the quaternion LU decomposition in quaternionic quantum theory

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; Ma, Wenhao

    2013-09-01

    In this paper, for the first time, the structure-preserving Gauss transformation is defined. Then, by means of its real representation matrix, we present a novel structure-preserving algorithm for the LU decomposition of a quaternion matrix. Numerical experiments show that the structure-preserving algorithm performs better than the corresponding routine in the newest Quaternion Toolbox for MATLAB (QTFM).
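
    For context, the sketch below builds one common convention of the real representation matrix of a quaternion matrix A = A0 + A1 i + A2 j + A3 k and checks its multiplicativity; the sign convention and the scalar check are illustrative and may differ in detail from the representation used in the paper.

      # One common 4n x 4n real representation of an n x n quaternion matrix, plus a
      # scalar (1x1) multiplicativity check.  Illustrative convention only.
      import numpy as np

      def real_representation(A0, A1, A2, A3):
          return np.block([
              [A0, -A1, -A2, -A3],
              [A1,  A0, -A3,  A2],
              [A2,  A3,  A0, -A1],
              [A3, -A2,  A1,  A0],
          ])

      def quat_mul(p, q):
          a, b, c, d = p
          w, x, y, z = q
          return (a*w - b*x - c*y - d*z, a*x + b*w + c*z - d*y,
                  a*y - b*z + c*w + d*x, a*z + b*y - c*x + d*w)

      p, q = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 2.0, 0.25)
      Rp = real_representation(*[np.array([[v]]) for v in p])
      Rq = real_representation(*[np.array([[v]]) for v in q])
      Rpq = real_representation(*[np.array([[v]]) for v in quat_mul(p, q)])
      assert np.allclose(Rp @ Rq, Rpq)   # representation of a product = product of representations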

  17. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
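
    The closest-orthogonal-matrix fit described above can be sketched compactly: form the weighted attitude profile matrix B = Σ_i w_i b_i r_i^T and project it onto the rotation group. The SVD route below is one standard way to compute that projection (the paper compares it with polar-decomposition-based algorithms); the example data are synthetic.

      # Fit a proper rotation to weighted unit-vector pairs (Wahba-style least squares).
      import numpy as np

      def closest_rotation(body_vecs, ref_vecs, weights):
          """body_vecs, ref_vecs: (N, 3) unit vectors; weights: (N,) positive weights."""
          B = (weights[:, None] * body_vecs).T @ ref_vecs        # 3x3 attitude profile matrix
          U, _, Vt = np.linalg.svd(B)
          d = np.sign(np.linalg.det(U @ Vt))                     # enforce det = +1 (proper rotation)
          return U @ np.diag([1.0, 1.0, d]) @ Vt

      # Example: rotate reference vectors by a known rotation and recover it
      rng = np.random.default_rng(6)
      ref = rng.standard_normal((10, 3))
      ref /= np.linalg.norm(ref, axis=1, keepdims=True)
      angle = 0.3
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0,            0.0,           1.0]])
      body = ref @ R_true.T
      print(np.allclose(closest_rotation(body, ref, np.ones(10)), R_true))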

  18. Signed Decomposition Method for Scalar Multiplication in Elliptic Curve Cryptography

    NASA Astrophysics Data System (ADS)

    Said, M. R. M.; Mohamed, M. A.; Atan, K. A. Mohd; Zulkarnain, Z. Ahmad

    2010-11-01

    An addition chain is a solution to the computability constraints of arithmetic with problematically large numbers. In elliptic curve cryptography, point arithmetic on an elliptic curve can be reduced to repeated addition and doubling operations. Based on this idea, various methods have been proposed; recently, a decomposition method based on prime decomposition was put forward. This method uses a pre-generated set of rules to calculate an addition chain for n. Though the method shows advantages over others in some cases, some improvements are still available. We develop an enhanced version called the signed decomposition method, which takes a rule from the decomposition method as its input. We also generalize the idea of a prime rule to an integer rule. An improvement is made to the original add rule of the decomposition method by allowing subtraction of terms; in so doing, we optimize the original form of the add rule. The result shows not only an improvement over the decomposition method but also superiority over all preceding methods. Furthermore, keeping the secret key in the form of a rule adds extra security to the message under communication.
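
    The rule set of the signed decomposition method is not spelled out in the abstract, so the sketch below instead shows the closely related classical signed-digit recoding (non-adjacent form, NAF), which illustrates why allowing subtraction of a point shortens the add/double chain; it is not the authors' algorithm.

      # Standard NAF recoding plus double-and-add/subtract scalar multiplication.
      # Shown as a related classical technique, not the paper's signed decomposition method.
      def naf(n):
          """Return the signed digits of n (least significant first), each in {-1, 0, 1}."""
          digits = []
          while n > 0:
              if n % 2 == 1:
                  d = 2 - (n % 4)        # +1 or -1, chosen so the next bit becomes 0
                  n -= d
              else:
                  d = 0
              digits.append(d)
              n //= 2
          return digits

      def scalar_multiply(k, point, add, neg, identity):
          """Left-to-right double-and-add/subtract using the NAF digits of k."""
          result = identity
          for d in reversed(naf(k)):
              result = add(result, result)              # doubling
              if d == 1:
                  result = add(result, point)
              elif d == -1:
                  result = add(result, neg(point))      # subtraction via the negated point
          return result

      # Toy check in the group (Z, +): 1234 * 7 computed via NAF digits
      print(scalar_multiply(1234, 7, add=lambda a, b: a + b, neg=lambda a: -a, identity=0))  # 8638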

  19. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.

  20. Lignocellulose decomposition by microbial secretions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Carbon storage in terrestrial ecosystems is contingent upon the natural resistance of plant cell wall polymers to rapid biological degradation. Nevertheless, certain microorganisms have evolved remarkable means to overcome this natural resistance. Lignocellulose decomposition by microorganisms com...

  1. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on Tucker decomposition which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e. multilinear rank) in an automatic way. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.

  2. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.

  3. Decomposition and coordination of large-scale operations optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Ruoyu

    Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to taking special structure. Some decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of structural complexity analysis for investigating scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods, as well as the development of a decentralized optimization framework which embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines to practical applications of the decomposition and coordination methods. In this thesis, several case studies imply the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.

  4. Nontraditional tensor decompositions and applications.

    SciTech Connect

    Bader, Brett William

    2010-07-01

    This presentation will discuss two tensor decompositions that are not as well known as PARAFAC (parallel factors) and Tucker, but have proven useful in informatics applications. Three-way DEDICOM (decomposition into directional components) is an algebraic model for the analysis of 3-way arrays with nonsymmetric slices. PARAFAC2 is a related model that is less constrained than PARAFAC and allows for different objects in one mode. Applications of both models to informatics problems will be shown.

  5. Decomposition of indwelling EMG signals

    PubMed Central

    Nawab, S. Hamid; Wotiz, Robert P.; De Luca, Carlo J.

    2008-01-01

    Decomposition of indwelling electromyographic (EMG) signals is challenging in view of the complex and often unpredictable behaviors and interactions of the action potential trains of different motor units that constitute the indwelling EMG signal. These phenomena create a myriad of problem situations that a decomposition technique needs to address to attain completeness and accuracy levels required for various scientific and clinical applications. Starting with the maximum a posteriori probability classifier adapted from the original precision decomposition system (PD I) of LeFever and De Luca (25, 26), an artificial intelligence approach has been used to develop a multiclassifier system (PD II) for addressing some of the experimentally identified problem situations. On a database of indwelling EMG signals reflecting such conditions, the fully automatic PD II system is found to achieve a decomposition accuracy of 86.0% despite the fact that its results include low-amplitude action potential trains that are not decomposable at all via systems such as PD I. Accuracy was established by comparing the decompositions of indwelling EMG signals obtained from two sensors. At the end of the automatic PD II decomposition procedure, the accuracy may be enhanced to nearly 100% via an interactive editor, a particularly significant fact for the previously indecomposable trains. PMID:18483170

  6. ID Image Characterization by Entropic Biometric Decomposition

    NASA Astrophysics Data System (ADS)

    Smoaca, Andreea; Coltuc, Daniela; Fournel, Thierry

    2011-03-01

    The paper proposes a statistics-based biometric decomposition for ID image recognition that is robust to a series of non-malicious attacks generated by print/scan operations. Our goal is to label the single face expression with a signature which is almost invariant to low-pass filtering, noise addition and geometric attacks. The method is based on Independent Component Analysis (ICA) in a configuration which allows a decomposition into face characteristics. In this configuration, known in the literature as Architecture I, the most important coefficients issued from ICA are selected by looking for the independent components with maximum local entropy. A biometric label of fixed length is associated with any ID image to be enrolled, after projection on the learned basis, uniform quantization of the obtained coefficients and binary encoding. Two parameters were tuned: the number of quantization levels and the number of face characteristics. The latter was modified either by discarding coefficients after Principal Component Analysis at the beginning of the FastICA algorithm, or by selecting the most prominent biometric features by applying an entropic criterion. The suggested method inherits the robustness of a global approach.

  7. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The Hybrid vector/SIMD computational model of FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also initiated designing a software library for various computational kernels, each of which adapts FPVC's configuration and provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  8. A New Method for Spectral Decomposition Using a Bilinear Bayesian Approach

    NASA Astrophysics Data System (ADS)

    Ochs, M. F.; Stoyanova, R. S.; Arias-Mendoza, F.; Brown, T. R.

    1999-03-01

    A frequent problem in analysis is the need to find two matrices, closely related to the underlying measurement process, which when multiplied together reproduce the matrix of data points. Such problems arise throughout science, for example, in imaging, where both the calibration of the sensor and the true scene may be unknown, and in localized spectroscopy, where multiple components may be present in varying amounts in any spectrum. Since both matrices are unknown, such a decomposition is a bilinear problem. We report here a solution to this problem for the case in which the decomposition results in matrices with elements drawn from positive additive distributions. We demonstrate the power of the methodology on chemical shift images (CSI). The new method, Bayesian spectral decomposition (BSD), reduces the CSI data to a small number of basis spectra together with their localized amplitudes. We apply this new algorithm to a 19F nonlocalized study of the catabolism of 5-fluorouracil in human liver, 31P CSI studies of a human head and calf muscle, and simulations which show its strengths and limitations. In all cases, the dataset, viewed as a matrix with rows containing the individual NMR spectra, results from the multiplication of a matrix of generally nonorthogonal basis spectra (the spectral matrix) by a matrix of the amplitudes of each basis spectrum in the individual voxels (the amplitude matrix). The results show that BSD can simultaneously determine both the basis spectra and their distribution. In principle, BSD should solve this bilinear problem for any dataset which results from multiplication of matrices representing positive additive distributions if the data overdetermine the solutions.
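
    The data model above is bilinear: the data matrix (one spectrum per voxel) is approximated by an amplitude matrix times a matrix of basis spectra, with both factors positive. The sketch below uses non-negative matrix factorization as a simpler, non-Bayesian stand-in that illustrates the same positivity-constrained bilinear structure; it is not the BSD algorithm itself, and the data are synthetic.

      # Positivity-constrained bilinear factorization D ~ A @ S via NMF (stand-in for BSD).
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(2)
      true_S = np.abs(rng.standard_normal((3, 120)))          # 3 basis "spectra", 120 channels
      true_A = np.abs(rng.standard_normal((50, 3)))           # amplitudes in 50 voxels
      D = true_A @ true_S + 0.01 * np.abs(rng.standard_normal((50, 120)))

      model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
      A = model.fit_transform(D)      # estimated amplitude matrix (50 x 3)
      S = model.components_           # estimated basis spectra   (3 x 120)
      print(np.linalg.norm(D - A @ S) / np.linalg.norm(D))    # relative reconstruction error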

  9. Adaptive domain decomposition methods for advection-diffusion problems

    SciTech Connect

    Carlenzoli, C.; Quarteroni, A.

    1995-12-31

    Domain decomposition methods can perform poorly on advection-diffusion equations if diffusion is dominated by advection. Indeed, the hyperbolic part of the equations can affect the behavior of iterative schemes among subdomains, dramatically slowing down their rate of convergence. Taking into account the direction of the characteristic lines, we introduce suitable adaptive algorithms which are stable with respect to the magnitude of the convective field in the equations and very effective on boundary value problems.

  10. A domain decomposition scheme for Eulerian shock physics codes

    SciTech Connect

    Bell, R.L.; Hertel, E.S. Jr.

    1994-08-01

    A new algorithm which allows for complex domain decomposition in Eulerian codes was developed at Sandia National Laboratories. This new feature allows a user to customize the zoning for each portion of a calculation and to refine volumes of the computational space of particular interest. This option is available in one, two, and three dimensions. The new technique will be described in detail, and several examples of the effectiveness of this technique will also be discussed.

  11. Refining signal decomposition for GRETINA detectors

    NASA Astrophysics Data System (ADS)

    Prasher, V. S.; Campbell, C. M.; Cromaz, M.; Crawford, H. L.; Wiens, A.; Lee, I. Y.; Macchiavelli, A. O.; Lister; Merchan, E.; Chowdhury, P.; Radford, D. C.

    2013-04-01

    The reconstruction of the original direction and energy of gamma rays through locating their interaction points in solid state detectors is a crucial evolving technology for nuclear physics, space science and homeland security. New arrays AGATA and GRETINA have been built for nuclear science based on highly segmented germanium crystals. The signal decomposition process fits the observed waveform from each crystal segment with a linear combination of pre-calculated basis signals. This process occurs on an event-by-event basis in real time to extract the position and energy of γ-ray interactions. The methodology for generating a basis of pulse shapes, varying according to the position of the charge generating interactions, is in place. Improvements in signal decomposition can be realized by better modeling the crystals. Specifically, a better understanding of the true impurity distributions, internal electric fields, and charge mobilities will lead to more reliable bases, more precise definition of the interaction points, and hence more reliable tracking. In this presentation we will cover the current state-of-the-art for basis generation and then discuss the sensitivity of the predicted pulse shapes when varying some key parameters.
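
    The core fitting step described above can be sketched as a non-negative least-squares fit of the observed waveform to a linear combination of pre-calculated basis signals, one per candidate interaction position; the random placeholder basis below stands in for the simulated GRETINA pulse-shape basis, and the adaptive search of the real decomposition is omitted.

      # Fit an observed segment waveform as a non-negative combination of basis signals.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(3)
      num_samples, num_positions = 200, 40
      basis = rng.standard_normal((num_samples, num_positions))    # placeholder basis pulse shapes
      true_weights = np.zeros(num_positions)
      true_weights[[5, 17]] = [0.7, 0.3]                           # two interaction points
      observed = basis @ true_weights + 0.01 * rng.standard_normal(num_samples)

      weights, residual_norm = nnls(basis, observed)               # non-negative least squares
      print(np.flatnonzero(weights > 0.05))                        # recovered interaction indices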

  12. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
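
    A compressed sketch of the same idea is shown below: seed Gaussian centers and widths from the inflection points of a smoothed copy of the waveform, then refine with a Levenberg-Marquardt least-squares fit. The ranking step, noise handling, and all numerical choices of the actual LVIS processing are omitted, and every parameter value here is an assumption.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d
      from scipy.optimize import curve_fit

      def gaussian_sum(x, *p):
          """Sum of Gaussians; p is a flat sequence of (amplitude, center, sigma) triples."""
          y = np.zeros_like(x)
          for a, c, s in zip(p[0::3], p[1::3], p[2::3]):
              y += a * np.exp(-0.5 * ((x - c) / s) ** 2)
          return y

      def decompose_waveform(x, w, smooth_sigma=5.0):
          ws = gaussian_filter1d(w, smooth_sigma)
          # Candidate inflection points: sign changes of the second difference,
          # kept only where the smoothed waveform sits clearly above the noise floor.
          infl = np.where(np.diff(np.sign(np.diff(ws, 2))) != 0)[0] + 1
          infl = infl[ws[infl] > 0.05 * ws.max()]
          p0 = []
          # Pair consecutive inflection points to seed (amplitude, center, width).
          for i0, i1 in zip(infl[0::2], infl[1::2]):
              p0 += [w[(i0 + i1) // 2], 0.5 * (x[i0] + x[i1]),
                     max(0.5 * (x[i1] - x[i0]), 1e-3)]
          popt, _ = curve_fit(gaussian_sum, x, w, p0=p0, method="lm", maxfev=20000)
          return popt.reshape(-1, 3)

      x = np.linspace(0.0, 100.0, 512)
      w = gaussian_sum(x, 1.0, 30.0, 4.0, 0.6, 60.0, 6.0)      # two reflecting surfaces
      w += 0.002 * np.random.default_rng(0).normal(size=x.size)
      print(np.round(decompose_waveform(x, w), 2))             # rows: amplitude, center, sigma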

  13. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  14. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…

  15. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
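
    For reference, the half-interval (bisection) search described above fits in a few lines; the tolerance and the example polynomial below are arbitrary choices, not taken from the cited program listing.

      def half_interval_root(f, lo, hi, tol=1e-10):
          """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
          if f(lo) * f(hi) > 0:
              raise ValueError("f(lo) and f(hi) must bracket a root")
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if f(lo) * f(mid) <= 0:
                  hi = mid          # the root lies in the left half-interval
              else:
                  lo = mid          # the root lies in the right half-interval
          return 0.5 * (lo + hi)

      # Example: the real root of x^3 - 2x - 5 lies between 2 and 3.
      print(half_interval_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))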

  16. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In a computation of collaborative filtering, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next phase of computation is decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller sized matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute the SVD toward a solution of an open competition called the "Netflix Prize". The algorithm utilizes an iterative method so that the error of approximation improves in each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient in experiments.
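
    Webb's (Simon Funk's) approach is essentially stochastic gradient descent on a low-rank factorization of the rating matrix. A minimal CPU sketch of that idea is given below; the learning rate, regularization, rank, and toy ratings are assumptions, and none of the CUDA-specific structure of the paper is reproduced.

      import numpy as np

      def funk_svd(ratings, n_users, n_items, rank=8, lr=0.02, reg=0.02, epochs=200):
          """ratings: list of (user, item, value). Returns factors U (users x rank), V (items x rank)."""
          rng = np.random.default_rng(0)
          U = 0.1 * rng.normal(size=(n_users, rank))
          V = 0.1 * rng.normal(size=(n_items, rank))
          for _ in range(epochs):
              for u, i, r in ratings:
                  err = r - U[u] @ V[i]
                  # Gradient step on the squared error with L2 regularization.
                  U[u], V[i] = (U[u] + lr * (err * V[i] - reg * U[u]),
                                V[i] + lr * (err * U[u] - reg * V[i]))
          return U, V

      ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 5.0)]
      U, V = funk_svd(ratings, n_users=3, n_items=3)
      print("fit of known rating (0, 0):", round(U[0] @ V[0], 2))
      print("prediction for unseen (1, 2):", round(U[1] @ V[2], 2))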

  17. On the sequences ri, si, ti ∈ ℤ related to extended Euclidean algorithm and continued fractions

    NASA Astrophysics Data System (ADS)

    Muhammad, Khairun Nisak; Kamarulhaili, Hailiza

    2016-06-01

    The extended Euclidean Algorithm is a practical technique used in many cryptographic applications, where it computes the sequences ri, si, ti ∈ ℤ that always satisfy ri = si a + ti b. The integer ri is the remainder at the ith step. The sequences si and ti arising from the extended Euclidean algorithm are equal, up to sign, to the convergents of the continued fraction expansion of a/b. The values of (ri, si, ti) satisfy various properties which are used to solve the shortest vector problem in representing point multiplications in elliptic curve cryptography, namely the GLV (Gallant, Lambert & Vanstone) integer decomposition method and the ISD (integer sub decomposition) method. This paper extends the proofs of the existing properties of (ri, si, ti). We also generate new properties which are relevant to the sequences ri, si, ti ∈ ℤ. The concepts of the Euclidean algorithm, the extended Euclidean algorithm and continued fractions are intertwined, and the properties related to these concepts are proved. These properties, together with the existing properties of the sequence (ri, si, ti), are regarded as part and parcel of the building blocks of a new generation of an efficient cryptographic protocol.
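
    The invariant ri = si·a + ti·b is easy to see by generating the three sequences explicitly. The textbook sketch below is generic and not specific to the GLV or ISD constructions.

      def extended_euclid_sequences(a, b):
          """Return the lists (r, s, t) with r[i] == s[i]*a + t[i]*b at every step."""
          r, s, t = [a, b], [1, 0], [0, 1]
          while r[-1] != 0:
              q = r[-2] // r[-1]
              r.append(r[-2] - q * r[-1])
              s.append(s[-2] - q * s[-1])
              t.append(t[-2] - q * t[-1])
          return r, s, t

      r, s, t = extended_euclid_sequences(240, 46)
      assert all(ri == si * 240 + ti * 46 for ri, si, ti in zip(r, s, t))
      print("gcd =", r[-2], " Bezout coefficients:", s[-2], t[-2])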

  18. TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT

    SciTech Connect

    Niu, T; Dong, X; Petrongolo, M; Zhu, L

    2014-06-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative

  19. Ocean Models and Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Salas-de-Leon, D. A.

    2007-05-01

    The increasing computational developments and the better understanding of mathematical and physical systems have resulted in an increasing number of ocean models. Long ago, modelers were like a secret organization who recognized each other by using secret codes and languages that only a select group of people was able to understand. Access to computational systems was limited: on the one hand, equipment and computer time were expensive and restricted, and on the other hand, they required advanced computational languages that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 80s. This resource availability has resulted in much greater access to all kinds of models. Today computer speed and time and the algorithms do not seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, within the same institution there are different models for the same phenomena from one office to the next, developed by different research members. The results do not differ substantially, since the equations are the same and the solving algorithms are similar. The algorithms, and the grids constructed with them, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. Proper Orthogonal Decomposition is a technique that allows the number of variables to be reduced while keeping the model properties, which makes it a very useful tool for diminishing the processes that have to be solved using "small" computational systems and for making sophisticated models available to a greater community.
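
    In practice, Proper Orthogonal Decomposition reduces to a singular value decomposition of a snapshot matrix, keeping only the leading modes. The sketch below applies this to a synthetic one-dimensional field and is meant only to illustrate the variable-reduction idea; the field, sizes, and mode count are arbitrary assumptions.

      import numpy as np

      def pod_modes(snapshots, n_modes):
          """snapshots: (n_points, n_times) matrix of mean-removed field snapshots.
          Returns spatial modes, temporal coefficients, and all singular values."""
          U, sigma, Vt = np.linalg.svd(snapshots, full_matrices=False)
          modes = U[:, :n_modes]
          coeffs = np.diag(sigma[:n_modes]) @ Vt[:n_modes]
          return modes, coeffs, sigma

      # Synthetic data: two oscillating spatial patterns plus noise.
      x = np.linspace(0.0, 2 * np.pi, 200)[:, None]
      t = np.linspace(0.0, 10.0, 80)[None, :]
      field = np.sin(x) * np.cos(2 * t) + 0.3 * np.sin(3 * x) * np.sin(5 * t)
      field += 0.01 * np.random.default_rng(0).normal(size=field.shape)
      field -= field.mean(axis=1, keepdims=True)

      modes, coeffs, sigma = pod_modes(field, n_modes=4)
      energy = np.cumsum(sigma**2) / np.sum(sigma**2)
      print("energy captured by 4 modes: %.4f" % energy[3])
      print("reconstruction error:", np.linalg.norm(field - modes @ coeffs))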

  20. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  1. Reprocessed polylactide: studies of thermo-oxidative decomposition.

    PubMed

    Badia, J D; Santonja-Blasco, L; Martínez-Felipe, A; Ribes-Greus, A

    2012-06-01

    The combustion process of virgin and reprocessed polylactide (PLA) was simulated by multi-rate linear non-isothermal thermogravimetric experiments under O2. A complete methodology that accounted for the thermal stability and emission of gases was thoroughly developed. A new model, Thermal Decomposition Behavior, and novel parameters, the Zero-Decomposition Temperatures, were used to test the thermal stability of the materials under any linear heating rate. The release of gases was monitored by Evolved Gas Analysis with in-line FT-IR analysis. In addition, a kinetic analysis methodology that accounted for variable activation parameters showed that the decomposition process could be driven by the formation of bubbles in the melt. It was found that the combustion technologies for virgin PLA could be transferred for the energetic valorization of its recyclates. Combustion was pointed out as appropriate for the energetic valorization of PLA submitted to more than three successive reprocessing cycles. PMID:22481003

  2. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract the individual trajectories such as fixed points, periodic orbits, and separatrices that are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful for the applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational costs. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach in constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces including engine simulation data sets. PMID:18467759

  3. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE PAGESBeta

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  4. Thermal decomposition products of butyraldehyde

    NASA Astrophysics Data System (ADS)

    Hatten, Courtney D.; Kaskey, Kevin R.; Warner, Brian J.; Wright, Emily M.; McCunn, Laura R.

    2013-12-01

    The thermal decomposition of gas-phase butyraldehyde, CH3CH2CH2CHO, was studied in the 1300-1600 K range with a hyperthermal nozzle. Products were identified via matrix-isolation Fourier transform infrared spectroscopy and photoionization mass spectrometry in separate experiments. There are at least six major initial reactions contributing to the decomposition of butyraldehyde: a radical decomposition channel leading to propyl radical + CO + H; molecular elimination to form H2 + ethylketene; a keto-enol tautomerism followed by elimination of H2O producing 1-butyne; an intramolecular hydrogen shift and elimination producing vinyl alcohol and ethylene; a β-C-C bond scission yielding ethyl and vinoxy radicals; and a γ-C-C bond scission yielding methyl and CH2CH2CHO radicals. The first three reactions are analogous to those observed in the thermal decomposition of acetaldehyde, but the latter three reactions are made possible by the longer alkyl chain structure of butyraldehyde. The products identified following thermal decomposition of butyraldehyde are CO, HCO, CH3CH2CH2, CH3CH2CH=C=O, H2O, CH3CH2C≡CH, CH2CH2, CH2=CHOH, CH2CHO, CH3, HC≡CH, CH2CCH, CH3C≡CH, CH3CH=CH2, H2C=C=O, CH3CH2CH3, CH2=CHCHO, C4H2, C4H4, and C4H8. The first ten products listed are direct products of the six reactions listed above. The remaining products can be attributed to further decomposition reactions or bimolecular reactions in the nozzle.

  5. Updating the singular value decomposition

    NASA Astrophysics Data System (ADS)

    Davies, Philip I.; Smith, Matthew I.

    2004-09-01

    The spectral decomposition of a symmetric matrix A with small off-diagonal and distinct diagonal elements can be approximated using a direct scheme of R. Davies and Modi (Linear Algebra Appl. 77 (1986) 61). In this paper a generalization of this method for computing the singular value decomposition of close-to-diagonal matrices is presented. When A has repeated or "close" singular values it is possible to apply the direct method to split the problem in two, with one part containing the well-separated singular values and one requiring the computation of the "close" singular values.

  6. Accelerating decomposition of light field video for compressive multi-layer display.

    PubMed

    Cao, Xuan; Geng, Zheng; Li, Tuotuo; Zhang, Mei; Zhang, Zhaoxing

    2015-12-28

    Compressive light field display based on multi-layer LCDs is becoming a popular solution for 3D display. Decomposing the light field into layer images is the most challenging task. Iterative algorithms are effective solvers for this high-dimensional decomposition problem. Existing algorithms, however, iterate from random initial values. As such, significant computation time is required due to the deviation between the random initial estimate and the target values. Real-time 3D display at video rate is difficult with existing algorithms. In this paper, we present a new algorithm that provides better initial values and accelerates decomposition of light field video. We utilize the internal coherence of a single light field frame to transfer the gap between initial estimate and target to a much lower resolution level. In addition, we exploit external coherence to further accelerate light field video decomposition, achieving a 5.91-times speed improvement. We built a prototype and developed a parallel algorithm based on CUDA. PMID:26832058

  7. Hybridization of decomposition and local search for multiobjective optimization.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2014-10-01

    Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single objective local search is applied to each perturbed solution in P(L) for improving P(L) and P(E), and reinitializing P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem specific knowledge, well developed single objective local search and heuristics and Pareto local search methods can be hybridized. It is a population based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best so far heuristics on these two problems. PMID:25222724

  8. LP and NLP decomposition without a master problem

    SciTech Connect

    Fuller, D.; Lan, B.

    1994-12-31

    We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.

  9. Cadaver decomposition in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Carter, David O.; Yellowlees, David; Tibbett, Mark

    2007-01-01

    A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems.

  10. An analysis of scatter decomposition

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1990-01-01

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
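
    The modular ("scatter") assignment itself is a one-line mapping: piece i goes to processor i mod P. The toy sketch below only illustrates the qualitative point made above, that scattering a more finely decomposed, spatially correlated workload evens out per-processor load; the workload model and sizes are arbitrary assumptions.

      import numpy as np

      def scatter_loads(workload, n_procs, pieces_per_proc):
          """Split a 1-D workload into equal pieces; piece i is assigned to processor i mod n_procs."""
          pieces = np.array_split(workload, n_procs * pieces_per_proc)
          loads = np.zeros(n_procs)
          for i, piece in enumerate(pieces):
              loads[i % n_procs] += piece.sum()
          return loads

      # Spatially correlated workload: a smoothed random profile over the domain.
      rng = np.random.default_rng(0)
      work = np.convolve(rng.random(4096), np.ones(256) / 256, mode="same")

      for k in (1, 4, 16):
          loads = scatter_loads(work, n_procs=8, pieces_per_proc=k)
          print(f"{k:2d} piece(s) per processor -> load std = {loads.std():.3f}")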

  11. The ecology of carrion decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Carrion, or the remains of dead animals, is something that most people would like to avoid. It is visually unpleasant, emits foul odors, and may be the source of numerous pathogens. Decomposition of carrion, however, provides a unique opportunity for scientists to investigate how nutrients cycle t...

  12. How Is Morphological Decomposition Achieved?

    ERIC Educational Resources Information Center

    Libben, Gary

    1994-01-01

    Two experiments investigated morphological decomposition in ambiguous novel compounds such as "busheater," which can be parsed as either "bus-heater" or "bush-heater." It was found that subjects' parsing choices for such words are influenced by orthographic constraints but that these constraints do not operate prelexically. (33 references) (MDM)

  13. Microbial interactions during carrion decomposition

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This addresses the microbial ecology of carrion decomposition in the age of metagenomics. It describes what is known about the microbial communities on carrion, including a brief synopsis about the communities on other organic matter sources. It provides a description of studies using state-of-the...

  14. Empirical mode decomposition as a time-varying multirate signal processing system

    NASA Astrophysics Data System (ADS)

    Yang, Yanli

    2016-08-01

    Empirical mode decomposition (EMD) can adaptively split composite signals into narrow subbands termed intrinsic mode functions (IMFs). Although an analytical expression of IMFs extracted by EMD from signals is introduced in Yang et al. (2013) [1], it is only used for the case of extrema spaced uniformly. In this paper, the EMD algorithm is analyzed from digital signal processing perspective for the case of extrema spaced nonuniformly. Firstly, the extrema extraction is represented by a time-varying extrema decimator. The nonuniform extrema extraction is analyzed through modeling the time-varying extrema decimation at a fixed time point as a time-invariant decimation. Secondly, by using the impulse/summation approach, spline interpolation for knots spaced nonuniformly is shown as two basic operations, time-varying interpolation and filtering by a time-varying spline filter. Thirdly, envelopes of signals are written as the output of the time-varying spline filter. An expression of envelopes of signals in both time and frequency domain is presented. The EMD algorithm is then described as a time-varying multirate signal processing system. Finally, an equation to model IMFs is derived by using a matrix formulation in time domain for the general case of extrema spaced nonuniformly.
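
    A heavily simplified sifting loop conveys what EMD does operationally: interpolate upper and lower envelopes through the local extrema, subtract their mean, repeat until an IMF emerges, then remove it and continue on the residual. The version below ignores boundary treatment and proper stopping criteria, so it is a didactic sketch rather than the multirate formulation analyzed in the paper.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def sift(x, signal, n_sift=10):
          """Extract one IMF by repeated envelope-mean removal (fixed number of sifts)."""
          h = signal.copy()
          for _ in range(n_sift):
              maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
              minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
              if len(maxima) < 2 or len(minima) < 2:
                  break
              upper = CubicSpline(x[maxima], h[maxima], bc_type="natural")(x)
              lower = CubicSpline(x[minima], h[minima], bc_type="natural")(x)
              h = h - 0.5 * (upper + lower)   # remove the mean of the two envelopes
          return h

      def emd(x, signal, n_imfs=2):
          residual, imfs = signal.copy(), []
          for _ in range(n_imfs):
              imf = sift(x, residual)
              imfs.append(imf)
              residual = residual - imf
          return imfs, residual

      x = np.linspace(0.0, 1.0, 1000)
      signal = np.sin(2 * np.pi * 40 * x) + 0.5 * np.sin(2 * np.pi * 5 * x)
      imfs, residual = emd(x, signal)
      print("IMF standard deviations:", [round(float(np.std(imf)), 3) for imf in imfs])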

  15. Parametrized mode decomposition for bifurcation analysis applied to a thermo-acoustically oscillating flame

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter; Richecoeur, Franck; Durox, Daniel

    2014-11-01

    Thermo-acoustic systems belong to a class of dynamical systems that are governed by multiple parameters. Changing these parameters alters the response of the dynamical system and causes it to bifurcate. Due to their many applications and potential impact on a variety of combustion systems, there is great interest in devising control strategies to weaken or suppress thermo-acoustic instabilities. However, the system dynamics have to be available in reduced-order form to allow the design of such controllers and their operation in real-time. As the dominant modes and their respective frequencies change with varying the system parameters, the dynamical system needs to be analyzed separately for a set of fixed parameter values, before the dynamics can be linked in parameter-space. This two-step process is not only cumbersome, but also ambiguous when applied to systems operating close to a bifurcation point. Here we propose a parametrized decomposition algorithm which is capable of analyzing dynamical systems as they go through a bifurcation, extracting the dominant modes of the pre- and post-bifurcation regime. The algorithm is applied to a thermo-acoustically oscillating flame and to pressure signals from experiments. A few selected modes are capable of reproducing the dynamics.

  16. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  17. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.

  18. Spectral decomposition of the aerodynamic noise generated by rotating sources

    NASA Astrophysics Data System (ADS)

    Bongiovì, Alessandro; Cattanei, Andrea

    2011-01-01

    A method is proposed for separating the noise emitted by an aerodynamic source from propagation effects using spectral decomposition. This technique is applied to the power spectra of a fan measured at several rotational speeds. Although it has been conceived for rotating sources such as turbomachinery rotors, the method may be easily applied to low speed stationary sources such as jets and flows in stators and about isolated airfoils. Based on the similarity theory, a clear description of the structure of the power spectrum of the received noise is given and the effect of rotational speed variations is considered as a means to obtain a data set suitable to perform the spectral decomposition. The problem is analyzed in order to clarify possibilities and limitations of the method and then an algorithm is presented which is based on the solution of the derived equations. Particular care is devoted to both the numerical details and the operative aspects. The validation of the algorithm is performed by means of numerically generated input data. Next, in order to verify the ability of the method in separating scattered from emitted sound, an automotive cooling fan has been tested in the DIMSET hemi-anechoic room in a free-field configuration and with a shielded microphone. These two apparently distinct spectra collapse to within less than 2 dB after the spectral decomposition has been performed. The tests prove the ability of the method despite the modest quantity of input data.

  19. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  20. Improvements to the stand and hit algorithm

    SciTech Connect

    Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.

    1994-12-31

    The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. As well, we present the undetected first rule for determining the hit constraints.
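
    For linear constraints of the form a_i^T x <= b_i, the "hit" step from a standing point along a random direction simply takes the smallest positive step to any constraint boundary. A toy sketch under that assumption is shown below; the standing-point selection rules discussed in the talk are not reproduced, and the unit-square example is an arbitrary choice.

      import numpy as np

      def hit_constraint(A, b, x0, rng):
          """From interior point x0, move along a random direction and report which
          constraint a_i^T x <= b_i is hit first (a candidate necessary constraint)."""
          d = rng.normal(size=x0.size)
          d /= np.linalg.norm(d)
          slack = b - A @ x0          # positive for an interior standing point
          rate = A @ d                # rate at which each slack is consumed
          steps = np.full(rate.shape, np.inf)
          pos = rate > 1e-12
          steps[pos] = slack[pos] / rate[pos]
          return int(np.argmin(steps))

      # Unit square: x >= 0, y >= 0, x <= 1, y <= 1 written as A x <= b.
      A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
      b = np.array([0.0, 0.0, 1.0, 1.0])
      rng = np.random.default_rng(0)
      hits = [hit_constraint(A, b, np.array([0.5, 0.5]), rng) for _ in range(1000)]
      print("hit counts per constraint:", np.bincount(hits, minlength=4))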

  1. Thermal decomposition of high-nitrogen energetic compounds: TAGzT and GUzT

    NASA Astrophysics Data System (ADS)

    Hayden, Heather F.

    The U.S. Navy is exploring high-nitrogen compounds as burning-rate additives to meet the growing demands of future high-performance gun systems. Two high-nitrogen compounds investigated as potential burning-rate additives are bis(triaminoguanidinium) 5,5'-azobitetrazolate (TAGzT) and bis(guanidinium) 5,5'-azobitetrazolate (GUzT). Small-scale tests showed that formulations containing TAGzT exhibit significant increases in the burning rates of RDX-based gun propellants. However, when GUzT, a similarly structured molecule, was incorporated into the formulation, there was essentially no effect on the burning rate of the propellant. Through the use of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and Fourier-Transform ion cyclotron resonance (FTICR) mass spectrometry methods, an investigation of the underlying chemical and physical processes that control the thermal decomposition behavior of TAGzT and GUzT, alone and in the presence of RDX, was conducted. The objective was to determine why GUzT is not as good a burning-rate enhancer in RDX-based gun propellants as compared to TAGzT. The results show that TAGzT is an effective burning-rate modifier in the presence of RDX because the decomposition of TAGzT alters the initial stages of the decomposition of RDX. Hydrazine, formed in the decomposition of TAGzT, reacts faster with RDX than RDX can decompose itself. The reactions occur at temperatures below the melting point of RDX and thus the TAGzT decomposition products react with RDX in the gas phase. Although there is no hydrazine formed in the decomposition of GUzT, amines formed in the decomposition of GUzT react with aldehydes, formed in the decomposition of RDX, resulting in an increased reaction rate of RDX in the presence of GUzT. However, GUzT is not an effective burning-rate modifier because its decomposition does not alter the initial gas-phase decomposition of RDX. The decomposition of GUzT occurs at temperatures above the melting point

  2. Investigating hydrogel dosimeter decomposition by chemical methods

    NASA Astrophysics Data System (ADS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products.

  3. Renewable energy in electric utility capacity planning: a decomposition approach with application to a Mexican utility

    SciTech Connect

    Staschus, K.

    1985-01-01

    In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.

  4. Application of the Subgroup Decomposition Method (SDM) for Reactor Simulation

    NASA Astrophysics Data System (ADS)

    Roskoff, Nathan; Walters, William; Haghighat, Alireza

    2016-02-01

    The performance of the TITAN-SDM algorithm for solving a reactor pressure vessel dosimetry problem is evaluated. Douglass and Rahnema recently developed the subgroup decomposition method (SDM), a methodology which directly couples a consistent coarse-group transport calculation with a set of "decomposition sweeps" to provide a fine-group flux spectrum. The SDM has been implemented into the TITAN three-dimensional transport code and has been shown to accurately solve core criticality problems while significantly reducing computation time. This paper addresses the use of SDM for fixed-source problems. The VENUS-2 dosimetry benchmark problem is selected with an emphasis on fast neutron analysis; therefore, material cross sections are generated from the BUGLE-96 library considering neutron energies greater than 0.1 MeV. The accuracy and efficiency of TITAN-SDM is evaluated by comparison with a standard TITAN multigroup calculation.

  5. Generalization of the Cartan and Iwasawa Decompositions to SL2(k)

    NASA Astrophysics Data System (ADS)

    Sutherland, Amanda Kay

    The Cartan and Iwasawa decompositions of real reductive groups were developed over 100 years ago and have been extensively studied. They play a fundamental role in the representation theory of the groups and their corresponding symmetric spaces. These decompositions are defined by an involution with a compact fixed-point group, called a Cartan involution. Removing the requirement of having a Cartan involution or being defined over the real numbers causes this decomposition to break down. For an arbitrary involution, one can consider similar decompositions over other fields. We offer a generalization of the Cartan and Iwasawa decompositions for the algebraic group G = SL2(k) over an arbitrary field k and a general involution. Additionally, we give a detailed analysis of the structure of the symmetric and extended symmetric spaces over any field, defined by a general involution.

  6. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
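
    The threshold-decomposition principle described in the patent can be mimicked entirely in software: slice the image into binary threshold components, apply the same linear neighborhood sum to every slice, compare against a rank threshold point-by-point, and stack the results. The sketch below does this for a small grayscale image and checks it against a direct median filter; it illustrates only the principle, not the optical architecture, and the window size and rank are arbitrary choices.

      import numpy as np
      from scipy.ndimage import convolve, median_filter

      def ranked_order_filter(image, k, size=3, levels=256):
          """Ranked-order filter via threshold decomposition: each output pixel is the
          k-th largest value in its size x size neighborhood (k=1 max, k=size*size min,
          k=(size*size+1)//2 median for odd-sized windows)."""
          kernel = np.ones((size, size))
          out = np.zeros(image.shape, dtype=int)
          for t in range(1, levels):
              binary = (image >= t).astype(float)                # threshold decomposition
              counts = convolve(binary, kernel, mode="nearest")  # linear, space-invariant step
              out += (counts >= k - 0.5).astype(int)             # point-by-point threshold comparison
          return out

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(32, 32))
      med = ranked_order_filter(img, k=5, size=3)                # 5th largest of 9 = median
      print("matches direct median filter:",
            np.array_equal(med, median_filter(img, size=3, mode="nearest")))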

  7. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.

  8. Numerically stable Jacobi array for parallel singular value decomposition (SVD) updating

    NASA Astrophysics Data System (ADS)

    Vanpoucke, Filiep J.; Moonen, Marc; Deprettere, Ed F. A.

    1994-10-01

    A novel algorithm is presented for updating the singular value decomposition in parallel. It is an improvement upon an earlier developed Jacobi-type SVD updating algorithm, where now the exact orthogonality of a certain matrix is guaranteed by means of a minimal factorization in terms of angles. Its orthogonality is known to be crucial for the numerical stability of the overall algorithm. The factored approach leads to a triangular array of rotation cells, implementing an orthogonal matrix-vector multiplication, and a novel array for SVD updating. Both arrays can be built up of CORDIC processors since the algorithms make exclusive use of orthogonal planar transformations.

  9. Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation

    DOE PAGESBeta

    Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir

    2016-05-01

    Our recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations of the algorithm. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithm on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.

  10. Domain decomposition solvers for PDEs : some basics, practical tools, and new developments.

    SciTech Connect

    Dohrmann, Clark R.

    2010-11-01

    The first part of this talk provides a basic introduction to the building blocks of domain decomposition solvers. Specific details are given for both the classical overlapping Schwarz (OS) algorithm and a recent iterative substructuring (IS) approach called balancing domain decomposition by constraints (BDDC). A more recent hybrid OS-IS approach is also described. The success of domain decomposition solvers depends critically on the coarse space. Similarities and differences between the coarse spaces for OS and BDDC approaches are discussed, along with how they can be obtained from discrete harmonic extensions. Connections are also made between coarse spaces and multiscale modeling approaches from computational mechanics. As a specific example, details are provided on constructing coarse spaces for incompressible fluid problems. The next part of the talk deals with a variety of implementation details for domain decomposition solvers. These include mesh partitioning options, local and global solver options, reducing the coarse space dimension, dealing with constraint equations, residual weighting to accelerate the convergence of OS methods, and recycling of Krylov spaces to efficiently solve problems with multiple right hand sides. Some potential bottlenecks and remedies for domain decomposition solvers are also discussed. The final part of the talk concerns some recent theoretical advances, new algorithms, and open questions in the analysis of domain decomposition solvers. The focus will be primarily on the work of the speaker and his colleagues on elasticity, fluid mechanics, problems in H(curl), and the analysis of subdomains with irregular boundaries.

  11. Thermal Decomposition Mechanism of Butyraldehyde

    NASA Astrophysics Data System (ADS)

    Hatten, Courtney D.; Warner, Brian; Wright, Emily; Kaskey, Kevin; McCunn, Laura R.

    2013-06-01

    The thermal decomposition of butyraldehyde, CH_3CH_2CH_2C(O)H, has been studied in a resistively heated SiC tubular reactor. Products of pyrolysis were identified via matrix-isolation FTIR spectroscopy and photoionization mass spectrometry in separate experiments. Carbon monoxide, ethene, acetylene, water and ethylketene were among the products detected. To unravel the mechanism of decomposition, pyrolysis of a partially deuterated sample of butyraldehyde was studied. Also, the concentration of butyraldehyde in the carrier gas was varied in experiments to determine the presence of bimolecular reactions. The results of these experiments can be compared to the dissociation pathways observed in similar aldehydes and are relevant to the processing of biomass, foods, and tobacco.

  12. Thermal decomposition of isooctyl nitrate

    SciTech Connect

    Pritchard, H.O.

    1989-03-01

    The diesel ignition improver DII-3, made by Ethyl Corporation, also known as isooctyl nitrate, is a mixture whose principal constituent (about 95%) is 2-ethyl hexyl nitrate. This note describes an investigation of the thermal decomposition that is not exhaustive, but that is intended to provide sufficient information on the rate and the mechanism so as to make possible the educated guesses needed for modeling the effect of isooctyl nitrate on the diesel ignition process. As is the case with other alkyl nitrates, the decomposition of the neat material is a complex one giving a complicated pressure versus time curve, unsuitable for a quick derivation of the rate constant. However, in the presence of toluene, whose intended purpose is to trap reactive free radicals and thereby simplify the overall mechanism, the pressure rises approximately exponentially to a limit; thus, on the assumption that the reaction is homogeneous and of first order, the rate constants can be determined from the half-life.

  13. GPU Accelerated Event Detection Algorithm

    Energy Science and Technology Software Center (ESTSC)

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) there is a need for event detection algorithms that can scale with the size of data, (ii) there is a need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, and (iii) there is a need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop the parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.

  14. Parametrized data-driven decomposition for bifurcation analysis, with application to thermo-acoustically unstable systems

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter J.; Richecoeur, Franck; Durox, Daniel

    2015-03-01

    Dynamic mode decomposition (DMD) belongs to a class of data-driven decomposition techniques, which extracts spatial modes of a constant frequency from a given set of numerical or experimental data. Although the modal shapes and frequencies are a direct product of the decomposition technique, the determination of the respective modal amplitudes is non-unique. In this study, we introduce a new algorithm for defining these amplitudes, which is capable of capturing physical growth/decay rates of the modes within a transient signal and is otherwise not straightforward using the standard DMD algorithm. In addition, a parametric DMD algorithm is introduced for studying dynamical systems going through a bifurcation. The parametric DMD alleviates multiple applications of the DMD decomposition to the system with fixed parametric values by including the bifurcation parameter in the decomposition process. The parametric DMD with amplitude correction is applied to a numerical and experimental data sequence taken from thermo-acoustically unstable systems. Using DMD with amplitude correction, we are able to identify the dominant modes of the transient regime and their respective growth/decay rates leading to the final limit-cycle. In addition, by applying parametrized DMD to images of an oscillating flame, we are able to identify the dominant modes of the bifurcation diagram.
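
    A bare-bones "exact DMD" of a snapshot sequence is sketched below to make the terms concrete: modes, eigenvalues (frequencies and growth/decay rates), and the conventional least-squares amplitudes. The amplitude-correction and parametrized extensions introduced in the paper are not reproduced, and the travelling-wave test data and truncation rank are assumptions.

      import numpy as np

      def dmd(X, r):
          """Exact DMD of snapshot matrix X (space x time); rank-r truncation."""
          X1, X2 = X[:, :-1], X[:, 1:]
          U, s, Vt = np.linalg.svd(X1, full_matrices=False)
          U, s, Vt = U[:, :r], s[:r], Vt[:r]
          Atilde = (U.conj().T @ X2 @ Vt.conj().T) / s          # low-rank evolution operator
          eigvals, W = np.linalg.eig(Atilde)
          modes = X2 @ Vt.conj().T @ (W / s[:, None])           # exact DMD modes
          amps = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0] # conventional (non-unique) amplitudes
          return modes, eigvals, amps

      # Synthetic data: two travelling waves at angular frequencies 2 and 5.
      x = np.linspace(-np.pi, np.pi, 128)[:, None]
      t = np.linspace(0.0, 8 * np.pi, 200)[None, :]
      X = np.cos(2 * t - x) + 0.5 * np.cos(5 * t - 3 * x)
      modes, eigvals, amps = dmd(X, r=4)
      dt = t[0, 1] - t[0, 0]
      print("recovered angular frequencies:",
            np.round(np.sort(np.abs(np.angle(eigvals) / dt)), 3))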

  15. Thermal decomposition of allylbenzene ozonide

    SciTech Connect

    Ewing, J.C.; Church, D.F.; Pryor, W.A. )

    1989-07-19

    Thermal decomposition of allylbenzene ozonide (ABO) at 98 °C in the liquid phase yields toluene, bibenzyl, phenylacetaldehyde, formic acid, and (benzyloxy)methyl formate as major products; benzyl chloride is formed when chlorinated solvents are employed. These products, as well as benzyl formate, are formed when ABO is decomposed at 37 °C. When the decomposition of ABO is carried out in the presence of 1-butanethiol, the product distribution changes: yields of toluene increase, no bibenzyl is formed, and decreases in yields of (benzyloxy)methyl formate, phenylacetaldehyde, and benzyl chloride are observed. The decomposition of 1-octene ozonide (OTO) also was studied for comparison. The activation parameters for both ABO and OTO are similar (28.2 kcal/mol, log A = 13.6 and 26.6 kcal/mol, log A = 12.5, respectively); these data suggest that ozonides decompose by homolysis of the O-O bond, rather than by an alternative synchronous two-bond scission process. When ABO is decomposed at 37 °C in the presence of the spin traps 5,5-dimethyl-1-pyrroline N-oxide (DMPO) or 3,3,5,5-tetramethyl-1-pyrroline N-oxide (M4PO), ESR signals are observed that are consistent with the trapping of benzyl and other carbon- and oxygen-centered radicals. A mechanism for the thermal decomposition of ABO that involves peroxide bond homolysis and subsequent β-scission is proposed. Thus, Criegee ozonides decompose to give free radicals at quite modest temperatures.

  16. Implementation of parallel matrix decomposition for NIKE3D on the KSR1 system

    SciTech Connect

    Su, Philip S.; Fulton, R.E.; Zacharia, T.

    1995-06-01

    New massively parallel computer architecture has revolutionized the design of computer algorithms and promises to have significant influence on algorithms for engineering computations. Realistic engineering problems using finite element analysis typically imply excessively large computational requirements. Parallel supercomputers that have the potential for significantly increasing calculation speeds can meet these computational requirements. This report explores the potential for the parallel Cholesky (U^T DU) matrix decomposition algorithm on NIKE3D through actual computations. The examples of two- and three-dimensional nonlinear dynamic finite element problems are presented on the Kendall Square Research (KSR1) multiprocessor system, with 64 processors, at Oak Ridge National Laboratory. The numerical results indicate that the parallel Cholesky (U^T DU) matrix decomposition algorithm is attractive for NIKE3D under multi-processor system environments.

  17. Decomposition of frequency characteristics of acoustic emission signals for different types of partial discharges sources

    NASA Astrophysics Data System (ADS)

    Witos, F.; Gacek, Z.; Paduch, P.

    2006-11-01

    The problem addressed in the article is the decomposition of frequency characteristics of AE signals into elementary three-parameter Gaussian functions. At the first stage, for modelled curves in the form of sums of three-parameter Gaussian peaks, the agreement between the modelled curve and the curves resulting from solutions obtained using a dynamic-window method, the Levenberg-Marquardt algorithm, genetic algorithms, and a differential evolution algorithm is discussed. It is found that analyses carried out by means of the differential evolution algorithm are effective, and a computer system serving the analysis of AE signal frequency characteristics was constructed. Decompositions of frequency characteristics are carried out for selected AE signals coming from modelled PD sources using different ends of the bushing and from real PD sources in generator coil bars.
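
    A toy version of such a fit, decomposing a spectrum-like curve into three-parameter Gaussian peaks with SciPy's differential evolution optimizer, is sketched below. The frequency axis, bounds, noise level, and number of peaks are placeholders, not values from the AE measurements.

      import numpy as np
      from scipy.optimize import differential_evolution

      def gauss_sum(f, params):
          """Sum of Gaussians; params is a flat array of (amplitude, center, width) triples."""
          y = np.zeros_like(f)
          for a, c, w in np.asarray(params).reshape(-1, 3):
              y += a * np.exp(-0.5 * ((f - c) / w) ** 2)
          return y

      def fit_peaks(f, spectrum, n_peaks):
          def cost(p):
              return np.sum((spectrum - gauss_sum(f, p)) ** 2)
          bounds = [(0.0, 1.5 * spectrum.max()),          # amplitude
                    (f.min(), f.max()),                   # center
                    (1e-3, f.max() - f.min())] * n_peaks  # width
          result = differential_evolution(cost, bounds, seed=0, maxiter=500, tol=1e-8)
          return result.x.reshape(-1, 3)

      f = np.linspace(0.0, 500.0, 400)                    # e.g. frequency in kHz (placeholder)
      true_params = np.array([1.0, 120.0, 20.0, 0.6, 300.0, 35.0])
      spectrum = gauss_sum(f, true_params) + 0.02 * np.random.default_rng(0).normal(size=f.size)
      print(np.round(fit_peaks(f, spectrum, n_peaks=2), 1))   # rows: amplitude, center, width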

  18. Methanethiol decomposition on Ni(100)

    SciTech Connect

    Castro, M.E.; Ahkter, S.; Golchet, A.; White, J.M. ); Sahin, T. )

    1991-01-01

    Static secondary ion mass spectroscopy (SSIMS), temperature programmed desorption (TPD), and Auger electron spectroscopy (AES) were used under ultrahigh vacuum conditions to study the decomposition of CH3SH on Ni(100). Only methane, hydrogen, and the parent molecule are observed in TPD. Complete decomposition to C(a), S(a) and desorbing H2 is the preferred reaction pathway for low exposures, while desorption of methane is observed at higher coverages. Preadsorbed hydrogen promoted methane desorption. Upon adsorption, and for low coverages, SSIMS evidence indicates S-H bond cleavage into CH3S and surface hydrogen. S-H bond cleavage is inhibited for high coverages. The TP-SSIMS data are consistent with an activated C-S bond cleavage in CH3S, with an activation energy of 8.81 kcal/mol and preexponential factor of 10^6.5 s^-1. The low preexponential factor is taken as indicating a complex decomposition pathway. A mechanism consistent with the observed data is discussed.

  19. Phlogopite Decomposition, Water, and Venus

    NASA Technical Reports Server (NTRS)

    Johnson, N. M.; Fegley, B., Jr.

    2005-01-01

    Venus is a hot and dry planet with a surface temperature of 660 to 740 K and 30 parts per million by volume (ppmv) water vapor in its lower atmosphere. In contrast Earth has an average surface temperature of 288 K and 1-4% water vapor in its troposphere. The hot and dry conditions on Venus led many to speculate that hydrous minerals on the surface of Venus would not be there today even though they might have formed in a potentially wetter past. Thermodynamic calculations predict that many hydrous minerals are unstable under current Venusian conditions. Thermodynamics predicts whether a particular mineral is stable or not, but we need experimental data on the decomposition rate of hydrous minerals to determine if they survive on Venus today. Previously, we determined the decomposition rate of the amphibole tremolite, and found that it could exist for billions of years at current surface conditions. Here, we present our initial results on the decomposition of phlogopite mica, another common hydrous mineral on Earth.

  20. Thermal decomposition mechanism of disilane.

    PubMed

    Yoshida, Kazumasa; Matsumoto, Keiji; Oguchi, Tatsuo; Tonokura, Kenichi; Koshi, Mitsuo

    2006-04-13

    Thermal decomposition of disilane was investigated using time-of-flight (TOF) mass spectrometry coupled with vacuum ultraviolet single-photon ionization (VUV-SPI) at a temperature range of 675-740 K and total pressure of 20-40 Torr. Si(n)H(m) species were photoionized by VUV radiation at 10.5 eV (118 nm). Concentrations of disilane and trisilane during thermal decomposition of disilane were quantitatively measured using the VUV-SPI method. Formation of Si(2)H(4) species was also examined. On the basis of pressure-dependent rate constants of disilane dissociation reported by Matsumoto et al. [J. Phys. Chem. A 2005, 109, 4911], kinetic simulation including gas-phase and surface reactions was performed to analyze thermal decomposition mechanisms of disilane. The branching ratio for (R1) Si(2)H(6) --> SiH(4) + SiH(2)/(R2) Si(2)H(6) --> H(2) + H(3)SiSiH was derived by the pressure-dependent rate constants. Temperature and reaction time dependences of disilane loss and formation of trisilane were well represented by the kinetic simulation. Comparison between the experimental results and the kinetic simulation results suggested that about 70% of consumed disilane was converted to trisilane, which was observed as one of the main reaction products under the present experimental conditions. PMID:16599440

  1. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
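
    For orientation only, a minimal single-level 2-D Haar decomposition in the spirit of Mallat's multi-resolution analysis is sketched below in NumPy; it is a serial toy, not one of the parallel implementations benchmarked in the paper, and the random test image is invented.

        import numpy as np

        def haar_decompose(image):
            """One level of a separable 2-D Haar decomposition: returns the
            approximation (LL) and detail (LH, HL, HH) sub-bands."""
            a = (image[:, 0::2] + image[:, 1::2]) / np.sqrt(2)   # row low-pass
            d = (image[:, 0::2] - image[:, 1::2]) / np.sqrt(2)   # row high-pass
            ll = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)
            lh = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)
            hl = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)
            hh = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)
            return ll, lh, hl, hh

        img = np.random.default_rng(0).random((256, 256))
        ll, lh, hl, hh = haar_decompose(img)
        print(ll.shape)  # (128, 128); recurse on ll for further MRA levels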

  2. Tensor network decompositions in the presence of a global symmetry

    SciTech Connect

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-11-15

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  3. Supervised Single-Channel Speech Separation via Sparse Decomposition Using Periodic Signal Models

    NASA Astrophysics Data System (ADS)

    Nakashizuka, Makoto; Okumura, Hiroyuki; Iiguni, Youji

    In this paper, we propose a method for supervised single-channel speech separation through sparse decomposition using periodic signal models. The proposed separation method employs sparse decomposition, which decomposes a signal into a set of periodic signals under a sparsity penalty. In order to achieve separation through sparse decomposition, the decomposed periodic signals have to be assigned to the corresponding sources. For the assignment of the periodic signal, we introduce clustering using a K-means algorithm to group the decomposed periodic signals into as many clusters as the number of speakers. After the clustering, each cluster is assigned to its corresponding speaker using preliminarily learnt codebooks. Through separation experiments, we compare our method with MaxVQ, which performs separation on the frequency spectrum domain. The experimental results in terms of signal-to-distortion ratio show that the proposed sparse decomposition method is comparable to the frequency domain approach and has less computational costs for assignment of speech components.
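
    A minimal sketch of the clustering step alone (grouping decomposed periodic components into as many clusters as speakers) is given below using scikit-learn's K-means; the two-dimensional feature vectors are invented stand-ins for whatever per-component features an implementation would actually use.

        import numpy as np
        from sklearn.cluster import KMeans

        # hypothetical per-component feature vectors for components of two speakers
        rng = np.random.default_rng(0)
        components = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)),
                                rng.normal(3.0, 0.3, size=(30, 2))])

        # group the decomposed periodic signals into as many clusters as speakers
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
        print(np.bincount(labels))  # roughly 30 components assigned to each cluster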

  4. Two decoupling methods for non-isothermal DSC results of AIBN decomposition.

    PubMed

    Zhang, Cai-Xing; Lu, Gui-Bin; Chen, Li-Ping; Chen, Wang-Hua; Peng, Min-Jun; Lv, Jia-Yu

    2015-03-21

    During the thermal decomposition of azobisisobutyronitrile (AIBN), the endothermic phase transition disturbed the exothermic decomposition, which deformed its thermal curves. Therefore, exact kinetic parameters of the decomposition could not be obtained with existing kinetic analysis models, and accurate enthalpy data for the decomposition and phase transition were not available. Two methods, i.e., a solvent method and a mathematical method, were introduced in this paper to resolve the coupling phenomenon. In the former method, AIBN was dissolved in aniline to eliminate the endothermic process and obtain curves of the liquid-state decomposition. In the latter method, MATLAB software was employed to obtain the "pure" exothermic decomposition curve without the influence of the phase transition, by fitting the coupled curves within the section after the transition point and extrapolating to the initial stage of decomposition. Moreover, the kinetic parameters of the "pure" exothermic decomposition of AIBN obtained by the mathematical fitting agreed with the results from the solvent method, verifying the accuracy of the decoupling. The research is of great significance for understanding the exact thermal behavior and safety parameters of AIBN. It also helps to determine the safe operating temperature and alarm temperature for processes in industry. PMID:25479145

  5. PrinCCes: Continuity-based geometric decomposition and systematic visualization of the void repertoire of proteins.

    PubMed

    Czirják, Gábor

    2015-11-01

    Grooves and pockets on the surface, channels through the protein, the chambers or cavities, and the tunnels connecting the internal points to each other or to the external fluid environment are fundamental determinants of a wide range of biological functions. PrinCCes (Protein internal Channel & Cavity estimation) is a computer program supporting the visualization of voids. It includes a novel algorithm for the decomposition of the entire void volume of the protein or protein complex to individual entities. The decomposition is based on continuity. An individual void is defined by uninterrupted extension in space: a spherical probe can freely move between any two internal locations of a continuous void. Continuous voids are detected irrespective of their topological complexity, they may contain any number of holes and bifurcations. The voids of a protein can be visualized one by one or in combinations as triangulated surfaces. The output is automatically exported to free VMD (Visual Molecular Dynamics) or Chimera software, allowing the 3D rotation of the surfaces and the production of publication quality images. PrinCCes with graphic user interface and command line versions are available for MS Windows and Linux. The source code and executable can be downloaded at any of the following links: http://scholar.semmelweis.hu/czirjakgabor/s/princces/#t1 https://github.com/CzirjakGabor/PrinCCes http://1drv.ms/1bP9iJ3. PMID:26409191

  6. Efficient Algorithm for Rectangular Spiral Search

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Breckenridge, William

    2008-01-01

    An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
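
    As an illustrative sketch of such a memoryless spiral indexing scheme (a square-spiral variant with unit grid spacing, not the NASA rectangular-grid algorithm itself), the function below maps a step index directly to grid coordinates without remembering earlier points.

        import math

        def spiral_point(n):
            """Map a 1-based step index n to (x, y) coordinates on an outward square
            spiral centred at the origin; no memory of earlier points is needed."""
            k = math.ceil((math.sqrt(n) - 1) / 2)   # ring (shell) number
            t = 2 * k + 1                           # outer side length of this ring
            m = t * t                               # last index contained in this ring
            t -= 1                                  # steps per side of the ring
            if n >= m - t:                          # bottom side, traversed rightwards
                return k - (m - n), -k
            m -= t
            if n >= m - t:                          # left side, traversed downwards
                return -k, -k + (m - n)
            m -= t
            if n >= m - t:                          # top side, traversed leftwards
                return -k + (m - n), k
            return k, k - (m - n - t)               # right side, traversed upwards

        print([spiral_point(i) for i in range(1, 10)])
        # [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]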

  7. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    SciTech Connect

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  8. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiplexing access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.

  9. Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR.

    PubMed

    Wang, Hanning; Zhou, Zhimin; Turnbull, John; Song, Qian; Qi, Feng

    2015-01-01

    In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for CTLR and DCP modes are established. The explicit expression of decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, for the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated by AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland. PMID:26393610
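
    The full CTLR/DCP scattering models are beyond a short example, but the bound mentioned in the abstract is easy to sketch. Assuming a Stokes vector g = [g0, g1, g2, g3], the degree of depolarization that serves as the upper bound on the volume-scattering free parameter can be computed as below; this is a simplified, hedged reading of the record, not the authors' code.

        import numpy as np

        def degree_of_depolarization(g):
            """1 - DoP for a Stokes vector g = [g0, g1, g2, g3]; used here as the
            upper bound on the volume-scattering contribution (free parameter)."""
            g = np.asarray(g, dtype=float)
            dop = np.sqrt(np.sum(g[1:] ** 2)) / g[0]   # degree of polarization
            return 1.0 - dop

        print(degree_of_depolarization([1.0, 0.3, 0.2, 0.1]))  # about 0.63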

  10. Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR

    PubMed Central

    Wang, Hanning; Zhou, Zhimin; Turnbull, John; Song, Qian; Qi, Feng

    2015-01-01

    In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for CTLR and DCP modes are established. The explicit expression of decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, for the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated by AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland. PMID:26393610

  11. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    from an improved cloud screening algorithm that utilizes an iterative rejection of cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly (56%) more and unbiased Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g. 368-nm channel) were selected and the radiative transfer model (MODTRAN) used to calculate direct normal and diffuse horizontal ratios which were used to evaluate the stability of TOD in cloud-free points. The spectral dependence of atmospheric constituents' optical properties and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. The test of this method on the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-) MFRSR channels and potentially improved the accuracy by 2-4% at most channels and over 10% at 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g. soil water, live biomass), allowed maximum photodecay rate to vary with litter's initial lignin fraction in the model, and validated the optimized model with LIDET observation of remaining carbon and nitrogen at three semi-arid sites. I

  12. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
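
    A minimal sketch of the unconstrained QR-based least-squares attitude solution described here is given below; the reference and measurement vectors are synthetic, and the function name is invented.

        import numpy as np

        def attitude_lstsq_qr(ref_vecs, meas_vecs):
            """Least-squares estimate of the matrix T mapping reference vectors to
            measurement vectors, ignoring the orthogonality constraint.
            Solves ref_vecs @ T.T ~= meas_vecs via a QR factorization."""
            Q, R = np.linalg.qr(ref_vecs)              # ref_vecs has shape (k, 3), k >= 3
            return np.linalg.solve(R, Q.T @ meas_vecs).T

        # synthetic check: a 5-degree rotation about z plus small measurement noise
        theta = np.deg2rad(5.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0,            0.0,           1.0]])
        rng = np.random.default_rng(1)
        refs = rng.normal(size=(20, 3))
        meas = refs @ R_true.T + 1e-4 * rng.normal(size=(20, 3))
        T_est = attitude_lstsq_qr(refs, meas)
        print(np.linalg.norm(T_est @ T_est.T - np.eye(3)))  # near-orthogonal for low noise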

  13. Accelerated ray tracing algorithm under urban macro cell

    NASA Astrophysics Data System (ADS)

    Liu, Z.-Y.; Guo, L.-X.; Guan, X.-W.

    2015-10-01

    In this study, a ray tracing propagation prediction model, which is based on creating a virtual source tree, is used because of its high efficiency and reliable prediction accuracy. In addition, several acceleration techniques are adopted to improve the efficiency of ray-tracing-based prediction over large areas. However, in the process of employing the ray tracing method for coverage zone prediction, runtime is linearly proportional to the total number of prediction points, leading to large and sometimes prohibitive computation time requirements under complex geographical urban macrocell environments. In order to overcome this bottleneck, the compute unified device architecture (CUDA), which provides fine-grained data parallelism and thread parallelism, is employed to accelerate the calculation. Taking full advantage of the tens of thousands of threads in a CUDA program, the coverage prediction problem is first decomposed by partitioning the image tree and the visible prediction points among different sources. Every thread then calculates the electromagnetic field of one propagation path, and these results are collected. Comparing this parallel algorithm with the traditional sequential algorithm shows that the computational efficiency is improved.

  14. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    SciTech Connect

    Weinstein, Marvin; Auerbach, Assa; Chandra, V.Ravi; /Technion

    2011-11-04

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. The lattice of size N is partitioned into two subclusters. At each iteration the Lanczos vector is projected into a set of n{sub svd} smaller subcluster vectors using singular value decomposition. For low entanglement entropy S{sub ee}, (satisfied by short range Hamiltonians), we expect the truncation error to vanish as exp(-n{sup 1/S{sub ee}}{sub svd}). Convergence is tested for the Heisenberg model on Kagome clusters of up to 36 sites, with no symmetries exploited, using less than 15GB of memory. Generalization to multiple partitioning is discussed.
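
    A toy NumPy sketch of the core compression step is given below, assuming a state vector on a lattice split into two subclusters; the dimensions and truncation rank are invented for the example, and the full Lanczos iteration is not shown.

        import numpy as np

        def compress_state(psi, dim_a, dim_b, n_svd):
            """Keep only the n_svd largest singular values of the (dim_a x dim_b)
            coefficient matrix of psi; returns the compressed state and the
            Frobenius-norm truncation error."""
            U, s, Vh = np.linalg.svd(psi.reshape(dim_a, dim_b), full_matrices=False)
            approx = (U[:, :n_svd] * s[:n_svd]) @ Vh[:n_svd]
            return approx.reshape(-1), np.sqrt(np.sum(s[n_svd:] ** 2))

        # random 12-site spin-1/2 state split into two 6-site subclusters
        rng = np.random.default_rng(0)
        psi = rng.normal(size=2 ** 12)
        psi /= np.linalg.norm(psi)
        compressed, err = compress_state(psi, 2 ** 6, 2 ** 6, n_svd=16)
        print(err)  # large for a random (highly entangled) state, small for low entanglement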

  15. A full variational calculation based on a tensor product decomposition

    NASA Astrophysics Data System (ADS)

    Senese, Frederick A.; Beattie, Christopher A.; Schug, John C.; Viers, Jimmy W.; Watson, Layne T.

    1989-08-01

    A new direct full variational approach exploits a tensor (Kronecker) product decomposition of the Hamiltonian. Explicit assembly and storage of the Hamiltonian matrix is avoided by using the Kronecker product structure to form matrix-vector products directly from the molecular integrals. Computation-intensive integral transformations and formula tapes are unnecessary. The wavefunction is expanded in terms of spin-free primitive kets rather than Slater determinants or configuration state functions, and the expansion is equivalent to a full configuration interaction expansion. The approach suggests compact storage schemes and algorithms which are naturally suited to parallel and pipelined machines.
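
    As a hedged illustration of forming matrix-vector products directly from Kronecker (tensor) product structure without assembling the full matrix, the sketch below uses the identity (A ⊗ B) vec(X) = vec(B X A^T) with column-major vec; the random matrices are stand-ins and have nothing to do with molecular integrals.

        import numpy as np

        def kron_matvec(A, B, x):
            """Compute (A kron B) @ x without forming np.kron(A, B) explicitly."""
            n, q = B.shape[1], A.shape[1]
            X = x.reshape(n, q, order="F")        # un-vectorize: x = vec(X)
            Y = B @ X @ A.T                        # (A kron B) vec(X) = vec(B X A^T)
            return Y.reshape(-1, order="F")        # re-vectorize the result

        rng = np.random.default_rng(0)
        A, B = rng.random((3, 4)), rng.random((5, 6))
        x = rng.random(4 * 6)
        assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)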

  16. Global patterns in litter decomposition: a synthesis.

    NASA Astrophysics Data System (ADS)

    Auch, W. E.; Ross, D. S.

    2007-12-01

    Leaf and coarse woody debris (LCWD) decay catalyzes the biochemical mechanisms of the soil-aboveground interface, and should be an important component of climate change models that address carbon and nitrogen. There is a clear need for the identification of determinant climate or litter chemistry parameters at the global scale. Local and global decay is commonly attributed to litter chemistry and climate, respectively. The objective of this synthesis was to illustrate LCWD decay across a global climate-chemistry continuum and contrast results with a previous assessment via both standard first-order (|k|) decay kinetics and gradient exponent values arranged in order of influence from initial to latter decay stages. Results suggest greater initial LCWD cation concentrations yielded the fastest initial rates of decomposition and most climatic indices appeared relevant at intermediate stages of decay. Elevation and refractory LCWD carbon (i.e. carbon, lignin, and tannins) were inversely correlated with decay, prolonging the process and possibly acting in concert as "end-point" determinants. Furthermore, the initial influence of nitrogen and phosphorus is universal across LCWD-type as well as ecoregion. Climate acts in a transitional role between easily solubilized and late or aromatic substrate decay. Global and continental carbon cycling assumptions and models must acknowledge: i) the influence of LCWD cation and N concentration during initial fragmentation, leaching, and transformation; ii) climate, specifically seasonal temperature averages > evapotranspiration > precipitation, during the interim; and iii) the ever-present influence of seasonality and litter aromatic components. Key Words: Leaf and Coarse Woody Debris (LCWD) decomposition, |k|, first-order kinetics, Carbon Cycle, Global Climate Change (GCC), Actual Evapotranspiration (AET).
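
    For readers unfamiliar with the |k| notation, the first-order decay constant follows from the fraction of litter mass remaining via m(t) = m0 exp(-kt); a small example with invented numbers is shown below.

        import numpy as np

        def decay_constant(m0, m_t, t_years):
            """First-order litter decay constant |k| (per year) from m(t) = m0 * exp(-k t)."""
            return -np.log(m_t / m0) / t_years

        # e.g. 40% of the initial litter mass remaining after 2 years
        print(decay_constant(100.0, 40.0, 2.0))  # about 0.46 per year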

  17. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
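
    A minimal sketch of the two-map decomposition described in this record is given below, using ordinary linear regression as a stand-in for whichever regression algorithms an implementation would choose; all data values, the failure threshold, and the variable names are invented for the example.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # offline stage: learn the two maps from hypothetical run-to-failure data
        features = np.array([[0.1], [0.3], [0.5], [0.7]])    # a single health feature
        damage = np.array([0.05, 0.20, 0.45, 0.70])          # ground-truth damage
        conditions = np.array([[1.0], [1.5], [2.0], [2.5]])  # an operational load level
        damage_rate = np.array([0.01, 0.02, 0.03, 0.04])     # damage per unit time

        feature_to_damage = LinearRegression().fit(features, damage)
        conditions_to_rate = LinearRegression().fit(conditions, damage_rate)

        # online stage: estimate current damage, then extrapolate to the failure threshold
        current_damage = feature_to_damage.predict([[0.6]])[0]
        rate = conditions_to_rate.predict([[1.8]])[0]
        rul = (1.0 - current_damage) / rate                  # threshold assumed to be 1.0
        print(f"damage {current_damage:.2f}, rate {rate:.3f}, RUL {rul:.1f} time units")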

  18. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivities, uncertainties and overall imprecision studies to a set of input parameters for a one dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, besides, varying the incident solar flux. This algorithm is operational on IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  19. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Astrophysics Data System (ADS)

    Bahethi, O. P.

    An algorithm to carry out sensitivities, uncertainties and overall imprecision studies to a set of input parameters for a one dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, besides, varying the incident solar flux. This algorithm is operational on IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  20. Bio-empirical mode decomposition: visible and infrared fusion using biologically inspired empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Sissinto, Paterne; Ladeji-Osias, Jumoke

    2013-07-01

    Bio-EMD, a biologically inspired fusion of visible and infrared (IR) images based on empirical mode decomposition (EMD) and color opponent processing, is introduced. First, registered visible and IR captures of the same scene are decomposed into intrinsic mode functions (IMFs) through EMD. The fused image is then generated by an intuitive opponent processing the source IMFs. The resulting image is evaluated based on the amount of information transferred from the two input images, the clarity of details, the vividness of depictions, and range of meaningful differences in lightness and chromaticity. We show that this opponent processing-based technique outperformed other algorithms based on pixel intensity and multiscale techniques. Additionally, Bio-EMD transferred twice the information to the fused image compared to other methods, providing a higher level of sharpness, more natural-looking colors, and similar contrast levels. These results were obtained prior to optimization of color opponent processing filters. The Bio-EMD algorithm has potential applicability in multisensor fusion covering visible bands, forensics, medical imaging, remote sensing, natural resources management, etc.

  1. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Zhihui

    2015-12-01

    With the development of FPGAs, DSP Builder is widely applied to the design of system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets for signal decomposition. Thus, a CL multi-wavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware description language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.

  2. Hierarchical clustering of EMD based interest points for road sign detection

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza

    2014-04-01

    This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the following three stages: interest point detection, clustering of those points, and similarity search. At the first stage, discriminative, rotation- and scale-invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on stable local features related to brightness and color, which are extracted using a Gabor filter. Then points belonging to each partition are reclustered depending on the dispersion of the points in the initial cluster, using a position feature. This two-step hierarchical clustering yields the possible candidate road signs, or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used for matching the unknown signs with the known reference road signs stored in the database. The presented framework provides a novel way to detect a road sign in natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.

  3. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  4. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  5. Cooperative terrain model acquisition by two point-robots in planar polygonal terrains

    SciTech Connect

    Rao, N.S.V.; Protopopescu, V.

    1994-11-29

    We address the model acquisition problem for an unknown terrain by a team of two robots. The terrain may be cluttered by a finite number of polygonal obstacles with unknown shapes and positions. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scanning from their locations. The robots communicate with each other via a wireless connection. The performance is measured by the number of sensor (scan) operations, which are assumed to be the most time-consuming/expensive of all the robot operations. We employ restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles, the sensing time can be halved compared to a single-robot implementation. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into 2-connected components and trees is considered. Performance for the 2-robot team is expressed in terms of the sizes of the 2-connected components and the sizes and diameters of the trees. The proposed algorithm and analysis can be applied to methods based on the Voronoi diagram and trapezoidal decomposition.

  6. Conductimetric determination of decomposition of silicate melts

    NASA Technical Reports Server (NTRS)

    Kroeger, C.; Lieck, K.

    1986-01-01

    A description of a procedure is given to detect decomposition of silicate systems in the liquid state by conductivity measurements. Onset of decomposition can be determined from the temperature curves of resistances measured on two pairs of electrodes, one above the other. Degree of decomposition can be estimated from temperature and concentration dependency of conductivity of phase boundaries. This procedure was tested with systems PbO-B2O3 and PbO-B2O3-SiO2.

  7. Decomposition of Multi-player Games

    NASA Astrophysics Data System (ADS)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  8. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    from an improved cloud screening algorithm that utilizes an iterative rejection of cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly (56%) more and unbiased Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g. 368-nm channel) were selected and the radiative transfer model (MODTRAN) used to calculate direct normal and diffuse horizontal ratios which were used to evaluate the stability of TOD in cloud-free points. The spectral dependence of atmospheric constituents' optical properties and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. The test of this method on the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-) MFRSR channels and potentially improved the accuracy by 2-4% at most channels and over 10% at 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g. soil water, live biomass), allowed maximum photodecay rate to vary with litter's initial lignin fraction in the model, and validated the optimized model with LIDET observation of remaining carbon and nitrogen at three semi-arid sites. I

  9. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g. sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of nsvd small subcluster vectors using singular value decomposition. For low entanglement entropy See (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-nsvd^(1/See)). Convergence is tested for the Heisenberg model on Kagome clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007

  10. Three-dimensional multistage network applying for facial images decomposition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Chepornyuk, Serge V.; Grudin, Maxim A.; Harvey, David M.; Kutaev, Yuri F.; Gertsiy, Alexander A.; Zahoruiko, Lubov V.

    1997-09-01

    The paper presents a novel three-dimensional network and its application to pattern analysis. This is a multistage architecture which investigates partial correlations between structural image components. Initially the image is partitioned to be processed in parallel channels. In each channel, the structural components are transformed and subsequently separated depending on their informational activity, to be mixed with the components from other channels for further processing. An output result is represented as a pattern vector, whose components are computed one at a time to allow the quickest possible response. The paper presents an algorithm applied to facial images decomposition. The input gray-scale image is transformed so that each pixel contains information about the spatial structure of its neighborhood. A three-level representation of gray-scale image is used in order for each pixel to contain the maximum amount of structural information. The most correlated information is extracted first, making the algorithm tolerant to minor structural changes.

  11. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CERTIFICATION VOLUNTARY INSPECTION OF RABBITS AND EDIBLE PRODUCTS THEREOF Disposition of Diseased Rabbit Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  12. Theoretical investigation of germane and germylene decomposition kinetics.

    PubMed

    Polino, Daniela; Barbato, Alessandro; Cavallotti, Carlo

    2010-09-21

    The dissociation kinetics of germane and its decomposition products were studied determining microcanonical kinetic constants with RRKM theory and integrating the master equation using a stochastic approach. Relevant reaction parameters were calculated through first principles calculations. Structures of reactants and transition states were determined at the B3LYP/aug-cc-pvtz level while energies were computed at the CCSD(T) level and extended to the complete basis set limit. Though similar for many aspects to the kinetics of decomposition of SiH(4), GeH(4) has some peculiar features that indicate a different chemical reactivity. It was found that the main decomposition channel leads to the formation of germylene, GeH(2), which rapidly decomposes to atomic Ge and H(2). The dissociation of GeH(2) to Ge and H(2) is a formally spin forbidden reaction characterized by an activation energy of 160.3 kJ mol(-1) calculated at the minimum energy crossing point between the singlet and triplet states. The intersystem crossing probability was explicitly included in the microcanonical simulations through Landau-Zener theory. It was found that its effect on the reaction rate is almost negligible, both because of the large spin-orbit coupling between the singlet and triplet states and for the fall off conditions prevailing in the examined pressure and temperature ranges. Kinetic constants of the main decomposition channels were determined as a function of pressure and temperature between 0.0013 and 10 bar and 1100 and 1700 K. The high and low pressure kinetic constants for GeH(4) decomposition are 6.4 x 10(13) (T/K)(0.272) exp(-26 700 K/T) and 2.7 x 10(48) (T/K)(-9.05) exp(-31 600 K/T), while those for GeH(2) are 6.02 x 10(12) (T/K)(0.203) exp(-19 660 K/T) and 1.6 x 10(26) (T/K)(-3.06) exp(-21 121 K/T), respectively. A quantitative agreement with experimental data for GeH(4) decomposition could be obtained adopting a downward energy transfer parameter of 340 x (T/298 K)(0.85) cm
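
    The quoted rate expressions have the modified Arrhenius form k(T) = A (T/K)^n exp(-Ea/T), with the activation term already expressed in kelvin; the small sketch below evaluates the quoted high-pressure-limit constant for GeH4 at one temperature. Treating the result as a first-order constant in s^-1 is an assumption made for the example.

        import numpy as np

        def mod_arrhenius(A, n, Ea_over_R_kelvin, T):
            """Modified Arrhenius rate constant k(T) = A * T**n * exp(-Ea/(R*T)),
            with the activation term supplied directly in kelvin."""
            return A * T ** n * np.exp(-Ea_over_R_kelvin / T)

        T = 1400.0  # K, inside the 1100-1700 K range studied
        k_inf_geh4 = mod_arrhenius(6.4e13, 0.272, 26700.0, T)
        print(f"k_inf(GeH4, {T:.0f} K) ~ {k_inf_geh4:.2e} s^-1  (units assumed)")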

  13. GLAS Spacecraft Pointing Study

    NASA Technical Reports Server (NTRS)

    Born, George H.; Gold, Kenn; Ondrey, Michael; Kubitschek, Dan; Axelrad, Penina; Komjathy, Attila

    1998-01-01

    Science requirements for the GLAS mission demand that the laser altimeter be pointed to within 50 m of the location of the previous repeat ground track. The satellite will be flown in a repeat orbit of 182 days. Operationally, the required pointing information will be determined on the ground using the nominal ground track, to which pointing is desired, and the current propagated orbit of the satellite as inputs to the roll computation algorithm developed by CCAR. The roll profile will be used to generate a set of fit coefficients which can be uploaded on a daily basis and used by the on-board attitude control system. In addition, an algorithm has been developed for computation of the associated command quaternions which will be necessary when pointing at targets of opportunity. It may be desirable in the future to perform the roll calculation in an autonomous real-time mode on-board the spacecraft. GPS can provide near real-time tracking of the satellite, and the nominal ground track can be stored in the on-board computer. It will be necessary to choose the spacing of this nominal ground track to meet storage requirements in the on-board environment. Several methods for generating the roll profile from a sparse reference ground track are presented.

  14. Tipping Point

    MedlinePlus Videos and Cool Tools

    Tipping Point, by CPSC Blogger, September 22. A TV falls with about the same force as a child falling from the third story of a building.

  15. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  16. Parquet decomposition calculations of the electronic self-energy

    NASA Astrophysics Data System (ADS)

    Gunnarsson, O.; Schäfer, T.; LeBlanc, J. P. F.; Merino, J.; Sangiovanni, G.; Rohringer, G.; Toschi, A.

    2016-06-01

    The parquet decomposition of the self-energy into classes of diagrams, those associated with specific scattering processes, can be exploited for different scopes. In this work, the parquet decomposition is used to unravel the underlying physics of nonperturbative numerical calculations. We show the specific example of dynamical mean field theory and its cluster extensions [dynamical cluster approximation (DCA)] applied to the Hubbard model at half-filling and with hole doping: These techniques allow for a simultaneous determination of two-particle vertex functions and self-energies and, hence, for an essentially "exact" parquet decomposition at the single-site or at the cluster level. Our calculations show that the self-energies in the underdoped regime are dominated by spin-scattering processes, consistent with the conclusions obtained by means of the fluctuation diagnostics approach [O. Gunnarsson et al., Phys. Rev. Lett. 114, 236402 (2015), 10.1103/PhysRevLett.114.236402]. However, differently from the latter approach, the parquet procedure displays important changes with increasing interaction: Even for relatively moderate couplings, well before the Mott transition, singularities appear in different terms, with the notable exception of the predominant spin channel. We explain precisely how these singularities, which partly limit the utility of the parquet decomposition and, more generally, of parquet-based algorithms, are never found in the fluctuation diagnostics procedure. Finally, by a more refined analysis, we link the occurrence of the parquet singularities in our calculations to a progressive suppression of charge fluctuations and the formation of a resonance valence bond state, which are typical hallmarks of a pseudogap state in DCA.

  17. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559

  18. Iterative filtering decomposition based on local spectral evolution kernel.

    PubMed

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-03-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559

  19. Empirical modal decomposition applied to cardiac signals analysis

    NASA Astrophysics Data System (ADS)

    Beya, O.; Jalil, B.; Fauvet, E.; Laligant, O.

    2010-01-01

    In this article, we present the method of empirical modal decomposition (EMD) applied to the analysis and denoising of electrocardiogram and phonocardiogram signals. The objective of this work is to detect cardiac anomalies of a patient automatically. As these anomalies are localized in time, the localization of all events should be preserved precisely. Methods based on the Fourier transform (TFD) lose the localization property [13]; the wavelet transform (WT) makes it possible to overcome the localization problem, but the interpretation remains too difficult to characterize the signal precisely. In this work we propose to apply the EMD (empirical modal decomposition), which has very useful properties for pseudo-periodic signals. The second section describes the EMD algorithm. In the third part we present the results obtained on phonocardiogram (PCG) and electrocardiogram (ECG) test signals. The analysis and interpretation of these signals are given in the same section. Finally, we introduce an adaptation of the EMD algorithm which seems to be very efficient for denoising.

  20. Simplified approaches to some nonoverlapping domain decomposition methods

    SciTech Connect

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the {open_quotes}parallel subspace correction{close_quotes} or {open_quotes}additive Schwarz{close_quotes} method, and other simple technical tools include {open_quotes}local-global{close_quotes} and {open_quotes}global-local{close_quotes} techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the {open_quotes}substructuring method{close_quotes}, and the other, based on local Neumann problems, is related to the {open_quotes}Neumann-Neumann method{close_quotes} and the {open_quotes}balancing method{close_quotes}. All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.

  1. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  2. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^{-1} = C - B^{*}A^{-1}B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^{-1}. For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Λ and its inverse Λ^{-1} are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
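
    The factorization above rests on the Schur complement. The small check below illustrates the generic Schur-complement identity for a symmetric positive definite block matrix; the block-tridiagonal A, B, C specific to the multibody derivation are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nA, nC = 6, 4
    # Build a random symmetric positive definite block matrix K = [[A, B], [B^T, C]].
    K = rng.standard_normal((nA + nC, nA + nC))
    K = K @ K.T + (nA + nC) * np.eye(nA + nC)
    A, B, C = K[:nA, :nA], K[:nA, nA:], K[nA:, nA:]

    S = C - B.T @ np.linalg.solve(A, B)        # Schur complement of A in K
    K_inv_22 = np.linalg.inv(K)[nA:, nA:]      # lower-right block of K^{-1}

    # Generic identity: the Schur complement is the inverse of that block of K^{-1}.
    print(np.allclose(S, np.linalg.inv(K_inv_22)))   # True
    ```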

  3. Ground point filtering of UAV-based photogrammetric point clouds

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Seijmonsbergen, Arie; Masselink, Rens; Keesstra, Saskia

    2016-04-01

    Unmanned Aerial Vehicles (UAVs) have proved invaluable for generating high-resolution and multi-temporal imagery. Based on photographic surveys, 3D surface reconstructions can be derived photogrammetrically, producing point clouds, orthophotos and surface models. For geomorphological or ecological applications it may be necessary to separate ground points from vegetation points. Existing filtering methods are designed for point clouds derived using other methods, e.g. laser scanning. The purpose of this paper is to test three filtering algorithms for the extraction of ground points from point clouds derived from low-altitude aerial photography. Three subareas were selected from a single flight to represent different scenarios: 1) a low-relief, sparsely vegetated area, 2) a low-relief, moderately vegetated area, and 3) a medium-relief, moderately vegetated area. The three filtering methods are used to classify ground points in different ways, based on 1) RGB color values from training samples, 2) TIN densification as implemented in LAStools, and 3) an iterative surface lowering algorithm. Ground points are then interpolated into a digital terrain model using inverse distance weighting. The results suggest that different landscapes require different filtering methods for optimal ground point extraction. While iterative surface lowering and TIN densification are fully automated, color-based classification requires fine-tuning in order to optimize the filtering results. Finally, we conclude that filtering photogrammetric point clouds could provide a cheap alternative to laser scan surveys for creating digital terrain models in sparsely vegetated areas.
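
    As a rough illustration of the ground-filtering idea (a simplified stand-in, not the paper's iterative surface lowering, the LAStools TIN densification, or the RGB classifier), the sketch below grids a point cloud, treats each cell's lowest return as a provisional ground surface, and keeps points within a height threshold of it.

    ```python
    import numpy as np

    def simple_ground_filter(points, cell=1.0, dz=0.3):
        """points: (N, 3) array of x, y, z. Returns a boolean mask of ground points."""
        xy = np.floor(points[:, :2] / cell).astype(int)
        # Map each occupied grid cell to the minimum z found in it.
        cell_min = {}
        for (cx, cy), z in zip(map(tuple, xy), points[:, 2]):
            if (cx, cy) not in cell_min or z < cell_min[(cx, cy)]:
                cell_min[(cx, cy)] = z
        ground_z = np.array([cell_min[tuple(c)] for c in xy])
        return points[:, 2] - ground_z <= dz      # keep points close to the local minimum

    # Synthetic example: flat terrain plus "vegetation" points 2 m above the ground.
    rng = np.random.default_rng(1)
    ground = np.column_stack([rng.uniform(0, 50, 1000), rng.uniform(0, 50, 1000),
                              rng.normal(0.0, 0.05, 1000)])
    veg = ground[:200].copy()
    veg[:, 2] += 2.0
    cloud = np.vstack([ground, veg])
    mask = simple_ground_filter(cloud)
    print(mask[:1000].mean(), mask[1000:].mean())  # ~1 for ground points, ~0 for vegetation
    ```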

  4. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating the process with an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regressions and ARIMA) and recent forecasting practice (such as data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm successfully tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users who lack in-depth knowledge of the forecasting process.
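
    A minimal sketch of the kind of validation loop described above: implement one univariate technique (simple exponential smoothing), partition the data, and compute error measures on the hold-out that can be tallied against SPSS/Excel output. The series and the smoothing constant are illustrative assumptions.

    ```python
    import numpy as np

    def ses(y, alpha=0.3):
        """Simple exponential smoothing; returns the final smoothed level, which
        serves as the flat forecast for future points."""
        level = y[0]
        for obs in y[1:]:
            level = alpha * obs + (1 - alpha) * level
        return level

    y = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119., 104., 118.])
    train, test = y[:-3], y[-3:]                # data partition: hold out the last 3 points
    pred = np.full(len(test), ses(train))       # flat multi-step SES forecast

    mape = 100 * np.mean(np.abs((test - pred) / test))
    rmse = np.sqrt(np.mean((test - pred) ** 2))
    print(f"MAPE = {mape:.2f}%, RMSE = {rmse:.2f}")   # error measures to cross-check elsewhere
    ```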

  5. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Building on the algorithms developed for Radar Tools, the decomposition and classification routines were coded in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher for the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but interpretation of the resulting image is difficult. The SDH decomposition technique therefore appears to produce better results and easier interpretation than the Pauli decomposition, although further quantification and analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
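
    For reference, the Pauli decomposition itself reduces to a fixed change of basis on the per-pixel scattering matrix. The sketch below is a generic implementation for quad-pol data, not the authors' IDL code, and the toy input values are assumptions.

    ```python
    import numpy as np

    def pauli_decomposition(S_hh, S_hv, S_vv):
        """Inputs are complex 2-D arrays (one value per pixel); returns the three
        Pauli intensity channels."""
        k1 = (S_hh + S_vv) / np.sqrt(2)      # odd-bounce / surface scattering
        k2 = (S_hh - S_vv) / np.sqrt(2)      # even-bounce / double-bounce scattering
        k3 = np.sqrt(2) * S_hv               # volume scattering (cross-pol)
        return np.abs(k1) ** 2, np.abs(k2) ** 2, np.abs(k3) ** 2

    # Toy 2x2-pixel example with random complex scattering values.
    rng = np.random.default_rng(0)
    shape = (2, 2)
    S_hh, S_hv, S_vv = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
                        for _ in range(3))
    blue, red, green = pauli_decomposition(S_hh, S_hv, S_vv)   # usual Pauli RGB ordering
    print(red, green, blue)
    ```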

  6. Wavefront reconstruction by modal decomposition.

    PubMed

    Schulze, Christian; Naidoo, Darryl; Flamm, Daniel; Schmidt, Oliver A; Forbes, Andrew; Duparré, Michael

    2012-08-27

    We propose a new method to determine the wavefront of a laser beam based on modal decomposition by computer-generated holograms. The hologram is encoded with a transmission function suitable for measuring the amplitudes and phases of the modes in real-time. This yields the complete information about the optical field, from which the Poynting vector and the wavefront are deduced. Two different wavefront reconstruction options are outlined: reconstruction from the phase for scalar beams, and reconstruction from the Poynting vector for inhomogeneously polarized beams. Results are compared to Shack-Hartmann measurements that serve as a reference and are shown to reproduce the wavefront and phase with very high fidelity. PMID:23037024
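
    Numerically, the modal reconstruction amounts to projecting the field onto an orthonormal mode set and resumming. The sketch below demonstrates this with generic orthonormal modes standing in for the beam's eigenmodes; the optical measurement of the modal amplitudes and phases with computer-generated holograms is not modelled.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    npix, nmodes = 64 * 64, 6

    # Orthonormal "modes" on the grid (stand-ins for the beam's eigenmodes).
    modes, _ = np.linalg.qr(rng.standard_normal((npix, nmodes)))

    # Synthesize a test field as a known modal superposition.
    c_true = rng.standard_normal(nmodes) + 1j * rng.standard_normal(nmodes)
    field = modes @ c_true

    # "Measurement": modal coefficients are inner products with the modes
    # (optically, each |c_n| and arg(c_n) would come from a hologram correlation signal).
    c_meas = modes.conj().T @ field

    field_rec = modes @ c_meas                        # reconstructed complex field
    wavefront = np.angle(field_rec).reshape(64, 64)   # phase map -> wavefront (scalar beam)
    print(np.allclose(c_meas, c_true), wavefront.shape)
    ```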

  7. Metallo-organic decomposition films

    NASA Technical Reports Server (NTRS)

    Gallagher, B. D.

    1985-01-01

    A summary of metallo-organic deposition (MOD) films for solar cells was presented. The MOD materials are metal ions compounded with organic radicals. The technology is evolving quickly for solar cell metallization. Silver compounds, especially silver neodecanoate, were developed which can be applied by thick-film screening, ink-jet printing, spin-on, spray, or dip methods. Some of the advantages of MOD are: high uniform metal content, lower firing temperatures, decomposition without leaving a carbon deposit or toxic materials, and a film that is stable under ambient conditions. Molecular design criteria were explained along with compounds formulated to date, and the accompanying reactions for these compounds. Phase stability and the other experimental and analytic results of MOD films were presented.

  8. Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    SciTech Connect

    Zhang, Zheng; Yang, Xiu; Oseledets, Ivan; Karniadakis, George E.; Daniel, Luca

    2015-01-31

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to simulate hierarchically some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient analysis-of-variance-based stochastic circuit/microelectromechanical-systems simulator to extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at the cost of only 10 minutes in MATLAB on a regular personal computer.
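
    The tensor-train step can be illustrated with the standard TT-SVD construction: repeated reshapes and truncated SVDs turn a d-way array into a train of three-way cores. The sketch below is the generic textbook procedure, not the paper's circuit-specific implementation.

    ```python
    import numpy as np

    def tt_svd(tensor, eps=1e-10):
        """Decompose `tensor` (shape n1 x ... x nd) into TT cores of shape (r_{k-1}, n_k, r_k)."""
        dims = tensor.shape
        cores, r = [], 1
        mat = tensor.reshape(1, -1)                    # current unfolding, shape (r, rest)
        for n in dims[:-1]:
            mat = mat.reshape(r * n, -1)
            U, s, Vt = np.linalg.svd(mat, full_matrices=False)
            rank = max(1, int(np.sum(s > eps * s[0])))  # simple relative truncation
            cores.append(U[:, :rank].reshape(r, n, rank))
            mat = s[:rank, None] * Vt[:rank]            # carry the remainder forward
            r = rank
        cores.append(mat.reshape(r, dims[-1], 1))
        return cores

    # Verify on a small rank-1 tensor: decompose, then reconstruct and compare.
    rng = np.random.default_rng(0)
    a, b, c = (rng.standard_normal(n) for n in (4, 5, 6))
    T = np.einsum('i,j,k->ijk', a, b, c)
    G = tt_svd(T)
    T_rec = np.einsum('aib,bjc,ckd->ijk', G[0], G[1], G[2])
    print([g.shape for g in G], np.allclose(T, T_rec))
    ```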

  9. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits, and overcomes the weaknesses, of both decomposition and search approaches.
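
    For context, the sketch below shows the plain chronological backtracking search that structure-aware algorithms such as the proposed one build upon; omega-CDBT itself (and its use of the constraint graph) is not reproduced here.

    ```python
    def consistent(var, value, assignment, constraints):
        """Check all binary constraints touching `var` whose other variable is assigned."""
        for x, y, pred in constraints:
            if x == var and y in assignment and not pred(value, assignment[y]):
                return False
            if y == var and x in assignment and not pred(assignment[x], value):
                return False
        return True

    def backtrack(variables, domains, constraints, assignment=None):
        assignment = {} if assignment is None else assignment
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment, constraints):
                assignment[var] = value
                result = backtrack(variables, domains, constraints, assignment)
                if result is not None:
                    return result
                del assignment[var]          # undo and try the next value
        return None

    # Example: 3-colour a small graph (map colouring expressed as a binary CSP).
    variables = ['A', 'B', 'C', 'D']
    domains = {v: ['red', 'green', 'blue'] for v in variables}
    edges = [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'A'), ('A', 'C')]
    constraints = [(x, y, lambda a, b: a != b) for x, y in edges]
    print(backtrack(variables, domains, constraints))
    ```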

  10. Thermal decomposition of HN(3).

    PubMed

    Knyazev, Vadim D; Korobeinichev, Oleg P

    2010-01-21

    The two-channel thermal decomposition of hydrogen azide, HN3, was studied computationally. The reaction produces triplet or singlet NH and N2. A model of the reaction was created on the basis of the theoretical study of the reaction potential-energy surface and microscopic reaction rates by Besora and Harvey (Besora, M.; Harvey, J. N. J. Chem. Phys. 2008, 129, 044303) and the experimental data on the energy-dependent rate constants reported by Foy et al. (Foy, B. R.; Casassa, M. P.; Stephenson, J. C.; King, D. S. J. Chem. Phys. 1990, 92, 2782). The properties of the model were adjusted to fit the calculated k(E) dependence to the experimental one. The experiments on thermal decomposition of HN3 described in the literature were analyzed via kinetic modeling; the results of the analysis demonstrate that all but one of the existing studies were affected by contributions from secondary kinetics. The model of the reaction was then used in master-equation calculations of the pressure effects, and the value of the critical energy transfer parameter, ΔE_down, was adjusted based on agreement with the experimental k(T,P) data. The model was then used to determine pressure- and temperature-dependent rate constants for both channels of reaction 1, which do not conform to the traditional formalism of low-pressure-limit and falloff description. Uncertainties of the model and their influence on the calculated thermal rate constant values were analyzed. Finally, parametrized expressions for the rate coefficients were provided for a wide range of temperatures and pressures. PMID:19916540

  11. Theoretical study of the decomposition pathways and products of C5- perfluorinated ketone (C5 PFK)

    NASA Astrophysics Data System (ADS)

    Fu, Yuwei; Wang, Xiaohua; Li, Xi; Yang, Aijun; Han, Guohui; Lu, Yanhui; Wu, Yi; Rong, Mingzhe

    2016-08-01

    Due to the high global warming potential (GWP) of SF6, which is predominantly used as the insulating and interrupting medium in high-voltage equipment, and increasing environmental concerns, the search for alternative gases has become a hot topic in recent decades. Overcoming the drawbacks of the existing candidate gases, the C5 perfluorinated ketone (C5 PFK) was reported as a promising gas with remarkable insulation capacity and a low GWP of approximately 1. Experimental measurements of the dielectric strength of this novel gas and its mixtures have been carried out, but the chemical decomposition pathways and products of C5 PFK during breakdown are still unknown, and these are essential factors in evaluating the electric strength of this gas in high-voltage equipment. Therefore, this paper is devoted to exploring all the possible decomposition pathways and species of C5 PFK by density functional theory (DFT). The structural optimizations, vibrational frequency calculations and energy calculations of the species involved in a considered pathway were carried out with the DFT-(U)B3LYP/6-311G(d,p) method. The detailed potential energy surface was then investigated thoroughly by the same method. Lastly, six decomposition pathways of C5 PFK, involving fission reactions and reactions with transition states, were obtained. Important intermediate products were also determined. Among all the pathways studied, the favorable decomposition reactions of C5 PFK were found to involve C-C bond ruptures producing Ia and Ib in pathway I, followed by subsequent C-C bond ruptures and internal F atom transfers in the decomposition of Ia and Ib presented in pathways II + III and IV + V, respectively. Possible routes pointed out in pathway III lead to the decomposition of IIa, which is the main intermediate product found in pathway II of Ia decomposition. We also investigated the decomposition of Ib, which can undergo unimolecular reactions to give the formation

  12. Metallo-Organic Decomposition (MOD) film development

    NASA Technical Reports Server (NTRS)

    Parker, J.

    1986-01-01

    The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contacting structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at the Jet Propulsion Laboratory (JPL) to understand the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films were described in detail.

  13. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  14. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  15. 9 CFR 381.93 - Decomposition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall...

  16. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  17. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  18. Chinese Orthographic Decomposition and Logographic Structure

    ERIC Educational Resources Information Center

    Cheng, Chao-Ming; Lin, Shan-Yuan

    2013-01-01

    "Chinese orthographic decomposition" refers to a sense of uncertainty about the writing of a well-learned Chinese character following a prolonged inspection of the character. This study investigated the decomposition phenomenon in a test situation in which Chinese characters were repeatedly presented in a word context and assessed…

  19. English and Turkish Pupils' Understanding of Decomposition

    ERIC Educational Resources Information Center

    Cetin, Gulcan

    2007-01-01

    This study aimed to describe seventh grade English and Turkish students' levels of understanding of decomposition. Data were analyzed descriptively from the students' written responses to four diagnostic questions about decomposition. Results revealed that the English students had considerably higher sound understanding and lower no understanding…

  20. 9 CFR 354.131 - Decomposition.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by...

  1. Sampling Stoichiometry: The Decomposition of Hydrogen Peroxide.

    ERIC Educational Resources Information Center

    Clift, Philip A.

    1992-01-01

    Describes a demonstration of the decomposition of hydrogen peroxide to provide an interesting, quantitative illustration of the stoichiometric relationship between the decomposition of hydrogen peroxide and the formation of oxygen gas. This 10-minute demonstration uses ordinary hydrogen peroxide and yeast that can be purchased in a supermarket.…

  2. Regular Decompositions for H(div) Spaces

    SciTech Connect

    Kolev, Tzanio; Vassilevski, Panayot

    2012-01-01

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  3. Morphological Decomposition in Reading Hebrew Homographs.

    PubMed

    Miller, Paul; Liran-Hazan, Batel; Vaknin, Vered

    2016-06-01

    The present work investigates whether and how morphological decomposition processes bias the reading of Hebrew heterophonic homographs, i.e., unique orthographic patterns that are associated with two separate phonological, semantic entities depicted by means of two morphological structures (linear and nonlinear). In order to reveal the nature of morphological processes involved in the reading of Hebrew homographs, we tested 146 university students with three computerized experiments, each experiment focusing on a different level of processing. Participants were divided into three experimental groups given that the three experiments used the same stimulus lists. Evidence obtained from the analysis of the participants' processing time and processing accuracy points to a propensity to process heterophonic homographs by default as morpho-syntactically simple rather than complex words. Findings are discussed with reference to assumptions made by Dual-Route models regarding the importance of morphological knowledge in fast and accurate access of written words' representations which mediate the retrieval of their meanings with direct reference to the context in which they occur. PMID:25935578

  4. Thermal decomposition of bioactive sodium titanate surfaces

    NASA Astrophysics Data System (ADS)

    Ravelingien, Matthieu; Mullens, Steven; Luyten, Jan; Meynen, Vera; Vinck, Evi; Vervaet, Chris; Remon, Jean Paul

    2009-09-01

    Alkali-treated orthopaedic titanium surfaces have earlier been shown to induce apatite deposition. A subsequent heat treatment under air improved the adhesion of the sodium titanate layer but decreased the rate of apatite deposition. Furthermore, insufficient attention was paid to the sensitivity of titanium substrates to oxidation and nitriding during heat treatment under air. Therefore, in the present study, alkali-treated titanium samples were heat-treated under air, argon flow or vacuum. The microstructure and composition of their surfaces were characterized to clarify which mechanism is responsible for inhibiting in vitro calcium phosphate deposition after heat treatment. All heat treatments, under all atmospheres investigated, turned out to be detrimental to apatite deposition. They led to the thermal decomposition of the dense sodium titanate base near the interface with the titanium substrate. Depending on the atmosphere, several forms of TiyOz were formed and Na2O was sublimated. Consequently, fewer exchangeable sodium ions remained available. This pointed to the importance of the ion exchange capacity of the sodium titanate layer for in vitro bioactivity.

  5. A unified statistical framework for material decomposition using multienergy photon counting x-ray detectors

    SciTech Connect

    Choi, Jiyoung; Kang, Dong-Goo; Kang, Sunghoon; Sung, Younghun; Ye, Jong Chul

    2013-09-15

    Purpose: Material decomposition using multienergy photon counting x-ray detectors (PCXD) has been an active research area over the past few years. Even with some success, the problem of optimal energy selection and three-material decomposition including malignant tissue is still an ongoing research topic, and more systematic studies are required. This paper aims to address this in a unified statistical framework in a mammographic environment. Methods: A unified statistical framework for energy level optimization and decomposition of three materials is proposed. In particular, an energy level optimization algorithm is derived using the theory of the minimum variance unbiased estimator, and an iterative algorithm is proposed for material composition as well as system parameter estimation under the unified statistical estimation framework. To verify the performance of the proposed algorithm, the authors performed simulation studies as well as real experiments using a physical breast phantom and an ex vivo breast specimen. Quantitative comparisons using various performance measures were conducted, and qualitative performance evaluations for the ex vivo breast specimen were also performed by comparing against the ground-truth malignant tissue areas identified by radiologists. Results: Both simulation and real experiments confirmed that the energy bins optimized by the proposed method allow better material decomposition quality. Moreover, for specimen thickness estimation errors up to 2 mm, the proposed method provides good reconstruction results in both simulation and real ex vivo breast phantom experiments compared to existing methods. Conclusions: The proposed statistical framework for PCXD has been successfully applied to the energy optimization and decomposition of three materials in a mammographic environment. Experimental results using the physical breast phantom and ex vivo specimen support the practicality of the proposed algorithm.
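
    A deliberately simplified, linearized sketch of the decomposition step is given below: per energy bin, the log-attenuation is modelled as a sum of basis-material thicknesses weighted by effective attenuation coefficients, and the thicknesses are recovered by least squares. The coefficients and counts are illustrative assumptions; the paper's minimum-variance-unbiased energy selection and iterative statistical estimator are not reproduced.

    ```python
    import numpy as np

    # Assumed effective attenuation coefficients (1/cm) for 3 basis materials in 4
    # energy bins -- illustrative numbers only, not calibrated values.
    mu = np.array([[0.90, 0.25, 2.00],
                   [0.50, 0.22, 1.10],
                   [0.30, 0.20, 0.60],
                   [0.20, 0.19, 1.50]])

    t_true = np.array([2.0, 1.0, 0.3])                 # cm of each material along the ray
    counts0 = 1e6
    counts = counts0 * np.exp(-mu @ t_true)            # Beer-Lambert forward model
    counts = np.random.default_rng(0).poisson(counts)  # photon-counting noise

    log_att = -np.log(counts / counts0)                # measured line integrals per bin
    t_est, *_ = np.linalg.lstsq(mu, log_att, rcond=None)
    print(t_true, t_est.round(3))                      # estimates close to the true thicknesses
    ```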

  6. Ultrasound elastography using empirical mode decomposition analysis.

    PubMed

    Sadeghi, Sajjad; Behnam, Hamid; Tavakkoli, Jahan

    2014-01-01

    Ultrasound elastography is a non-invasive method which images the elasticity of soft tissues. To make an image, ultrasound radio frequency (RF) signals are acquired before and after a small compression, and the time delays between them are estimated. The first derivative of the displacement estimates is called the elastogram. In this study, we construct elastograms using a processing method named empirical mode decomposition (EMD). EMD is an analytic technique which decomposes a complicated signal into a collection of simple signals called intrinsic mode functions (IMFs). The idea of the paper is to use these IMFs instead of the raw RF signals. To implement the algorithms, two different datasets were selected. The first was data from a sandwich structure of normal and cooked tissue. The second dataset consisted of around 180 frames acquired from a malignant breast tumor. For displacement estimation, two different methods, cross-correlation and the wavelet transform, were used, and to evaluate quality, two conventional parameters, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), were calculated for each image. Results show that for both methods the quality improves after using EMD. In the first dataset, with the cross-correlation technique, CNR and SNR improve by about 16 dB and 9 dB respectively; in the same dataset, with the wavelet technique, the parameters show 14 dB and 10 dB improvement respectively. In the second dataset (breast tumor data), CNR and SNR improve by 18 dB and 7 dB with the cross-correlation method and by 17 dB and 6 dB with the wavelet technique, respectively. PMID:24696805
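
    The displacement-estimation step can be sketched as windowed cross-correlation between pre- and post-compression RF lines, as below; the EMD preprocessing of the RF data that the paper studies is not included, and the synthetic signals are assumptions.

    ```python
    import numpy as np

    def estimate_delays(rf_pre, rf_post, win=64, hop=32):
        """Return the integer-sample delay that best aligns each window of rf_post with rf_pre."""
        delays = []
        for start in range(0, len(rf_pre) - win, hop):
            a = rf_pre[start:start + win]
            b = rf_post[start:start + win]
            xcorr = np.correlate(b - b.mean(), a - a.mean(), mode='full')
            delays.append(np.argmax(xcorr) - (win - 1))   # lag of the correlation peak
        return np.array(delays)

    # Synthetic RF line: the "post" signal is the "pre" signal shifted by 3 samples plus noise.
    rng = np.random.default_rng(0)
    rf_pre = rng.standard_normal(2048)
    rf_post = np.roll(rf_pre, 3) + 0.05 * rng.standard_normal(2048)
    delays = estimate_delays(rf_pre, rf_post)
    strain = np.diff(delays)       # the elastogram is built from the gradient of displacement
    print(np.median(delays), strain[:5])
    ```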

  7. Thermal decomposition of magnesium and calcium sulfates

    SciTech Connect

    Roche, S L

    1982-04-01

    The effect of catalysts on the thermal decomposition of MgSO4 and CaSO4 in vacuum was studied as a function of time in Knudsen cells and, for MgSO4, in open crucibles in vacuum in a Thermal Gravimetric Apparatus. Platinum and Fe2O3 were used as catalysts. The CaSO4 decomposition rate was approximately doubled when Fe2O3 was present in a Knudsen cell. Platinum did not catalyze the CaSO4 decomposition reaction. The initial decomposition rate for MgSO4 was approximately 5 times greater when additives were present in Knudsen cells, but only about 1.5 times greater when decomposition was done in an open crucible.

  8. Canonical Huynen decomposition of radar targets

    NASA Astrophysics Data System (ADS)

    Li, Dong; Zhang, Yunhua

    2015-10-01

    Huynen decomposition prefers the world of basic symmetry and regularity (SR) in which we live. However, this preference restricts its applicability to ideal SR scatterers only. For complex non-symmetric (NS) and irregular (IR) scatterers such as forests and buildings, Huynen decomposition fails to analyze their scattering. The canonical Huynen dichotomy is devised to extend Huynen decomposition to preferences for IR and NS. From the physical realizability conditions of the polarimetric scattering description, two other dichotomies of polarimetric radar targets are developed, which prefer scattering IR and NS, respectively, and provide two competent supplements to Huynen decomposition. The canonical Huynen dichotomy is the combination of these two dichotomies and Huynen decomposition. By virtue of an adaptive selection, the canonical Huynen dichotomy is used in target extraction, and experiments on AIRSAR San Francisco data demonstrate its high efficiency and excellent discrimination of radar targets.

  9. Multilinear operators for higher-order decompositions.

    SciTech Connect

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
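
    The two operators are easy to state in code: the Kruskal operator is the sum of outer products of corresponding columns of N matrices (a CP/PARAFAC reconstruction), and the Tucker operator multiplies a core tensor by a matrix along every mode. The sketch below illustrates both for third-order tensors; it follows the standard definitions rather than any implementation from the report.

    ```python
    import numpy as np

    def kruskal(A, B, C):
        """[[A, B, C]] = sum_r a_r (outer) b_r (outer) c_r."""
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    def tucker(G, A, B, C):
        """G x_1 A x_2 B x_3 C: multiply the core G by a matrix along every mode."""
        return np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 2))
    B = rng.standard_normal((5, 2))
    C = rng.standard_normal((6, 2))
    X_cp = kruskal(A, B, C)                        # rank-2 CP tensor of shape (4, 5, 6)

    # A CP model is a Tucker model with a superdiagonal core.
    G = np.zeros((2, 2, 2))
    G[0, 0, 0] = G[1, 1, 1] = 1.0
    print(np.allclose(X_cp, tucker(G, A, B, C)))   # True
    ```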

  10. Mode Decomposition Methods for Soil Moisture Prediction

    NASA Astrophysics Data System (ADS)

    Jana, R. B.; Efendiev, Y. R.; Mohanty, B.

    2014-12-01

    Lack of reliable, well-distributed, long-term datasets for model validation is a bottleneck for most exercises in soil moisture analysis and prediction. Understanding what factors drive soil hydrological processes at different scales, and their variability, is critical to further our ability to model the various components of the hydrologic cycle more accurately. For this, a comprehensive dataset with measurements across scales is necessary. Intensive fine-resolution sampling of soil moisture over extended periods of time is financially and logistically prohibitive. Installation of a few long-term monitoring stations is also expensive, and these need to be situated at critical locations. The concept of Time Stable Locations has been in use for some time now to find locations that reflect the mean values for the soil moisture across the watershed under all wetness conditions. However, the soil moisture variability across the watershed is lost when measuring at only time stable locations. We present here a study using techniques such as Dynamic Mode Decomposition (DMD) and the Discrete Empirical Interpolation Method (DEIM) that extends the concept of time stable locations to arrive at locations that provide not simply the average soil moisture values for the watershed, but also those that can help re-capture the dynamics across all locations in the watershed. As with time stability, the initial analysis is dependent on an intensive sampling history. The DMD/DEIM method is an application of model reduction techniques for non-linearly related measurements. Using this technique, we are able to determine the number of sampling points that would be required for a given accuracy of prediction across the watershed, and the location of those points. Locations with higher energetics in the basis domain are chosen first. We present case studies across watersheds in the US and India. The technique can be applied to other hydro-climates easily.
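
    The DMD step itself can be sketched compactly: form two time-shifted snapshot matrices, project onto the leading singular vectors, and take the eigendecomposition of the reduced operator. The example below uses synthetic "soil moisture" snapshots and omits the DEIM point-selection stage.

    ```python
    import numpy as np

    def dmd(X, r=2):
        """X: (n_locations, n_times) snapshot matrix. Returns DMD eigenvalues and modes."""
        X1, X2 = X[:, :-1], X[:, 1:]
        U, s, Vt = np.linalg.svd(X1, full_matrices=False)
        U, s, Vt = U[:, :r], s[:r], Vt[:r]
        A_tilde = U.conj().T @ X2 @ Vt.conj().T / s        # reduced linear operator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = X2 @ Vt.conj().T @ np.diag(1.0 / s) @ W    # exact DMD modes
        return eigvals, modes

    # Toy data: two oscillating spatial patterns over 50 "locations" and 100 time steps.
    t = np.linspace(0, 10, 100)
    x = np.linspace(0, 1, 50)[:, None]
    X = np.sin(2 * np.pi * x) * np.exp(0.1j * 2 * np.pi * t) + \
        0.5 * np.cos(np.pi * x) * np.exp((0.05j * 2 * np.pi - 0.05) * t)
    eigvals, modes = dmd(X, r=2)
    print(np.round(eigvals, 3))        # discrete-time eigenvalues of the two dynamic modes
    ```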

  11. Management intensity alters decomposition via biological pathways

    USGS Publications Warehouse

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage, or extent, of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  12. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the "hope" of good decompositions and the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocode of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms, which are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
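
    For orientation, the sketch below solves the chains-on-chains problem with the textbook probe-and-bisect approach: binary-search the bottleneck value and greedily test feasibility. This is simpler than (and not identical to) the exact algorithms proposed in the paper, but it returns the optimal bottleneck for integer weights.

    ```python
    def probe(weights, P, B):
        """Greedy feasibility test: can the chain be cut into <= P parts of load <= B?"""
        parts, load = 1, 0
        for w in weights:
            if w > B:
                return False
            if load + w > B:
                parts, load = parts + 1, w   # start a new part
            else:
                load += w
        return parts <= P

    def chains_on_chains(weights, P):
        lo, hi = max(weights), sum(weights)  # bounds on the bottleneck (integer weights)
        while lo < hi:
            mid = (lo + hi) // 2
            if probe(weights, P, mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    weights = [3, 9, 7, 8, 2, 6, 5, 10, 1, 4]    # nonuniform workload array
    print(chains_on_chains(weights, P=3))        # optimal bottleneck load
    ```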

  13. Iterative most likely oriented point registration.

    PubMed

    Billings, Seth; Taylor, Russell

    2014-01-01

    A new algorithm for model based registration is presented that optimizes both position and surface normal information of the shapes being registered. This algorithm extends the popular Iterative Closest Point (ICP) algorithm by incorporating the surface orientation at each point into both the correspondence and registration phases of the algorithm. For the correspondence phase an efficient search strategy is derived which computes the most probable correspondences considering both position and orientation differences in the match. For the registration phase an efficient, closed-form solution provides the maximum likelihood rigid body alignment between the oriented point matches. Experiments by simulation using human femur data demonstrate that the proposed Iterative Most Likely Oriented Point (IMLOP) algorithm has a strong accuracy advantage over ICP and has increased ability to robustly identify a successful registration result. PMID:25333116
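
    The registration phase shared by ICP-style methods is the closed-form least-squares rigid transform between matched pairs, sketched below via the SVD (Kabsch) solution. IMLOP's oriented-point correspondence search and its orientation-aware alignment are not reproduced here.

    ```python
    import numpy as np

    def rigid_align(P, Q):
        """Return R, t minimizing sum ||R p_i + t - q_i||^2 over matched pairs (p_i, q_i)."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        H = Pc.T @ Qc
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        return R, t

    # Synthetic check: recover a known rotation + translation from matched points.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((100, 3))
    angle = 0.4
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([0.5, -1.0, 2.0])
    Q = P @ R_true.T + t_true
    R, t = rigid_align(P, Q)
    print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
    ```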

  14. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  15. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of Cholesky decomposition of covariance matrices, a crucial computation in adaptive signal processing. Following this are discussions on issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation. 33 references.

  16. Point-based manifold harmonics.

    PubMed

    Liu, Yang; Prabhakaran, Balakrishnan; Guo, Xiaohu

    2012-10-01

    This paper proposes an algorithm to build a set of orthogonal Point-Based Manifold Harmonic Bases (PB-MHB) for spectral analysis over point-sampled manifold surfaces. To ensure that the PB-MHB are orthogonal to each other, it is necessary to have a symmetrizable discrete Laplace-Beltrami Operator (LBO) over the surfaces. The existing convergent discrete LBO for point clouds, as proposed by Belkin et al., is not guaranteed to be symmetrizable. We build a new point-wise discrete LBO over the point-sampled surface that is guaranteed to be symmetrizable, and prove its convergence. By solving the eigenproblem related to the new operator, we define a set of orthogonal bases over the point cloud. Experiments show that the new operator converges better than other symmetrizable discrete Laplacian operators (such as the graph Laplacian) defined on point-sampled surfaces, and can provide orthogonal bases for further spectral geometric analysis and processing tasks. PMID:22879345
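
    As a rough stand-in for the idea (not the paper's symmetrizable point-wise LBO discretization), the sketch below builds a Gaussian-weighted graph Laplacian over a point cloud; because the matrix is symmetric, its eigenvectors form an orthogonal basis suitable for spectral analysis.

    ```python
    import numpy as np

    def graph_laplacian_basis(points, sigma=0.2, n_basis=6):
        """Return the first n_basis eigenpairs of a Gaussian-weighted graph Laplacian."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(axis=1)) - W             # symmetric, so eigenvectors are orthogonal
        eigvals, eigvecs = np.linalg.eigh(L)
        return eigvals[:n_basis], eigvecs[:, :n_basis]

    # Points sampled on a circle: the low-frequency basis functions resemble Fourier modes.
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    pts = np.column_stack([np.cos(theta), np.sin(theta)])
    vals, basis = graph_laplacian_basis(pts)
    print(vals.round(4), np.allclose(basis.T @ basis, np.eye(6)))   # orthonormal basis
    ```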

  17. New iterative gridding algorithm using conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Jiang, Xuguang; Thedens, Daniel

    2004-05-01

    Non-uniformly sampled data in MRI applications must be interpolated onto a regular Cartesian grid to perform fast image reconstruction using the FFT. The conventional method for this is gridding, which requires a density compensation function (DCF). The calculation of the DCF may be time-consuming, is ambiguously defined, and may not always be reusable due to changes in k-space trajectories. A recently proposed reconstruction method that eliminates the requirement for a DCF is block uniform resampling (BURS), which uses the singular value decomposition (SVD). However, the SVD is still computationally intensive. In this work, we present a modified BURS algorithm using the conjugate gradient method (CGM) in place of direct SVD calculation. Calculating a block of grid-point values in each iteration further reduces the computational load. The new method reduces the calculation complexity while maintaining a high-quality reconstruction result. For an n-by-n matrix, the time complexity per iteration is reduced from O(n^3) for the SVD to O(n^2) for CGM. The time can be further reduced by stopping the CGM iteration early according to the norm of the residual vector. Using this method, the quality of the reconstructed image improves compared to regularized BURS. The reduced time complexity and improved reconstruction result make the new algorithm promising for dealing with large and 3D images.
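
    The building block that replaces the direct SVD is an ordinary conjugate gradient solve, sketched below for a generic symmetric positive definite system with the early-stopping test on the residual norm mentioned in the abstract; the k-space gridding machinery around it is not shown.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:        # early stop on the residual norm
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)            # SPD test matrix
    b = rng.standard_normal(50)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))         # near zero
    ```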

  18. Ground state for CH2 and symmetry for methane decomposition

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Luo, Wen-Lang; Ruan, Wen; Jiang, Gang; Zhu, Zheng-He

    2008-06-01

    Using different levels of theory, i.e. B3P86, BLYP, B3PW91, HF, QCISD, CASSCF(4,4) and MP2, with various basis sets (6-311G**, D95, cc-pVTZ and DGDZVP), the calculations of this paper confirm that the ground state of CH2 is X̃ 3B1 with C2v symmetry. Furthermore, three kinds of theoretical methods, i.e. B3P86, CCSD(T, MP4) and G2 with the cc-pVTZ basis set only, are used to recalculate the zero-point energy correction, which is modified by a scaling factor of 0.989 for the high level based on the virial theorem, and also corrected for basis set superposition error. These results are also contrary to X̃ 3Σg- being the ground state of CH2 in the reference. Based on atomic and molecular reaction statics, this paper proves that decomposition type (1), i.e. CH4 → CH2 + H2, is forbidden and decomposition type (2), i.e. CH4 → CH3 + H, is allowed for CH4. This is similar to the decomposition of SiH4.

  19. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
