Science.gov

Sample records for point decomposition algorithm

  1. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  2. Finding corner point correspondence from wavelet decomposition of image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; LeMoigne, Jacqueline

    1997-01-01

    A time efficient algorithm for image registration between two images that differ in translation is discussed. The algorithm is based on a coarse-to-fine strategy using wavelet decomposition of both images. The wavelet decomposition serves two different purposes: (1) its high frequency components are used to detect feature points (corner points here) and (2) it provides a coarse-to-fine structure for making the algorithm time efficient. The algorithm is based on detecting the corner points from one of the images, called the reference image, and computing corresponding points in the other image, called the test image, by using local correlations over 7x7 windows centered around the corner points. The corresponding points are detected at the lowest decomposition level in a search area of about 11x11 (depending on the translation) and potential points of correspondence are projected onto higher levels. In the subsequent levels the local correlations are computed in a search area of no more than 3x3 for refinement of the correspondence.
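    A minimal sketch of the coarse-to-fine correlation matching described above, assuming the wavelet approximation images are already available as NumPy arrays and corner points have been detected at the coarsest level. The window size (7x7), coarsest search area (11x11), and refinement area (3x3) follow the abstract; the function names and pyramid layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def best_match(ref_img, test_img, ref_pt, test_center, search, half=3):
    """Correlate the (2*half+1)^2 window around ref_pt in the reference image
    against test-image windows centred within `search` pixels of test_center;
    return the best-matching test position and its score."""
    r, c = ref_pt
    ref_win = ref_img[r - half:r + half + 1, c - half:c + half + 1]
    best_score, best_pos = -2.0, test_center
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = test_center[0] + dr, test_center[1] + dc
            win = test_img[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if win.shape != ref_win.shape:      # skip windows cut by the border
                continue
            score = ncc(ref_win, win)
            if score > best_score:
                best_score, best_pos = score, (rr, cc)
    return best_pos, best_score

def coarse_to_fine(ref_pyr, test_pyr, corners_coarse):
    """ref_pyr/test_pyr are lists of approximation images from finest (index 0)
    to coarsest (index -1). Matches found at the coarsest level in an 11x11
    search area are projected up and refined in 3x3 neighbourhoods."""
    matches = {p: best_match(ref_pyr[-1], test_pyr[-1], p, p, search=5)[0]
               for p in corners_coarse}
    for level in range(len(ref_pyr) - 2, -1, -1):
        matches = {(2 * pr, 2 * pc):
                   best_match(ref_pyr[level], test_pyr[level],
                              (2 * pr, 2 * pc), (2 * mr, 2 * mc), search=1)[0]
                   for (pr, pc), (mr, mc) in matches.items()}
    return matches
```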

  3. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  4. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P with 1 ≤ P ≤ n/log n) deterministically as well as randomly on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.

  5. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements or atoms. A best fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
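    As context for the improvements above, here is a minimal sketch of the classical matching pursuit loop: correlate the residual with every atom, pick the best fit, subtract it, and repeat. It assumes a dictionary whose rows are unit-norm atoms sampled on the same grid as the signal; it is not the MPD++ implementation, and all names are illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_iter=50, tol=1e-6):
    """Classical MPD sketch. `dictionary` is an (n_atoms, n_samples) array
    whose rows are unit-norm atoms; returns coefficients, final residual,
    and the reconstructed signal."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(max_iter):
        corr = dictionary @ residual          # cross-correlation with every atom
        k = int(np.argmax(np.abs(corr)))      # best-fit atom
        if abs(corr[k]) < tol:                # stopping criterion
            break
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]   # peel off the selected atom
    reconstruction = coeffs @ dictionary
    return coeffs, residual, reconstruction
```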

  6. A convergent hybrid decomposition algorithm model for SVM training.

    PubMed

    Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco

    2009-06-01

    Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be very large and the Hessian matrix cannot be stored. To take this issue into account, a common strategy consists in using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potential of the approach. PMID:19435679

  7. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, partly motivated by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator to be used in the iterative solution of the interface problem is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.

  8. Nonparametric decomposition of quasi-periodic time series for change-point detection

    NASA Astrophysics Data System (ADS)

    Artemov, Alexey; Burnaev, Evgeny; Lokot, Andrey

    2015-12-01

    The paper is concerned with the sequential online change-point detection problem for a dynamical system driven by a quasiperiodic stochastic process. We propose a multicomponent time series model and an effective online decomposition algorithm to approximate the components of the model. Assuming the stationarity of the obtained components, we approach the change-point detection problem on a per-component basis and propose two online change-point detection schemes corresponding to two real-world scenarios. Experimental results for the decomposition and detection algorithms on synthesized and real-world datasets are provided to demonstrate the efficiency of our change-point detection framework.

  9. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  10. The Empirical Mode Decomposition algorithm via Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Artemyev, Dmitry N.; Khramov, Alexander G.

    2014-09-01

    In this paper we consider the problem of implementing a fast algorithm for the Empirical Mode Decomposition (EMD). EMD is one of the newest methods for decomposition of non-linear and non-stationary signals. The basis of EMD is formed "on-the-fly", i.e. it depends on the distribution of the signal and is not given a priori, in contrast to the Fourier Transform (FT) or Wavelet Transform (WT). The EMD requires interpolation of the local extrema sets of the signal to find the upper and lower envelopes. Data interpolation on an irregular lattice is a very low-performance procedure. The classical description of EMD by Huang suggests doing this through splines, i.e. through solving a system of equations. Existence of a fast algorithm is the main advantage of the FT. Describing an algorithm in terms of the Fast Fourier Transform (FFT) is a standard practice to reduce the operation count. We offer a fast implementation of EMD (FEMD) through the FFT and some other cost-efficient algorithms. The basic two-stage interpolation algorithm for EMD is composed of an Upscale procedure through the FFT and a Downscale procedure through a selection procedure for the signal's points. First we consider the local maxima (or minima) set without reference to the axis OX, i.e. on a regular lattice. The Upscale through the FFT changes the signal's length to the Least Common Multiple (LCM) value of all distances between neighboring extrema on the axis OX. If the LCM value is too large then it is necessary to limit the local set of extrema. In this case it is an analog of spline interpolation. A demo of FEMD in a noise reduction task for OCT is shown.
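    The speed-up rests on the fact that interpolation of regularly sampled data can be done through the FFT. Below is a minimal sketch of FFT (zero-padding) interpolation of a uniformly sampled sequence by an integer factor; it illustrates only the Upscale idea, not the authors' LCM-based FEMD procedure.

```python
import numpy as np

def fft_interpolate(x, factor):
    """Interpolate a uniformly sampled real signal by an integer factor by
    zero-padding its spectrum (the FFT-based 'Upscale' idea)."""
    n = len(x)
    X = np.fft.rfft(x)
    X_pad = np.zeros(factor * n // 2 + 1, dtype=complex)
    X_pad[:len(X)] = X                                  # keep the original spectrum
    return np.fft.irfft(X_pad, n=factor * n) * factor   # rescale amplitude
```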

  11. An accurate product SVD (singular value decomposition) algorithm

    SciTech Connect

    Bojanczyk, A.W.; Luk, F.T.; Ewerbring, M.; Van Dooren, P.

    1990-01-01

    In this paper, we propose a new algorithm for computing a singular value decomposition of a product of three matrices. We show that our algorithm is numerically desirable in that all relevant residual elements will be numerically small. 12 refs., 1 tab.

  12. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: Stochastic programming, hydroelectric scheduling, large-scale systems.

  13. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES Beta

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; Wu, Kun-Lung; Catalyurek, Umit V.

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
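    For reference, a minimal sketch of the standard non-incremental peeling computation of core numbers (the baseline such incremental algorithms are compared against): repeatedly remove a minimum-degree vertex and record the largest degree seen so far. The adjacency representation and names are assumptions for illustration.

```python
from collections import defaultdict

def core_numbers(adj):
    """Core number of every vertex of an undirected graph given as a dict
    mapping each vertex to the set of its neighbours."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    buckets = defaultdict(set)
    for v, d in degree.items():
        buckets[d].add(v)
    core, removed, k = {}, set(), 0
    for _ in range(len(adj)):
        d = min(b for b in buckets if buckets[b])   # smallest non-empty degree bucket
        k = max(k, d)
        v = buckets[d].pop()
        core[v] = k
        removed.add(v)
        for u in adj[v]:                            # peel v: lower neighbours' degrees
            if u in removed:
                continue
            buckets[degree[u]].discard(u)
            degree[u] -= 1
            buckets[degree[u]].add(u)
    return core
```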

  14. Efficient variants of the vertex space domain decomposition algorithm

    SciTech Connect

    Chan, T.F.; Shao, J.P.; Mathew, T.P.

    1994-11-01

    Several variants of the vertex space algorithm of Smith for two-dimensional elliptic problems are described. The vertex space algorithm is a domain decomposition method based on nonoverlapping subregions, in which the reduced Schur complement system on the interface is solved using a generalized block Jacobi-type preconditioner, with the blocks corresponding to the vertex space, edges, and a coarse grid. Two kinds of approximations are considered for the edge and vertex space subblocks, one based on Fourier approximation, and another based on an algebraic probing technique in which sparse approximations to these subblocks are computed. The motivation is to improve the efficiency of the algorithm without sacrificing the optimal convergence rate. Numerical and theoretical results on the performance of these algorithms, including variants of an algorithm of Bramble, Pasciak, and Schatz are presented.

  15. A point matching algorithm based on reference point pair

    NASA Astrophysics Data System (ADS)

    Zou, Huanxin; Zhu, Youqing; Zhou, Shilin; Lei, Lin

    2016-03-01

    Outliers and occlusions are important sources of degradation in real applications of point matching. In this paper, a novel point matching algorithm based on reference point pairs is proposed. In each iteration, it first eliminates the dubious matches to obtain the relatively accurate matching points (reference point pairs), and then calculates the shape contexts of the removed points with reference to them. After re-matching the removed points, the reference point pairs are combined to achieve better correspondences. Experiments on synthetic data validate the advantages of our method in comparison with some classical methods.

  16. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal components analysis. Then the feature descriptor of each key point is constructed; it consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the descriptor similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sample consensus algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
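    The final step described above, recovering the rigid transformation from the optimized correspondences via singular value decomposition, is the classical Kabsch/Procrustes solution. A minimal sketch, assuming two NumPy arrays of already-matched points (one point per row):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rotation R and translation t aligning rows of `src` onto
    rows of `dst` (same shape, matched row-by-row), via SVD of the
    cross-covariance matrix."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Usage: aligned = src @ R.T + t maps the source cloud onto the target cloud.
```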

  17. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  18. Implementation and performance of a domain decomposition algorithm in Sisal

    SciTech Connect

    DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.

    1993-09-23

    Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely-used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer, and a cache-coherent scalar multiprocessor.

  19. Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multisource data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of Girvan and Newman, for example, makes it impractical for graphs with more than approximately 10^4 nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10^9 nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in Girvan and Newman
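    For scale comparison, the Girvan-Newman baseline mentioned above is available in NetworkX; the toy snippet below shows edge-betweenness-based splitting on a small built-in graph. This is the cubic-cost reference method, not the scalable LLNL algorithm described in the record.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Girvan-Newman repeatedly removes the edge with highest betweenness;
# taking the first element of the iterator yields the first two-way split.
G = nx.karate_club_graph()
communities = next(girvan_newman(G))
print([sorted(c) for c in communities])
```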

  20. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Weerapurage, Dinesh P; Sullivan, Blair D; Groer, Christopher S

    2013-01-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of required dynamic programming tables and excessive running times of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree-decomposition based approach to solve maximum weighted independent set. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  1. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  2. Singular value decomposition utilizing parallel algorithms on graphical processors

    SciTech Connect

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

    One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) Σ_k X(k) X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = U Σ V^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^H A, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder
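    A minimal NumPy sketch of the point made above: the eigen-decomposition of the sample spectral matrix can be read directly off the SVD of the snapshot matrix, without ever forming the covariance. The array sizes and random data below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 64                                  # array elements, frequency snapshots
X = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))

# SVD of the scaled snapshot matrix: C_x = (X/sqrt(K)) (X/sqrt(K))^H = U diag(s^2) U^H
U, s, Vh = np.linalg.svd(X / np.sqrt(K), full_matrices=False)
eigvals_from_svd = s ** 2                     # eigenvalues of C_x
eigvecs_from_svd = U                          # eigenvectors of C_x

# Cross-check against the explicit covariance route
C = (X @ X.conj().T) / K
w = np.sort(np.linalg.eigvalsh(C))[::-1]      # descending eigenvalues
assert np.allclose(w, eigvals_from_svd)
```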

  3. On the equivalence of a class of inverse decomposition algorithms for solving systems of linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A class of direct inverse decomposition algorithms for solving systems of linear equations is presented. Their behavior in the presence of round-off errors is analyzed. It is shown that, under some mild restrictions on their implementation, the direct inverse decomposition algorithms presented are equivalent in terms of the error complexity measures.

  4. A one-sided Jacobi algorithm for computing the singular value decomposition on a vector computer

    SciTech Connect

    De Rijk, P.P.M.

    1989-03-01

    An old algorithm for computing the singular value decomposition, first mentioned by Hestenes, has gained renewed interest because of its properties of parallelism and vectorizability. Some computational modifications are given and a comparison with the well-known Golub-Reinsch algorithm is made. In this paper comparative experiments on the CYBER 205 are reported.

  5. An optimal and efficient new gridding algorithm using singular value decomposition.

    PubMed

    Rosenfeld, D

    1998-07-01

    The problem of handling data that falls on a nonequally spaced grid occurs in numerous fields of science, ranging from radio-astronomy to medical imaging. In MRI, this condition arises when sampling under time-varying gradients in sequences such as echo-planar imaging (EPI), spiral scans, or radial scans. The technique currently being used to interpolate the nonuniform samples onto a Cartesian grid is called the gridding algorithm. In this paper, a new method for uniform resampling is presented that is both optimal and efficient. It is first shown that the resampling problem can be formulated as a problem of solving a set of linear equations Ax = b, where x and b are vectors of the uniform and nonuniform samples, respectively, and A is a matrix of the sinc interpolation coefficients. In a procedure called Uniform Re-Sampling (URS), this set of equations is given an optimal solution using the pseudoinverse matrix which is computed using singular value decomposition (SVD). In large problems, this solution is neither practical nor computationally efficient. Another method is presented, called the Block Uniform Re-Sampling (BURS) algorithm, which decomposes the problem into solving a small set of linear equations for each uniform grid point. These equations are a subset of the original equations Ax = b and are once again solved using SVD. The final result is both optimal and computationally efficient. The results of the new method are compared with those obtained using the conventional gridding algorithm via simulations. PMID:9660548
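    A minimal sketch of the URS formulation described above, assuming unit-spaced uniform grid indices and nonuniform sample locations expressed in the same units. The sinc coefficient matrix and pseudoinverse (computed internally via SVD by np.linalg.pinv) follow the Ax = b description; the function and variable names are illustrative, and the block-wise BURS variant is not shown.

```python
import numpy as np

def uniform_resample_urs(nonuniform_t, nonuniform_b, n_uniform):
    """Recover uniform samples x from nonuniform samples b = A x, where A holds
    sinc interpolation coefficients, using the SVD-based pseudoinverse."""
    t = np.asarray(nonuniform_t, dtype=float)
    b = np.asarray(nonuniform_b)
    k = np.arange(n_uniform)                  # uniform grid indices
    A = np.sinc(t[:, None] - k[None, :])      # sinc interpolation coefficient matrix
    return np.linalg.pinv(A) @ b              # optimal least-squares/min-norm solution
```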

  6. Chaotic Visual Cryptosystem Using Empirical Mode Decomposition Algorithm for Clinical EEG Signals.

    PubMed

    Lin, Chin-Feng

    2016-03-01

    This paper proposes a chaotic visual cryptosystem using an empirical mode decomposition (EMD) algorithm for clinical electroencephalography (EEG) signals. The basic design concept is to integrate two-dimensional (2D) chaos-based encryption scramblers, the EMD algorithm, and a 2D block interleaver method to achieve a robust and unpredictable visual encryption mechanism. Energy-intrinsic mode function (IMF) distribution features of the clinical EEG signal are developed for chaotic encryption parameters. The maximum and second maximum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used for the starting points of chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. The minimum and second minimum energy ratios of the IMFs of a clinical EEG signal to its referred total energy are used for the security level parameters of chaotic logistic map types of encrypted chaotic signals in the x and y vectors, respectively. Three EEG databases and seventeen clinical EEG signals were tested, and the average r and mse values are 0.0201 and 4.2626 × 10^-29, respectively, for the original and chaotically-encrypted (through EMD) clinical EEG signals. The chaotically-encrypted signal cannot be recovered if there is an error in the input parameters, for example, an initial point error of 0.000001%. The encryption effects of the proposed chaotic EMD visual encryption mechanism are excellent. PMID:26645316
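    A minimal sketch of the logistic-map keystream generation underlying such a scheme. Here the starting point and control parameter are plain arguments (in the paper they would be derived from the EMD energy ratios); the snippet only illustrates the sensitivity to initial conditions, not the full cryptosystem.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Chaotic sequence from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return x

stream_a = logistic_keystream(0.3141592, 3.99, 1000)
stream_b = logistic_keystream(0.3141592 + 1e-8, 3.99, 1000)  # diverges within a few dozen steps
```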

  7. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) along with determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.

  8. Improved MCA-TV algorithm for interference hyperspectral image decomposition

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Zhao, Junsuo; Cailing, Wang

    2015-12-01

    The technology of interference hyperspectral imaging, which can get the spectral and spatial information of the observed targets, is a very powerful technology in the field of remote sensing. Due to the special imaging principle, there are many position-fixed interference fringes in each frame of the interference hyperspectral image (IHI) data. This characteristic will affect the result of compressed sensing theory and traditional compression algorithms used on IHI data. According to this characteristic of the IHI data, morphological component analysis (MCA) is adopted to separate the interference fringes layers and the background layers of the LSMIS (Large Spatially Modulated Interference Spectral Image) data, and an improved MCA and Total Variation (TV) combined algorithm is proposed in this paper. An update mode of the threshold in traditional MCA is proposed, and the traditional TV algorithm is also improved according to the unidirectional characteristic of the interference fringes in IHI data. The experimental results prove that the proposed improved MCA-TV (IMT) algorithm can get better results than the traditional MCA, and also can meet the convergence conditions much faster than the traditional MCA.

  9. Multiobjective biogeography based optimization algorithm with decomposition for community detection in dynamic networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin; Sun, Geng

    2015-10-01

    Identifying community structures in a static network misses the opportunity to capture evolutionary patterns. So community detection in dynamic networks has attracted many researchers. In this paper, a multiobjective biogeography based optimization algorithm with decomposition (MBBOD) is proposed to solve the community detection problem in dynamic networks. In the proposed algorithm, the decomposition mechanism is adopted to optimize two evaluation objectives named modularity and normalized mutual information simultaneously, which measure the quality of the community partitions and temporal cost, respectively. A novel sorting strategy for multiobjective biogeography based optimization is presented for comparing the quality of habitats to get species counts. In addition, problem-specific migration and mutation models are introduced to improve the effectiveness of the new algorithm. Experimental results both on synthetic and real networks demonstrate that our algorithm is effective and promising, and it can detect communities more accurately in dynamic networks compared with DYNMOGA and FaceNet.

  10. Automated decomposition algorithm for Raman spectra based on a Voigt line profile model.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2016-05-20

    Raman spectra measured by spectrometers usually suffer from band overlap and random noise. In this paper, an automated decomposition algorithm based on a Voigt line profile model for Raman spectra is proposed to solve this problem. To decompose a measured Raman spectrum, a Voigt line profile model is introduced to parameterize the measured spectrum, and a Gaussian function is used as the instrumental broadening function. Hence, the issue of spectral decomposition is transformed into a multiparameter optimization problem over the Voigt line profile model parameters. The algorithm can eliminate instrumental broadening, obtain a recovered Raman spectrum, resolve overlapping bands, and suppress random noise simultaneously. Moreover, the recovered spectrum can be decomposed into a group of Lorentzian functions. Experimental results on simulated Raman spectra show that the performance of this algorithm is much better than that of a commonly used blind deconvolution method. The algorithm has also been tested on industrial Raman spectra of ortho-xylene and proved to be effective. PMID:27411136
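    A minimal sketch of the Voigt line profile used as the band model above, evaluated through the Faddeeva function in SciPy. The parameter names are illustrative, and the paper's fitting and deconvolution machinery is not reproduced here.

```python
import numpy as np
from scipy.special import wofz

def voigt(x, center, sigma, gamma, amplitude=1.0):
    """Voigt profile: convolution of a Gaussian (width sigma) with a Lorentzian
    (half-width gamma), computed via the Faddeeva function w(z)."""
    z = ((x - center) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return amplitude * np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# A measured band would then be modelled as a sum of such profiles (plus a
# Gaussian instrumental term), with parameters fitted e.g. by
# scipy.optimize.curve_fit.
```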

  11. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    SciTech Connect

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  12. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
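    For comparison with the heap-transform approach, a minimal sketch of plain Givens-rotation QR decomposition of a real matrix (not the signal-induced heap transform itself); each 2x2 rotation zeroes one subdiagonal entry while Q is accumulated.

```python
import numpy as np

def qr_givens(A):
    """QR decomposition of a real m x n matrix by Givens rotations."""
    R = A.astype(float).copy()
    m, n = R.shape
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])          # zeroes R[i, j]
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R                                       # A ≈ Q @ R, Q orthogonal
```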

  13. A domain decomposition algorithm for solving large elliptic problems

    SciTech Connect

    Nolan, M.P.

    1991-01-01

    An algorithm which efficiently solves large systems of equations arising from the discretization of a single second-order elliptic partial differential equation is discussed. The global domain is partitioned into not necessarily disjoint subdomains which are traversed using the Schwarz Alternating Procedure. On each subdomain the multigrid method is used to advance the solution. The algorithm has the potential to decrease solution time when data is stored across multiple levels of a memory hierarchy. Results are presented for a virtual memory, vector multiprocessor architecture. A study of the choice of inner iteration procedure and subdomain overlap is presented for a model problem, solved with two and four subdomains, sequentially and in parallel. Microtasking multiprocessing results are reported for multigrid on the Alliant FX-8 vector multiprocessor. A convergence proof for a class of matrix splittings for the two-dimensional Helmholtz equation is given. 70 refs., 3 figs., 20 tabs.

  14. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  15. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL^-1), gold (0.9 mg mL^-1), and gadolinium (2.9 mg mL^-1) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  16. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data

    NASA Astrophysics Data System (ADS)

    Clark, Darin P.; Badea, Cristian T.

    2014-10-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL-1), gold (0.9 mg mL-1), and gadolinium (2.9 mg mL-1) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  17. Determination of the Thermal Decomposition Products of Terephthalic Acid by Using Curie-Point Pyrolyzer

    NASA Astrophysics Data System (ADS)

    Begüm Elmas Kimyonok, A.; Ulutürk, Mehmet

    2016-04-01

    The thermal decomposition behavior of terephthalic acid (TA) was investigated by thermogravimetry/differential thermal analysis (TG/DTA) and Curie-point pyrolysis. TG/DTA analysis showed that TA sublimes at 276°C prior to decomposition. Pyrolysis studies were carried out at various temperatures ranging from 160 to 764°C. Decomposition products were analyzed and their structures were determined by gas chromatography-mass spectrometry (GC-MS). A total of 11 degradation products were identified at 764°C, whereas no peak was observed below 445°C. Benzene, benzoic acid, and 1,1'-biphenyl were identified as the major decomposition products, and other degradation products such as toluene, benzophenone, diphenylmethane, styrene, benzaldehyde, phenol, 9H-fluorene, and 9-phenyl-9H-fluorene were also detected. A pyrolysis mechanism was proposed based on these findings.

  18. Decomposition

    USGS Publications Warehouse

    Middleton, Beth A.

    2014-01-01

    A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates, and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production, and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant to both the study of ecosystem ecology and projections of future conditions for human societies.

  19. Sweeping algorithms for five-point stencils and banded matrices

    SciTech Connect

    Kwong, Man Kam.

    1992-06-01

    We record MATLAB experiments implementing the sweeping algorithms we proposed recently to solve five-point stencils arising from the discretization of partial differential equations, notably the Ginzburg-Landau equations from the theory of superconductivity. Algorithms tested include two-direction, multistage, and partial sweeping.

  20. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. QR decomposition is implemented with the Coordinate Rotation DIgital Computer (CORDIC) algorithm. CORDIC requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (Eigen Value Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
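    A minimal floating-point sketch of CORDIC in vectoring mode, which yields exactly the magnitude and rotation angle a Givens rotation needs. Real hardware replaces the multiplications by 2^-i with shifts and uses fixed-point arithmetic; the function below is an illustration under those simplifying assumptions, not the paper's TSA design.

```python
import math

def cordic_vectoring(x, y, iterations=32):
    """Rotate (x, y) onto the positive x-axis with shift-and-add style updates.
    Returns (magnitude, angle); valid for x > 0 (|angle| below about 99.9 degrees)."""
    gain = 1.0
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0                      # rotate towards the x-axis
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)               # accumulate the applied rotation
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))        # CORDIC scale factor
    return x / gain, angle

r, theta = cordic_vectoring(3.0, 4.0)                   # ~ (5.0, atan2(4, 3))
```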

  1. Combining DC algorithms (DCAs) and decomposition techniques for the training of nonpositive-semidefinite kernels.

    PubMed

    Akoa, François Bertrand

    2008-11-01

    Today, decomposition methods are one of the most popular methods for training support vector machines (SVMs). With the use of kernels that do not satisfy Mercer's condition, new techniques must be designed to handle the nonpositive-semidefinite kernels resulting from this choice. In this work we incorporate difference of convex functions (DC) optimization techniques into decomposition methods to tackle this difficulty. The new approach needs no problem modification, and we show that using only a truncated DC algorithm (DCA) in the decomposition scheme produces a sufficient decrease of the objective function at each iteration. Thanks to this property, an asymptotic convergence proof of the new algorithm is produced without any blockwise convexity assumption on the objective function. We also investigate a working set selection rule using second-order information for sequential minimal optimization (SMO)-type decomposition in the spirit of DC optimization. Numerical results show the robustness and the efficiency of the new methods compared with state-of-the-art software. PMID:18990641

  2. Automatic outlier suppression for rigid coherent point drift algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Songlin; Tu, Ruibin; Niu, Zhaodong; Li, Na; Chen, Zengping

    2014-10-01

    Point pattern matching (PPM), including hard assignment and soft assignment approaches, has attracted much attention. A typical probability-based method is the Coherent Point Drift (CPD) algorithm, which treats one point set (the model point set) as the centroids of a Gaussian mixture model and then fits it to the other (the target point set). It uses the expectation maximization (EM) framework, where the point correspondences and transformation parameters are updated alternately. But the anti-outlier performance of CPD is not robust enough, because outliers remain involved in the computation until CPD converges. So we propose an automatic outlier suppression mechanism (AOS) to overcome this shortcoming of CPD. Firstly, inliers and outliers are distinguished by converting the matching probability matrix into a doubly stochastic matrix. Then, transformation parameters are fitted using the accurate matching point sets. Finally, the model point set is forced to move coherently to the target point set by this transformation model. The transformed model point set is imported into the EM iteration again and the cycle repeats itself. The iteration finishes when the matching probability matrix converges or the cardinality of the accurate matching point set reaches its maximum. Besides, the covariance should be updated by the newest position error before re-entering the EM algorithm. The experimental results based on both synthetic and real data indicate that, compared with other algorithms, AOS-CPD is more robust and efficient. It offers good practicality and accuracy in rigid PPM applications.

  3. Monte Carlo algorithm for least dependent non-negative mixture decomposition.

    PubMed

    Astakhov, Sergey A; Stögbauer, Harald; Kraskov, Alexander; Grassberger, Peter

    2006-03-01

    We propose a simulated annealing algorithm (stochastic non-negative independent component analysis, SNICA) for blind decomposition of linear mixtures of non-negative sources with non-negative coefficients. The demixing is based on a Metropolis-type Monte Carlo search for least dependent components, with the mutual information between recovered components as a cost function and their non-negativity as a hard constraint. Elementary moves are shears in two-dimensional subspaces and rotations in three-dimensional subspaces. The algorithm is geared at decomposing signals whose probability densities peak at zero, the case typical in analytical spectroscopy and multivariate curve resolution. The decomposition performance on large samples of synthetic mixtures and experimental data is much better than that of traditional blind source separation methods based on principal component analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS, BTEM). PMID:16503615

  4. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    SciTech Connect

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  5. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  6. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806

  7. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  8. Polynomial interior-point algorithms for horizontal linear complementarity problem

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.; Bai, Y. Q.

    2009-11-01

    In this paper a class of polynomial interior-point algorithms for the horizontal linear complementarity problem, based on a new parametric kernel function with parameters p ∈ [0,1] and σ ≥ 1, is presented. The proposed parametric kernel function is neither exponentially convex nor strongly convex like the usual kernel functions, and it has a finite value at the boundary of the feasible region. It is used both for determining the search directions and for measuring the distance between the given iterate and the μ-center for the algorithm. The currently best known iteration bounds for the algorithm with large- and small-update methods are derived, which reduce the gap between the practical behavior of the algorithms and their theoretical performance results. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p, σ and θ.

  9. A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.

    PubMed

    Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei

    2015-11-01

    A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U V^T, with U ∈ R^(m×r) and V ∈ R^(n×r) (r < m, n), of a matrix Y ∈ R^(m×n) (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr^2 ρ^r) or O(n^3 ρ^(3r)) floating-point operations (flops) are required, which compares favorably with the O(mnr + r^2(m+n)) flops required by the state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n^2, mr+nr) real values need to be saved. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k × nρ^k or mρ^l × l, where k or l usually takes the value r+1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any one of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. The experimental results on random synthetic matrices with sizes such as 13 1072 ×10(24) and on real data sets such as dinosaur indicate that RSM is 4.30 ∼ 197.95 times faster than the state-of-the-art algorithms, while achieving or approximating the best precision. PMID:26208344

  10. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course knowledge points (AECKP) algorithm for online course. It includes document classification, Chinese word segmentation, and POS tagging for each document. Vector Space Model (VSM) is used to calculate similarity and design the weight to optimize the TF-IDF algorithm output values, and the higher scores will be selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy rate and recall rate. PMID:26448738

  12. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  13. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that satisfy the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm recently published by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
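
    Since the Richardson-Lucy algorithm is cited above as the photon-counting special case, a minimal sketch of that iteration may help fix ideas; it is not the scale-invariant-divergence algorithm of Lanteri et al., and the PSF handling and iteration count are illustrative choices.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50):
            """Multiplicative Richardson-Lucy deconvolution.  The multiplicative update
            preserves non-negativity and, for a normalized PSF, approximately conserves
            the total flux of the estimate, the two constraints emphasized above."""
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate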

  14. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging due to the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques such as moving averages, splines, and Loess smoothing exist. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal trend-decomposition procedure based on Loess (STL) decomposes VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces similar results as previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise" including diurnal variations while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of straight line relative to STL trend line; (ii) range of STL remainder; and (iii) variance of STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
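
    A rough sketch of the approach described above is given below, using statsmodels' STL and a simple mutation-plus-selection loop standing in for the genetic algorithm. It assumes the series is a 1-D NumPy array with a known seasonal period; the objective weights, parameter ranges and population settings are placeholders rather than the authors' configuration.

        import numpy as np
        from statsmodels.tsa.seasonal import STL

        def stl_cost(series, period, seasonal, trend, w=(1.0, 1.0, 1.0)):
            """Weighted objective: straight-line RMSE of the STL trend, range of the
            remainder, and variance of the remainder."""
            res = STL(series, period=period, seasonal=seasonal, trend=trend, robust=True).fit()
            t = np.arange(len(series))
            slope, intercept = np.polyfit(t, res.trend, 1)
            rmse = np.sqrt(np.mean((res.trend - (slope * t + intercept)) ** 2))
            return w[0] * rmse + w[1] * np.ptp(res.resid) + w[2] * np.var(res.resid)

        def optimize_stl(series, period, generations=20, pop_size=12, seed=0):
            """Evolve odd-valued (seasonal, trend) smoothing windows by mutation and selection."""
            rng = np.random.default_rng(seed)
            def random_individual():
                return 2 * rng.integers(3, 50) + 1, 2 * rng.integers(period, 5 * period) + 1
            population = [random_individual() for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(population, key=lambda p: stl_cost(series, period, *p))
                parents = scored[: pop_size // 2]
                children = [(max(7, s + 2 * rng.integers(-3, 4)),
                             max(period + 2, tr + 2 * rng.integers(-5, 6)))
                            for s, tr in parents]
                children = [(s | 1, tr | 1) for s, tr in children]   # STL needs odd windows
                population = parents + children
            return min(population, key=lambda p: stl_cost(series, period, *p))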

  15. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-04-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space

  17. Communication: Active space decomposition with multiple sites: Density matrix renormalization group algorithm

    SciTech Connect

    Parker, Shane M.; Shiozaki, Toru

    2014-12-07

    We extend the active space decomposition method, recently developed by us, to more than two active sites using the density matrix renormalization group algorithm. The fragment wave functions are described by complete or restricted active-space wave functions. Numerical results are shown on a benzene pentamer and a perylene diimide trimer. It is found that the truncation errors in our method decrease almost exponentially with respect to the number of renormalization states M, allowing for numerically exact calculations (to a few μE_h or less) with M = 128 in both cases. This rapid convergence is because the renormalization steps are used only for the interfragment electron correlation.

  18. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalizer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtime about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.

  19. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and at the same time simplest models of both the corresponding sub-systems and the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled error in mode selection that increases with the mode time scale. In this report we combine these two methods so that the developed algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and at the same time simplest ("optimal") models of climate systems

  20. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  1. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Askok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
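
    The non-negative matrix factorization step discussed above is easy to prototype; the following sketch factors synthetic non-negative "spectra" with scikit-learn's NMF. The Gaussian line shapes, mixing weights and solver settings are made-up stand-ins, not NASA's SDA or the emulator data.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(0.0, 1.0, 200)

        # Two emission-line-like source spectra and random non-negative mixing weights
        sources = np.vstack([np.exp(-0.5 * ((wavelengths - c) / 0.02) ** 2) for c in (0.3, 0.7)])
        weights = rng.random((100, 2))                         # 100 observed spectra
        observations = weights @ sources + 0.01 * rng.random((100, 200))

        model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(observations)                  # recovered abundances per spectrum
        H = model.components_                                  # recovered spectral signatures
        print("reconstruction error:", model.reconstruction_err_)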

  2. Object aggregation using merge-at-a-point algorithm

    NASA Astrophysics Data System (ADS)

    Salaria, Kanupriya; Darsono, Wiriyanto; Hinman, Michael; Linderman, Mark; Bai, Li

    2004-04-01

    This paper describes a novel technique to detect military convoys' moving patterns using Ground Moving Target Indicator (GMTI) data. The specific pattern studied here is groups of moving vehicles that are merging at a prescribed location. The algorithm can be used to detect a military convoy's identity so that the situation can be assessed to prevent hostile enemy military advancements. The technique uses the minimum error solution (MES) to predict the point of intersection of vehicle tracks. By comparing this point of intersection to the prescribed location, it can be determined whether the vehicles are merging. Two tasks are performed to effectively determine the merged vehicle group patterns: 1) investigate the number of vehicles needed in the MES algorithms, and 2) analyze three decision rules for clustering the vehicle groups. The simulation has shown an accuracy of approximately 88.9% in detecting vehicle groups that merge at a prescribed location.

  3. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
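
    For orientation, a minimal sketch of the standard CCD point-source signal-to-noise expression as a function of integration time is given below. The count rates, read noise and aperture size are invented illustrative numbers, not Hubble Wide Field/Planetary Camera values, and the secondary-star and background-structure terms of the actual algorithm are omitted.

        import numpy as np

        def point_source_snr(t, star_rate, sky_rate_per_pix, dark_rate_per_pix,
                             read_noise, n_pix):
            """Classic CCD equation: accumulated signal over the shot noise of star,
            sky and dark current plus read noise, summed over the aperture pixels."""
            signal = star_rate * t
            noise = np.sqrt(signal + n_pix * (sky_rate_per_pix * t
                                              + dark_rate_per_pix * t
                                              + read_noise ** 2))
            return signal / noise

        times = np.array([10.0, 100.0, 1000.0])   # seconds
        print(point_source_snr(times, star_rate=50.0, sky_rate_per_pix=2.0,
                               dark_rate_per_pix=0.05, read_noise=13.0, n_pix=25))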

  4. Despeckling algorithm on ultrasonic image using adaptive block-based singular value decomposition

    NASA Astrophysics Data System (ADS)

    Sae-Bae, Napa; Udomhunsakul, Somkait

    2008-03-01

    Speckle noise reduction is an important technique for enhancing the quality of ultrasonic images. In this paper, a despeckling algorithm based on adaptive block-based singular value decomposition (BSVD) filtering applied to ultrasonic images is presented. Instead of applying BSVD directly to the ultrasonic image, we propose to apply BSVD to the noisy edge image obtained from the difference between the logarithmic transformation of the original image and a blurred version of it. The recovered image is formed by combining the speckle-noise-free edge image with the blurred version of the image. Finally, an exponential transformation is applied in order to obtain the reconstructed image. To evaluate our algorithm against well-known algorithms such as the Lee filter, Kuan filter, homomorphic Wiener filter, median filter and wavelet soft thresholding, four image quality measurements are used: Mean Square Error (MSE), Signal to MSE (S/MSE), edge preservation (β), and correlation measurement (ρ). The results clearly show that the proposed algorithm outperforms the other methods in terms of both quantitative and subjective assessments.

  5. Metabolic flux estimation--a self-adaptive evolutionary algorithm with singular value decomposition.

    PubMed

    Yang, Jing; Wongsa, Sarawan; Kadirkamanathan, Visakan; Billings, Stephen A; Wright, Phillip C

    2007-01-01

    Metabolic flux analysis is important for metabolic system regulation and intracellular pathway identification. A popular approach for intracellular flux estimation involves using 13C tracer experiments to label states that can be measured by nuclear magnetic resonance spectrometry or gas chromatography mass spectrometry. However, the bilinear balance equations derived from 13C tracer experiments and the noisy measurements require a nonlinear optimization approach to obtain the optimal solution. In this paper, the flux quantification problem is formulated as an error-minimization problem with equality and inequality constraints through the 13C balance and stoichiometric equations. The stoichiometric constraints are transformed to a null space by singular value decomposition. Self-adaptive evolutionary algorithms are then introduced for flux quantification. The performance of the evolutionary algorithm is compared with ordinary least squares estimation by the simulation of the central pentose phosphate pathway. The proposed algorithm is also applied to the central metabolism of Corynebacterium glutamicum under lysine-producing conditions. A comparison between the results from the proposed algorithm and data from the literature is given. The complexity of a metabolic system with bidirectional reactions is also investigated by analyzing the fluctuations in the flux estimates when available measurements are varied. PMID:17277420
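
    The null-space transformation mentioned above is a one-liner with NumPy's SVD; the toy stoichiometric matrix below is purely illustrative and unrelated to the pentose phosphate or C. glutamicum networks.

        import numpy as np

        def null_space(S, tol=1e-10):
            """Orthonormal basis of {v : S v = 0}, taken from the right singular
            vectors whose singular values are numerically zero."""
            _, s, Vt = np.linalg.svd(S)
            rank = int(np.sum(s > tol * s[0]))
            return Vt[rank:].T                     # columns span the null space

        # Toy stoichiometric matrix (rows: metabolites, columns: reactions)
        S = np.array([[1, -1,  0,  0],
                      [0,  1, -1, -1],
                      [0,  0,  1, -1]], dtype=float)
        N = null_space(S)
        # Any flux vector with S v = 0 can be written v = N @ z, so the optimizer can
        # search over the lower-dimensional z while satisfying the constraints exactly.
        print(N.shape, np.allclose(S @ N, 0.0))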

  6. A 64-bit orthorectification algorithm using fixed-point arithmetic

    NASA Astrophysics Data System (ADS)

    French, Joseph C.; Balster, Eric J.; Turri, William F.

    2013-10-01

    As the cost of imaging systems has decreased, their quality and size have increased. This dynamic has made many aerial imaging applications practical, such as coastline monitoring and vegetation indexing. Orthorectification is required for many of these applications; however, it is also computationally expensive. The computational cost is due to the floating-point operations and divisions inherent in the orthorectification process. Two novel algorithm modifications are proposed which significantly reduce the computational cost. The first modification uses fixed-point arithmetic in place of the floating-point operations. The second replaces the division with a multiplication by the inverse. The result is a 2x increase in throughput while remaining within 15% of a pixel in position.
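
    The two modifications can be illustrated with a tiny fixed-point sketch; Python integers in a Q16.16 format stand in here for the paper's 64-bit implementation, and the constant being divided by is an arbitrary example value.

        FRAC_BITS = 16                       # Q16.16 fixed-point format
        ONE = 1 << FRAC_BITS

        def to_fixed(x):
            return int(round(x * ONE))

        def to_float(q):
            return q / ONE

        def fx_mul(a, b):
            return (a * b) >> FRAC_BITS      # fixed-point multiply

        def fx_div_by_const(a, c):
            """Replace a per-pixel division by multiplying with the precomputed
            fixed-point inverse of the constant."""
            inv = to_fixed(1.0 / c)
            return fx_mul(a, inv)

        x = to_fixed(1234.5678)
        print(to_float(fx_div_by_const(x, 3.7)), 1234.5678 / 3.7)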

  7. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. The complexity of the raw data, as given by the scanner restitution, leads to several problems in design and 3D-modelling starting from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, requiring increasingly sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form among some definite geometric/architectonic shapes. Such a process consists of several steps: first, the definition of plane sections and the characterization of their architecture; second, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, … ). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectonical section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries, such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.

  8. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, while their copyright protection problems attract more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks, such as rotation, cropping, smoothing and adding noise, the projection of the model's point set is chosen as the carrier of the watermark in the presented algorithm, where the watermark contains copyright information such as logos and text. The projections of the model's point set onto the x, y and z planes are calculated respectively. Before the watermark embedding process, the original watermark is scrambled by a key. Each projection is singular-value decomposed, and the scrambled watermark is embedded into the SVD (singular value decomposition) domain of the x, y and z projections respectively. After that, we use the watermarked x, y and z projections to recover the vertices of the model, and the watermarked model is obtained. Only the legal user can remove the watermark from the watermarked models using the private key. Experiments are presented in the paper to show that the proposed algorithm has good performance against various malicious attacks.

  9. Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.

    2004-01-01

    This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.

  10. Point of Care and Factor Concentrate-Based Coagulation Algorithms

    PubMed Central

    Theusinger, Oliver M.; Stein, Philipp; Levy, Jerrold H.

    2015-01-01

    In recent years it has become evident that the use of blood products should be reduced whenever possible. There is increasing evidence of serious adverse events, including higher mortality and morbidity, related to transfusions. The use of point of care (POC) devices integrated into algorithms is one of the important mechanisms to limit blood product exposure. Any type of algorithm, especially the POC-based ones, allows goal-directed transfusion of blood products and even better targeted factor concentrate substitution. Different types of algorithms in different surgical settings (cardiac surgery, trauma, liver surgery, etc.) have been established, with growing interest in their use as they offer objective therapy for the management and reduction of blood product use. The use of POC devices with evidence-based algorithms is important in the bleeding patient independent of the origin of the bleeding (traumatic vs. surgical). The use of factor concentrates compared to classical blood products can be cost-saving, beneficial for the patient, and in agreement with the WHO-requested standard of care. Given the current evidence in the literature, the empiric and uncontrolled use of blood products such as fresh frozen plasma, red blood cells, and platelets without POC monitoring should no longer be followed. Furthermore, the use of factor concentrates may provide better outcomes and potential for cost saving. PMID:26019707

  11. A propagating mode extraction algorithm for microwave waveguide using variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Yin, Aijun; Ren, Hongji

    2015-09-01

    A microwave propagating-mode extraction algorithm is proposed for microwave waveguides using variational mode decomposition (VMD). The reflected signal acquired by the waveguide can be seen as a mixture of the propagating mode and evanescent modes. The propagating mode contains information regarding defects, and the evanescent modes can be treated as noise. By using VMD, the propagating mode can be extracted. Currently, most decomposition models are limited by a lack of mathematical theory, by recursive sifting that does not allow backward error correction, or by an inability to cope properly with noise. In VMD, the bands are determined adaptively and the corresponding modes are estimated concurrently. An ensemble of modes is derived, and these modes collectively reproduce the input signal while each is smoothed after demodulation into baseband. The proposed model is particularly robust to sampling and noise. The bridge between the physical and mathematical models is demonstrated. A coated-steel defect detection experiment is conducted using an X-band open-ended rectangular waveguide to evaluate the efficacy of the VMD method. Two samples are examined. The steel sample with a hole has a regular and clear defect, whereas the defect in the peened steel sample is fuzzy. For both samples, the VMD results accurately identify the defects.

  12. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of matching algorithms for craft navigation, such as speed, accuracy and adaptability, a fast key-point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved fast key-point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. The Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This approach to detecting key points is characterized by a small amount of computation, high positioning accuracy and strong anti-noise ability; (2) PCA-SIFT is utilized to describe the key points. A 128-dimensional vector is formed for each extracted key point using the SIFT method. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this feature space to form a low-dimensional vector. The key points were then re-described by these dimension-reduced vectors. After the PCA reduction, the descriptor was reduced from the original 128 dimensions to 20, which lowers the dimensionality of the approximate nearest-neighbour search and thereby increases the overall speed; (3) The distance ratio between the nearest neighbour and the second nearest neighbour is used as the measurement criterion for initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods (e.g. RANSAC (random sample consensus) and Hough transform clustering) used for eliminating falsely matched point pairs, a heuristic local geometric restriction

  13. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm.

    PubMed

    Shin, Hyun-Ho; Yoon, Woong-Sup

    2008-07-01

    An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids so the liquid boiling problem was avoided with no separate use of temperature controlling methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit with strong evaporation on the surface of the liquids, while rendering a slender jet. The elevation of the injection speed and the wall temperature causes an activation of the surface evaporation concurrent with reduction in the jet breakup length and the drop size. PMID:19051924

  14. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DEMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DEMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  15. New Advances in the Study of the Proximal Point Algorithm

    NASA Astrophysics Data System (ADS)

    Moroşanu, Gheorghe

    2010-09-01

    Consider in a real Hilbert space H the inexact, Halpern-type, proximal point algorithm x_{n+1} = α_n u + (1 − α_n) J_{β_n} x_n + e_n, n = 0,1,…, (H-PPA), where u, x_0 ∈ H are given points, J_{β_n} = (I + β_n A)^{-1} is the resolvent of a given maximal monotone operator A, and (e_n) is the error sequence, under new assumptions on α_n ∈ (0,1) and β_n ∈ (0,1). Several strong convergence results for the H-PPA are presented under the general condition that the error sequence converges strongly to zero, thus improving the classical Rockafellar summability condition on (‖e_n‖) that has been used extensively so far for different versions of the proximal point algorithm. Our results extend and improve some recent ones. These results can be applied to approximate minimizers of convex functionals. Convergence rate estimates are established for a sequence approximating the minimum value of such a functional.

  16. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
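
    The benefit of the upper block-triangular (time-layered) structure can be seen in a toy sketch: because every edge goes forward in time, reachable sets and earliest arrival times follow from a single forward sweep over the time steps rather than a search over the whole space-time graph. The wind-driven edge lists below are invented placeholders, not Titan or Venus wind data.

        def reachability(edges_per_step, start_nodes):
            """edges_per_step[t] lists pairs (u, v): a balloon at node u at time t can
            drift or steer to node v at time t+1.  Staying put is allowed in this toy
            model, so the reachable set only grows."""
            earliest = {n: 0 for n in start_nodes}
            current = set(start_nodes)
            for t, edges in enumerate(edges_per_step):
                nxt = set()
                for u, v in edges:
                    if u in current:
                        nxt.add(v)
                        earliest.setdefault(v, t + 1)   # first time v becomes reachable
                current |= nxt
            return earliest

        # Toy three-step wind field over surface locations A-D
        steps = [[("A", "B")], [("B", "C"), ("A", "B")], [("C", "D")]]
        print(reachability(steps, start_nodes={"A"}))   # {'A': 0, 'B': 1, 'C': 2, 'D': 3}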

  17. MOEA/D-ACO: a multiobjective evolutionary algorithm using decomposition and AntColony.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2013-12-01

    Combining ant colony optimization (ACO) and the multiobjective evolutionary algorithm (EA) based on decomposition (MOEA/D), this paper proposes a multiobjective EA, i.e., MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single-objective optimization problems. Each ant (i.e., agent) is responsible for solving one subproblem. All the ants are divided into a few groups, and each ant has several neighboring ants. An ant group maintains a pheromone matrix, and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group's pheromone matrix, its own heuristic information matrix, and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two sets of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms the MOEA/D with conventional genetic operators and local search on all the nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to the good performance of MOEA/D-ACO for the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than the BicriterionAnt on all the 12 test instances. We also evaluate the effects of grouping, neighborhood, and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that the reactive search optimization scheme, i.e., the "learning while optimizing" principle, is effective in improving multiobjective optimization algorithms. PMID:23757576
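
    The decomposition step that MOEA/D-style algorithms share can be sketched in a few lines: a weighted Tchebycheff aggregation turns a two-objective problem into a set of scalar subproblems, each of which picks its own best solution. The toy objectives, weight vectors and grid search below are illustrative only; they do not reproduce the ant-colony machinery of MOEA/D-ACO.

        import numpy as np

        def tchebycheff(objectives, weight, ideal):
            """Scalar fitness of one solution for one subproblem: the weighted
            Tchebycheff distance to the ideal point (smaller is better)."""
            return np.max(weight * np.abs(objectives - ideal))

        def f(x):                                    # two conflicting toy objectives on [0, 1]
            return np.array([x ** 2, (x - 1.0) ** 2])

        weights = [np.array([w, 1.0 - w]) for w in np.linspace(0.05, 0.95, 10)]
        ideal = np.array([0.0, 0.0])
        candidates = np.linspace(0.0, 1.0, 201)

        # Each subproblem (weight vector) selects its own best candidate; together the
        # selections approximate the Pareto front, which is what decomposition exploits.
        best = [min(candidates, key=lambda x, w=w: tchebycheff(f(x), w, ideal)) for w in weights]
        print(np.round(best, 3))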

  18. A Three-level BDDC algorithm for saddle point problems

    SciTech Connect

    Tu, X.

    2008-12-10

    BDDC algorithms have previously been extended to the saddle point problems arising from mixed formulations of elliptic and incompressible Stokes problems. In these two-level BDDC algorithms, all iterates are required to be in a benign space, a subspace in which the preconditioned operators are positive definite. This requirement can lead to large coarse problems, which have to be generated and factored by a direct solver at the beginning of the computation and can ultimately become a bottleneck. An additional level is introduced in this paper to solve the coarse problem approximately and to remove this difficulty. This three-level BDDC algorithm keeps all iterates in the benign space, and conjugate gradient methods can therefore be used to accelerate the convergence. This work is an extension of the three-level BDDC methods for standard finite element discretizations of elliptic problems, and the same rate of convergence is obtained for the mixed formulation of the same problems. An estimate of the condition number for this three-level BDDC method is provided, and numerical experiments are discussed.

  19. A novel algorithm for generating libration point orbits about the collinear points

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Shan, Jinjun

    2014-09-01

    This paper presents a numerical algorithm that can generate long-term libration point orbits (LPOs) and the transfer orbits from parking orbits to the LPOs in the circular-restricted three-body problem (CR3BP) and the full solar system model without initial guesses. The families of quasi-periodic LPOs in the CR3BP can also be constructed with this algorithm. By using the dynamical behavior of the LPO, the transfer orbit from the parking orbit to the LPO is generated using a bisection method. At the same time, a short segment of the target LPO connected with the transfer orbit is obtained; the short segment of the LPO is then extended by correcting the state towards its adjacent point on the stable manifold of the target LPO with a differential evolution algorithm. By implementing the correction strategy repeatedly, the LPO can be extended to any length as needed. Moreover, combined with a continuation procedure, this algorithm can be used to generate the families of quasi-periodic LPOs in the CR3BP.

  20. Noise reduction in Doppler ultrasound signals using an adaptive decomposition algorithm.

    PubMed

    Zhang, Yufeng; Wang, Le; Gao, Yali; Chen, Jianhua; Shi, Xinling

    2007-07-01

    A novel de-noising method for improving the signal-to-noise ratio (SNR) of Doppler ultrasound blood flow signals, called the matching pursuit method, is proposed. Using this method, the Doppler ultrasound signal is first decomposed into a linear expansion of waveforms, called time-frequency atoms, which are selected from a redundant dictionary of Gabor functions. Subsequently, a decay-parameter-based algorithm is employed to determine the number of decomposition iterations. Finally, the de-noised Doppler signal is reconstructed using the selected components. The SNR improvement, the amount of signal component lost, and the maximum-frequency estimation precision on simulated Doppler blood flow signals were used to compare the performance of the wavelet, wavelet packet and matching pursuit de-noising algorithms. From the simulation and clinical experiment results, it is concluded that the matching pursuit approach performs better than the DWT and WP methods for Doppler ultrasound signal de-noising. PMID:16996774
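
    A generic matching pursuit loop is short enough to sketch; the Gabor-like dictionary, the fixed number of atoms (standing in for the decay-parameter stopping rule) and the synthetic test signal below are illustrative assumptions, not the authors' configuration for Doppler data.

        import numpy as np

        def gabor_atom(n, center, freq, width):
            t = np.arange(n)
            g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
            return g / np.linalg.norm(g)

        def matching_pursuit(signal, dictionary, n_atoms=10):
            """Greedy decomposition: repeatedly pick the atom most correlated with the
            residual, accumulate its contribution, and subtract it from the residual."""
            residual = signal.astype(float).copy()
            recon = np.zeros_like(residual)
            for _ in range(n_atoms):
                corr = dictionary @ residual            # inner products with all atoms
                k = np.argmax(np.abs(corr))
                recon += corr[k] * dictionary[k]
                residual -= corr[k] * dictionary[k]
            return recon, residual

        n = 256
        atoms = np.array([gabor_atom(n, c, f, 12.0)
                          for c in range(16, n, 32) for f in (0.05, 0.1, 0.2)])
        clean = 0.8 * atoms[5] + 0.5 * atoms[12]
        noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(n)
        denoised, _ = matching_pursuit(noisy, atoms, n_atoms=8)
        print("error before:", np.linalg.norm(noisy - clean),
              "after MP:", np.linalg.norm(denoised - clean))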

  1. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. First, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Second, the LiveWire shortest path is calculated using a direction search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, thus reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region-based iterative backward projection method using neighborhood pixel polling is applied to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantage of the Haar wavelet transform, whose image decomposition and reconstruction are fast and consistent with the texture features of the image, with the advantage of the optimal path search based on the control point set direction search, which reduces the time complexity of the original algorithm. The algorithm can therefore improve the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the methods mentioned above play a large role in improving the execution efficiency and robustness of the algorithm.

  2. A maximum power point tracking algorithm for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact closed-form solution for the MPP has not been published. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-after MPP. It changes with the incident irradiation and temperature, and hence an algorithm that attempts to maintain the MPP should be adaptive in nature, with fast convergence and minimal misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance, we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
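
    The tangency condition described above (dP/dV = 0 on the device characteristic) can be sketched numerically with a toy single-diode PV model and a bisection search; the diode parameters below are illustrative, not the MSX-60 datasheet values, and this stands in for, rather than reproduces, the paper's Lagrange-multiplier formulation and the DC-DC converter stage.

        import numpy as np

        def pv_current(v, i_ph=3.8, i_0=3e-7, n=1.3, v_t=0.0257, n_cells=36):
            """Simplified single-diode model: photocurrent minus diode current."""
            return i_ph - i_0 * (np.exp(v / (n * v_t * n_cells)) - 1.0)

        def maximum_power_point(v_lo=0.0, v_hi=22.0, tol=1e-6):
            """At the MPP, dP/dV = 0 with P = V * I(V); the derivative is positive near
            short circuit and negative near open circuit, so bisection brackets the root."""
            def dp_dv(v, h=1e-5):
                return ((v + h) * pv_current(v + h) - (v - h) * pv_current(v - h)) / (2 * h)
            while v_hi - v_lo > tol:
                v_mid = 0.5 * (v_lo + v_hi)
                if dp_dv(v_mid) > 0:
                    v_lo = v_mid
                else:
                    v_hi = v_mid
            v_mpp = 0.5 * (v_lo + v_hi)
            return v_mpp, v_mpp * pv_current(v_mpp)

        v_mpp, p_mpp = maximum_power_point()
        print(f"V_mpp = {v_mpp:.2f} V, P_mpp = {p_mpp:.1f} W")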

  3. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    PubMed

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. PMID:26953177
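
    The matched filter mentioned above as the standard detector is compact enough to sketch; the toy data cube, implanted plume strength and threshold are placeholders, and the MIF pre- and post-processing proposed in the paper is not reproduced here.

        import numpy as np

        def matched_filter_scores(cube, target_spectrum):
            """Classic matched filter: whiten with the background covariance and project
            each mean-centred pixel onto the known target signature."""
            pixels = cube.reshape(-1, cube.shape[-1])            # (n_pixels, n_bands)
            mu = pixels.mean(axis=0)
            cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
            w = np.linalg.solve(cov, target_spectrum)
            w /= target_spectrum @ w                             # a pure target pixel scores ~1
            return ((pixels - mu) @ w).reshape(cube.shape[:-1])

        rng = np.random.default_rng(0)
        cube = rng.normal(size=(64, 64, 30))                     # toy background clutter
        target = rng.normal(size=30)                             # known plume signature
        cube[10:20, 10:20] += 0.5 * target                       # implanted weak plume
        scores = matched_filter_scores(cube, target)
        detections = scores > 0.25                               # placeholder threshold
        print("hit rate in plume:", detections[10:20, 10:20].mean(),
              "flagged fraction overall:", detections.mean())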

  4. A PARALIND Decomposition-Based Coherent Two-Dimensional Direction of Arrival Estimation Algorithm for Acoustic Vector-Sensor Arrays

    PubMed Central

    Zhang, Xiaofei; Zhou, Min; Li, Jianfeng

    2013-01-01

    In this paper, we combine the acoustic vector-sensor array parameter estimation problem with the parallel profiles with linear dependencies (PARALIND) model, which was originally applied to biology and chemistry. Exploiting the PARALIND decomposition approach, we propose a blind coherent two-dimensional direction of arrival (2D-DOA) estimation algorithm for arbitrarily spaced acoustic vector-sensor arrays subject to unknown locations. The proposed algorithm works well to achieve automatically paired azimuth and elevation angles for coherent and incoherent angle estimation of acoustic vector-sensor arrays, as well as the paired correlated matrix of the sources. Our algorithm, in contrast with conventional coherent angle estimation algorithms such as the forward backward spatial smoothing (FBSS) estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, not only has much better angle estimation performance, even for closely-spaced sources, but is also available for arbitrary arrays. Simulation results verify the effectiveness of our algorithm. PMID:23604030

  5. Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

    NASA Astrophysics Data System (ADS)

    Sayadi, Taraneh; Schmid, Peter J.

    2016-03-01

    Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes, able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of Demmel et al. (SIAM J Sci Comput 34(1):206-239, 2012), which is further used as a basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimum required communication make it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
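
    The key observation behind the TSQR approach is that the SVD of a tall-and-skinny matrix can be recovered from a reduction over row blocks: each block is QR-factorized independently, the stacked R factors are factorized again, and an ordinary SVD of the final small R supplies the singular values and right singular vectors. The serial NumPy sketch below illustrates this reduction idea; it is not the parallel TSQR implementation of Demmel et al.

```python
import numpy as np

def tsqr_svd(a, n_blocks=4):
    """SVD of a tall-and-skinny matrix via a (serial) TSQR-style reduction.

    Each row block is QR-factorized independently; the stacked R factors are
    QR-factorized again, and the SVD of the final small R gives the singular
    values and right singular vectors of a. Left singular vectors are then
    recovered as U = A V diag(1/s)."""
    blocks = np.array_split(a, n_blocks, axis=0)
    r_stack = np.vstack([np.linalg.qr(b, mode="reduced")[1] for b in blocks])
    r_final = np.linalg.qr(r_stack, mode="reduced")[1]        # small n x n matrix
    _, s, vt = np.linalg.svd(r_final)
    u = a @ vt.T / s                                          # m x n left singular vectors
    return u, s, vt

rng = np.random.default_rng(0)
a = rng.standard_normal((10000, 20))          # tall-and-skinny data matrix
u, s, vt = tsqr_svd(a)
print(np.allclose(a, (u * s) @ vt, atol=1e-8))
```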

  6. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computation domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description will contain no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
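
    The geometric kernel of such a projection is the closest point on a single triangle; given a coarse global triangulation, a brute-force projection simply takes the minimum over all triangles. The Python sketch below (standard barycentric region tests) shows only that kernel, not the accelerated search or the patch-connectivity construction described above.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c); standard barycentric region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                  # vertex region A
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                  # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab          # edge AB
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                  # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac          # edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)   # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)                 # face interior

def project_onto_mesh(p, vertices, triangles):
    """Brute-force projection of p onto a triangulated surface."""
    candidates = [closest_point_on_triangle(p, *vertices[t]) for t in triangles]
    return min(candidates, key=lambda q: np.linalg.norm(q - p))

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
tris = [np.array([0, 1, 2])]
print(project_onto_mesh(np.array([0.2, 0.2, 1.0]), verts, tris))   # -> [0.2 0.2 0. ]
```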

  7. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated choice of acupuncture points for EHF-therapy is offered. The prescription formed by the algorithm for the automated choice of points for acupunctural action is recommendatory in character. Clinical investigations showed that application of the developed algorithm in EHF-therapy makes it possible to normalize the energetic state of the meridians and to solve many problems of organism functioning effectively.

  8. USER'S GUIDE FOR MPTER, A MULTIPLE POINT GAUSSIAN DISPERSION ALGORITHM WITH OPTIONAL TERRAIN ADJUSTMENT

    EPA Science Inventory

    The information presented in this user's guide is directed to air pollution scientists interested in applying air quality simulation models. MPTER is the designation for Multiple Point source algorithm with TERrain adjustments. This algorithm is useful for estimating air quality ...

  9. Asynchronous space-time algorithm based on a domain decomposition method for structural dynamics problems on non-matching meshes

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Matouš, Karel

    2016-02-01

    Large-scale practical engineering problems featuring localized phenomena often benefit from local control of mesh and time resolutions to efficiently capture the spatial and temporal scales of interest. To this end, we propose an asynchronous space-time algorithm based on a domain decomposition method for structural dynamics problems on non-matching meshes. The three-field algorithm is based on the dual-primal-like domain decomposition approach utilizing the localized Lagrange multipliers along the space and time common-refinement-based interface. The proposed algorithm is parallel in nature and well suited for a heterogeneous computing environment. Moreover, two levels of parallelism are embedded in this novel scheme. For linear dynamical problems, the algorithm is unconditionally stable, shows an optimal order of convergence with respect to space and time discretizations, and ensures conservation of mass, momentum and energy across the non-matching grid interfaces. The method of manufactured solutions is used to verify the implementation, and an engineering application is considered, where a sandwich plate is impacted by a projectile.

  10. LIFT: a nested decomposition algorithm for solving lower block triangular linear programs. Report AMD-859. [In PL/I for IBM 370]

    SciTech Connect

    Ament, D; Ho, J; Loute, E; Remmelswaal, M

    1980-06-01

    Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)

  11. Technical Note: MRI only prostate radiotherapy planning using the statistical decomposition algorithm

    SciTech Connect

    Siversson, Carl; Nordström, Fredrik; Nilsson, Terese; Nyholm, Tufve; Jonsson, Joakim; Gunnlaugsson, Adalsteinn; Olsson, Lars E.

    2015-10-15

    Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations, directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2 weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to assure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean average error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. Average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning. The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties

  12. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
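
    As a reminder of the algebra behind the preconditioner, the 2 x 2 block reordering leads to the Schur complement S = A22 - A21 A11^{-1} A12 acting on the interface unknowns. The toy dense NumPy sketch below carries out that elimination exactly on a small system; the approximate Schur constructions, sparsity, and MPI parallelism of the paper are not reproduced.

```python
import numpy as np

def schur_solve(a11, a12, a21, a22, f1, f2):
    """Solve the reordered 2x2 block system
        [A11 A12] [x1]   [f1]
        [A21 A22] [x2] = [f2]
    by eliminating the subdomain unknowns x1 and solving the Schur complement
    system (A22 - A21 A11^{-1} A12) x2 = f2 - A21 A11^{-1} f1 for the interface x2."""
    a11_inv_f1 = np.linalg.solve(a11, f1)
    a11_inv_a12 = np.linalg.solve(a11, a12)
    s = a22 - a21 @ a11_inv_a12                       # Schur complement (dense here)
    x2 = np.linalg.solve(s, f2 - a21 @ a11_inv_f1)    # interface unknowns
    x1 = a11_inv_f1 - a11_inv_a12 @ x2                # back-substitute subdomain unknowns
    return x1, x2

rng = np.random.default_rng(1)
n1, n2 = 8, 3                                         # subdomain vs interface unknowns
a = rng.standard_normal((n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)
f = rng.standard_normal(n1 + n2)
x1, x2 = schur_solve(a[:n1, :n1], a[:n1, n1:], a[n1:, :n1], a[n1:, n1:], f[:n1], f[n1:])
print(np.allclose(a @ np.concatenate([x1, x2]), f))
```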

  13. A Novel Tracking Algorithm via Feature Points Matching

    PubMed Central

    Luo, Nan; Sun, Quansen; Chen, Qiang; Ji, Zexuan; Xia, Deshen

    2015-01-01

    Visual target tracking is a primary task in many computer vision applications and has been widely studied in recent years. Among all the tracking methods, the mean shift algorithm has attracted extraordinary interest and been well developed in the past decade due to its excellent performance. However, it is still challenging for color-histogram-based algorithms to deal with complex target tracking. Therefore, algorithms based on other distinguishing features are highly desirable. In this paper, we propose a novel target tracking algorithm based on mean shift theory, in which a new type of image feature is introduced and utilized to find the corresponding region between neighboring frames. The target histogram is created by clustering the features obtained in the extraction strategy. Then, the mean shift process is adopted to calculate the target location iteratively. Experimental results demonstrate that the proposed algorithm can deal with challenging tracking situations such as partial occlusion, illumination change, scale variations, object rotation and complex background clutter. Meanwhile, it outperforms several state-of-the-art methods. PMID:25617769

  14. Formulation and error analysis for a generalized image point correspondence algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  15. Singular value decomposition-based reconstruction algorithm for seismic traveltime tomography.

    PubMed

    Song, L P; Zhang, S Y

    1999-01-01

    A reconstruction method is given for seismic transmission traveltime tomography. The method is implemented via the combination of singular value decomposition, appropriate weighting matrices, and a variable regularization parameter. The problem is scaled through the weighting matrices so that the singular spectrum is normalized. To match the normalized singular values, the regularization parameter varies within the interval [0, 1], increasing linearly with the singular-value index from a small initial value rather than being held fixed, so as to suppress the contributions of the smaller singular values. The experimental results show that the proposed method is superior to ordinary singular value decomposition (SVD) methods such as truncated SVD and Tikhonov regularization. PMID:18267533
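
    As a rough illustration of the idea (not the exact weighting matrices or parameter schedule of the paper), the sketch below forms a regularized SVD solution in which the damping grows linearly with the singular-value index, so that small singular values are filtered more strongly than in plain truncated SVD.

```python
import numpy as np

def svd_solve_variable_reg(a, b, lam_min=1e-3, lam_max=1.0):
    """Regularized least-squares solution of A x = b via the SVD,
    with a Tikhonov-like damping parameter that increases linearly
    with the singular-value index (stronger filtering of small modes)."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    lam = np.linspace(lam_min, lam_max, len(s))     # index-dependent regularization in [0, 1]
    filt = s**2 / (s**2 + (lam * s[0])**2)          # filter factors between 0 and 1
    coeffs = filt * (u.T @ b) / s
    return vt.T @ coeffs

rng = np.random.default_rng(2)
a = rng.standard_normal((60, 40))
x_true = rng.standard_normal(40)
b = a @ x_true + 0.01 * rng.standard_normal(60)
x = svd_solve_variable_reg(a, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative reconstruction error
```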

  16. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  17. A well-separated pairs decomposition algorithm for k-d trees implemented on multi-core architectures

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Reid, Ivan D.; Hobson, Peter R.

    2014-06-01

    Variations of k-d trees represent a fundamental data structure used in Computational Geometry with numerous applications in science. Examples include particle track fitting in the software of the LHC experiments, and simulations of N-body systems in the study of the dynamics of interacting galaxies, particle beam physics, and molecular dynamics in biochemistry. The many-body tree methods devised by Barnes and Hut in the 1980s and the Fast Multipole Method introduced in 1987 by Greengard and Rokhlin use variants of k-d trees to reduce the computation time upper bounds from O(n^2) to O(n log n) and even O(n). We present an algorithm that uses the principle of well-separated pairs decomposition to always produce compressed trees in O(n log n) work. We present and evaluate parallel implementations for the algorithm that can take advantage of multi-core architectures.

  18. An automatic registration algorithm for the scattered point clouds based on the curvature feature

    NASA Astrophysics Data System (ADS)

    He, Bingwei; Lin, Zeming; Li, Y. F.

    2013-03-01

    Object modeling by the registration of multiple range images has important applications in reverse engineering and computer vision. In order to register multi-view scattered point clouds, a novel curvature-based automatic registration algorithm is proposed in this paper, which can solve the registration problem for partially overlapping point clouds. For two sets of scattered point clouds, the curvature of each point is estimated by using the quadratic surface fitting method. The feature points that have the maximum local curvature variations are then extracted. The initial matching points are acquired by computing the Hausdorff distance of curvature, and then the circumference shape feature of the local surface is used to obtain the accurate matching points from the initial matching points. Finally, the rotation and translation matrices are estimated using the quaternion method, and an iterative algorithm is used to improve the registration accuracy. Experimental results show that the algorithm is effective.
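
    Once matching point pairs are available, the rigid transform can be recovered in closed form. The sketch below uses the SVD-based (Kabsch) least-squares solution rather than the quaternion estimator mentioned in the abstract; both recover the same optimal rotation and translation for a set of correspondences.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t,
    for two (N, 3) arrays of matched points (SVD/Kabsch solution)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

rng = np.random.default_rng(3)
src = rng.standard_normal((100, 3))
angle = 0.4
r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ r_true.T + np.array([0.5, -1.0, 2.0])
r, t = rigid_transform(src, dst)
print(np.allclose(r, r_true), np.allclose(t, [0.5, -1.0, 2.0]))
```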

  19. Point group identification algorithm in dynamic response analysis of nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Li, Tao; Chen, Jian-bing; Li, Jie

    2016-03-01

    The point group identification (PGI) algorithm is proposed to determine the representative point sets in response analysis of nonlinear stochastic dynamic systems. The PGI algorithm is employed to identify point groups and their feature points in an initial point set by combining subspace clustering analysis and the graph theory. Further, the representative point set of the random-variate space is determined according to the minimum generalized F-discrepancy. The dynamic responses obtained by incorporating the algorithm PGI into the probability density evolution method (PDEM) are compared with those by the Monte Carlo simulation method. The investigations indicate that the proposed method can reduce the number of the representative points, lower the generalized F-discrepancy of the representative point set, and also ensure the accuracy of stochastic structural dynamic analysis.

  20. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  1. An algorithm for point cluster generalization based on the Voronoi diagram

    NASA Astrophysics Data System (ADS)

    Yan, Haowen; Weibel, Robert

    2008-08-01

    This paper presents an algorithm for point cluster generalization. Four types of information, i.e. statistical, thematic, topological, and metric information are considered, and measures are selected to describe the corresponding types of information quantitatively in the algorithm, i.e. the number of points for statistical information, the importance value for thematic information, the Voronoi neighbors for topological information, and the distribution range and relative local density for metric information. Based on these measures, an algorithm for point cluster generalization is developed. Firstly, the point clusters are triangulated and a border polygon of the point clusters is obtained. Using the border polygon, some pseudo points are added to the original point clusters to form a new point set, and a range polygon that encloses all original points is constructed. Secondly, the Voronoi polygons of the new point set are computed in order to obtain the so-called relative local density of each point. Further, the selection probability of each point is computed using its relative local density and importance value, and the points to be deleted are marked as 'deleted' according to their selection probabilities and Voronoi neighboring relations. Thirdly, if the number of retained points does not yet satisfy the number computed by the Radical Law, the points marked as 'deleted' are physically deleted to form a new point set and the second step is repeated; otherwise, the pseudo points and the points marked as 'deleted' are physically deleted, and the generalized point clusters are obtained. Owing to the use of the Voronoi diagram, the algorithm is parameter-free and fully automatic. As our experiments show, it can be used in the generalization of point features arranged in clusters such as thematic dot maps and control points on cartographic maps.
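
    For reference, the Radical Law mentioned above is usually given as Töpfer's formula, which fixes the number of points to retain before the iterative marking and deletion; a minimal sketch (the selection-probability and Voronoi machinery of the algorithm is not reproduced here):

```python
import math

def radical_law(n_source, scale_source, scale_target):
    """Töpfer's Radical Law: number of objects to retain when generalizing
    from source scale 1:scale_source to target scale 1:scale_target."""
    return round(n_source * math.sqrt(scale_source / scale_target))

# e.g. 500 points on a 1:10,000 map generalized to 1:50,000
print(radical_law(500, 10_000, 50_000))   # about 224 points retained
```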

  2. An efficient, robust, domain-decomposition algorithm for particle Monte Carlo

    SciTech Connect

    Brunner, Thomas A. Brantley, Patrick S.

    2009-06-01

    A previously described algorithm [T.A. Brunner, T.J. Urbatsch, T.M. Evans, N.A. Gentile, Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo, Journal of Computational Physics 212 (2) (2006) 527-539] for doing domain decomposed particle Monte Carlo calculations in the context of thermal radiation transport has been improved. It has been extended to support cases where the number of particles in a time step are unknown at the beginning of the time step. This situation arises when various physical processes, such as neutron transport, can generate additional particles during the time step, or when particle splitting is used for variance reduction. Additionally, several race conditions that existed in the previous algorithm and could cause code hangs have been fixed. This new algorithm is believed to be robust against all race conditions. The parallel scalability of the new algorithm remains excellent.

  3. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-03-01

    A laser point cloud contains only intensity information, so color information must be obtained from another sensor for visual interpretation. Cameras can provide texture, color, and other information about the corresponding object. Assigning points the color of the corresponding pixels in digital images produces a color point-cloud, which is conducive to the visualization, classification and modeling of point-clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), and the principles and processes for generating a color point-cloud differ between systems. The most prominent feature of panoramic images is the 360-degree field of view in the horizontal direction, which captures as much of the image information around the camera as possible. In this paper, we introduce a method to generate a color point-cloud from a panoramic image and a laser point-cloud, and derive the equations of the correspondence between points in panoramic images and laser point-clouds. The fusion of the panoramic image and the laser point-cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae are correct.
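
    For an ideal equirectangular panorama, the collinearity condition above (camera center, image point on the sphere, and object point on one ray) reduces to converting each point's direction vector into longitude/latitude and then into pixel coordinates. The sketch below shows that mapping for points already expressed in the panoramic camera frame; the laser-to-camera pose transformation and any multi-camera calibration, which the paper's equations include, are assumed to have been applied beforehand.

```python
import numpy as np

def project_to_panorama(points_cam, width, height):
    """Project 3D points (N, 3), already in the panoramic camera frame, onto an
    ideal equirectangular image of size (width, height).

    Each point's direction from the camera center fixes a longitude (azimuth)
    and latitude (elevation), which map linearly to pixel columns and rows --
    this is the collinearity of center, sphere point and object point."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    r = np.linalg.norm(points_cam, axis=1)
    lon = np.arctan2(y, x)                  # [-pi, pi)
    lat = np.arcsin(z / r)                  # [-pi/2, pi/2]
    u = (lon + np.pi) / (2 * np.pi) * width
    v = (np.pi / 2 - lat) / np.pi * height
    return np.column_stack([u, v])

def colorize(points_cam, image):
    """Attach RGB from the panorama to each laser point (nearest-pixel lookup)."""
    h, w = image.shape[:2]
    uv = np.round(project_to_panorama(points_cam, w, h)).astype(int)
    uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)
    uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)
    return image[uv[:, 1], uv[:, 0]]
```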

  4. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented along the vertical, horizontal, and two diagonal directions. It incorrectly detected points on edges that do not lie along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under consideration, is then applied to the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as

  5. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  6. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation.

    PubMed

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  7. A path-following interior-point algorithm for linear and quadratic problems

    SciTech Connect

    Wright, S.J.

    1993-12-01

    We describe an algorithm for the monotone linear complementarity problem that converges for many positive, not necessarily feasible, starting points and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.

  8. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms.

    PubMed

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration. PMID:27036770
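
    The forward-linear-prediction stage can be summarized as fitting, by least squares, a predictor that estimates each sample from its previous p samples and replacing the signal with the predicted values; the EMD decomposition and mode-selection logic of the paper are not reproduced. A minimal NumPy sketch:

```python
import numpy as np

def flp_denoise(x, order=8):
    """Forward linear prediction: fit coefficients a so that
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order] (least squares),
    then return the one-step predictions as a de-noised estimate."""
    n = len(x)
    past = np.array([x[i - order:i][::-1] for i in range(order, n)])  # rows of past samples
    a, *_ = np.linalg.lstsq(past, x[order:], rcond=None)
    y = x.astype(float).copy()
    y[order:] = past @ a            # replace samples by their one-step predictions
    return y

rng = np.random.default_rng(4)
t = np.arange(2000) / 1000.0
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = flp_denoise(noisy, order=12)
print(np.std(noisy - clean), np.std(denoised - clean))   # error vs. the clean signal shrinks
```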

  9. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms

    NASA Astrophysics Data System (ADS)

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration.

  10. Improved scaling of time-evolving block-decimation algorithm through reduced-rank randomized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Tamascelli, D.; Rosenbach, R.; Plenio, M. B.

    2015-06-01

    When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, makes use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency with some real-world examples to which TEBD can be successfully applied and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
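
    The reduced-rank randomized SVD referred to above follows the usual range-finder pattern: multiply by a thin Gaussian test matrix, orthonormalize, and take a deterministic SVD of the small projected matrix. A minimal NumPy sketch in that spirit (this follows the generic Halko-Martinsson-Tropp recipe, not the specific RRSVD routine benchmarked in the paper):

```python
import numpy as np

def randomized_svd(a, rank, n_oversample=10, n_power_iter=2, seed=0):
    """Approximate rank-`rank` SVD of a via random range finding."""
    rng = np.random.default_rng(seed)
    n = a.shape[1]
    omega = rng.standard_normal((n, rank + n_oversample))   # Gaussian test matrix
    y = a @ omega
    for _ in range(n_power_iter):                           # power iterations sharpen the range
        y = a @ (a.T @ y)
    q, _ = np.linalg.qr(y)                                  # orthonormal basis for range(a)
    b = q.T @ a                                             # small (rank + p) x n matrix
    u_b, s, vt = np.linalg.svd(b, full_matrices=False)
    u = q @ u_b
    return u[:, :rank], s[:rank], vt[:rank]

rng = np.random.default_rng(5)
low_rank = rng.standard_normal((500, 15)) @ rng.standard_normal((15, 300))
a = low_rank + 1e-3 * rng.standard_normal((500, 300))
u, s, vt = randomized_svd(a, rank=15)
print(np.linalg.norm(a - (u * s) @ vt) / np.linalg.norm(a))   # small relative error
```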

  11. Iterative most-likely point registration (IMLP): a robust algorithm for computing optimal shape alignment.

    PubMed

    Billings, Seth D; Boctor, Emad M; Taylor, Russell H

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP's probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700

  12. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700

  13. Parallelized Characteristic Basis Finite Element Method (CBFEM-MPI)-A non-iterative domain decomposition algorithm for electromagnetic scattering problems

    SciTech Connect

    Ozgun, Ozlem Mittra, Raj; Kuzuoglu, Mustafa

    2009-04-01

    In this paper, we introduce a parallelized version of a novel, non-iterative domain decomposition algorithm, called Characteristic Basis Finite Element Method (CBFEM-MPI), for efficient solution of large-scale electromagnetic scattering problems, by utilizing a set of specially defined characteristic basis functions (CBFs). This approach is based on the decomposition of the computational domain into a number of non-overlapping subdomains wherein the CBFs are generated by employing a novel procedure, which differs from all those that have been used in the past. Clearly, the CBFs are obtained by calculating the fields radiated by a finite number of dipole-type sources, which are placed hypothetically along the boundary of the conducting object. The major advantages of the proposed technique are twofold: (i) it provides a substantial reduction in the matrix size, and thus, makes use of direct solvers efficiently and (ii) it enables the utilization of parallel processing techniques that considerably decrease the overall computation time. We illustrate the application of the proposed approach via several 3D electromagnetic scattering problems.

  14. Parameter Space of Fixed Points of the Damped Driven Pendulum Susceptible to Control of Chaos Algorithms

    NASA Astrophysics Data System (ADS)

    Dittmore, Andrew; Trail, Collin; Olsen, Thomas; Wiener, Richard J.

    2003-11-01

    We have previously demonstrated the experimental control of chaos in a Modified Taylor-Couette system with hourglass geometry (Richard J. Wiener et al., Phys. Rev. Lett. 83, 2340 (1999)). Identifying fixed points susceptible to algorithms for the control of chaos is key. We seek to learn about this process in the accessible numerical model of the damped, driven pendulum. Following Baker (Gregory L. Baker, Am. J. Phys. 63, 832 (1995)), we seek points susceptible to the OGY algorithm (E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990)). We automate the search for fixed points that are candidates for control. We present comparisons of the space of candidate fixed points with the bifurcation diagrams and Poincare sections of the system. We demonstrate control at fixed points which do not appear on the attractor. We also show that the control algorithm may be employed to shift the system between non-communicating branches of the attractor.

  15. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogeneous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are on the order of half a second on a desktop computer. The precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature. PMID:12502302

  16. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to the direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of the HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 x 11 to 19 x 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
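
    The "direct matrix inversion" baseline in this comparison is simply a per-pixel solve of a 2 x 2 linear system relating the low- and high-energy images to two basis materials, and it is the noisiest option precisely because no spatial information is used. A hedged NumPy sketch of that baseline only (the attenuation coefficients are placeholders, not calibrated values, and HYPR-NLM itself is not implemented):

```python
import numpy as np

# Placeholder attenuation-style coefficients for (low kVp, high kVp) x (water, iodine).
# Real values must come from calibration; these numbers are illustrative only.
M = np.array([[0.25, 4.0],
              [0.20, 2.0]])

def decompose(img_low, img_high):
    """Per-pixel direct matrix inversion for image-domain two-material decomposition.

    Solves M @ [water, iodine]^T = [mu_low, mu_high]^T independently at every pixel;
    noise in the inputs is amplified by the (ill-conditioned) inverse, which is why
    filtered/iterative methods such as HYPR-LR, Iter-DECT or HYPR-NLM are preferred."""
    m_inv = np.linalg.inv(M)
    stacked = np.stack([img_low, img_high], axis=-1)        # (H, W, 2)
    basis = stacked @ m_inv.T                               # (H, W, 2): water, iodine maps
    return basis[..., 0], basis[..., 1]

rng = np.random.default_rng(6)
water_true = np.ones((64, 64))
iodine_true = np.zeros((64, 64)); iodine_true[20:40, 20:40] = 0.01
mu_low = M[0, 0] * water_true + M[0, 1] * iodine_true + 0.002 * rng.standard_normal((64, 64))
mu_high = M[1, 0] * water_true + M[1, 1] * iodine_true + 0.002 * rng.standard_normal((64, 64))
water, iodine = decompose(mu_low, mu_high)
print(iodine[30, 30], iodine.std())   # recovered iodine value at a plume pixel, noisy iodine map std
```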

  17. Double-patterning decomposition, design compliance, and verification algorithms at 32nm hp

    NASA Astrophysics Data System (ADS)

    Tritchkov, Alexander; Glotov, Petr; Komirenko, Sergiy; Sahouria, Emile; Torres, Andres; Seoud, Ahmed; Wiaux, Vincent

    2008-10-01

    Double patterning (DP) technology is one of the main candidates for RET of critical layers at 32nm hp. DP technology is a strong RET technique that must be considered throughout the IC design and post tapeout flows. We present a complete DP technology strategy including a DRC/DFM component, physical synthesis support and mask synthesis. In particular, the methodology contains: - DRC-like layout DP compliance and design verification functions; - A parameterization scheme that codifies manufacturing knowledge and capability; - Judicious use of physical effect simulation to improve double-patterning quality; - An efficient, high capacity mask synthesis function for post-tapeout processing; - A verification function to determine the correctness and quality of a DP solution; Double patterning technology requires decomposition of the design to relax the pitch and effectively allows processing with k1 factors smaller than the theoretical Rayleigh limit of 0.25. The traditional DP process, Litho-Etch-Litho-Etch (LELE) [1], requires an additional develop and etch step, which eliminates the resolution degradation that occurs when multiple exposures are processed in the same resist layer. The theoretical k1 for a double-patterning technology applied to a 32nm half-pitch design using a 1.35NA 193nm imaging system is 0.44, whereas the k1 for single patterning of this same design would be 0.22 [2], which is sub-resolution. This paper demonstrates the methods developed at Mentor Graphics for double patterning design compliance and decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. It also demonstrates verification solution implementation in the chip design flow and post-tapeout flow.
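
    The quoted k1 values follow from the usual Rayleigh half-pitch scaling hp = k1 * lambda / NA; a quick check (the small difference from the quoted 0.44 is presumably rounding):

```latex
% Rayleigh half-pitch scaling: hp = k_1 \lambda / \mathrm{NA} \;\Rightarrow\; k_1 = hp \cdot \mathrm{NA} / \lambda
k_1^{\mathrm{single}} = \frac{32\,\mathrm{nm} \times 1.35}{193\,\mathrm{nm}} \approx 0.22,
\qquad
k_1^{\mathrm{double}} = \frac{64\,\mathrm{nm} \times 1.35}{193\,\mathrm{nm}} \approx 0.45 .
```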

  18. A fast algorithm for exact sequence search in biological sequences using polyphase decomposition

    PubMed Central

    Srikantha, Abhilash; Bopardikar, Ajit S.; Kaipa, Kalyan Kumar; Venkataraman, Parthasarathy; Lee, Kyusang; Ahn, TaeJin; Narayanan, Rangavittal

    2010-01-01

    Motivation: Exact sequence search allows a user to search for a specific DNA subsequence in a larger DNA sequence or database. It serves as a vital block in many areas such as Pharmacogenetics, Phylogenetics and Personal Genomics. As sequencing of genomic data becomes increasingly affordable, the amount of sequence data that must be processed will also increase exponentially. In this context, fast sequence search algorithms will play an important role in exploiting the information contained in the newly sequenced data. Many existing algorithms do not scale up well for large sequences or databases because of their high-computational costs. This article describes an efficient algorithm for performing fast searches on large DNA sequences. It makes use of hash tables of Q-grams that are constructed after downsampling the database, to enable efficient search and memory use. Time complexity for pattern search is reduced using beam pruning techniques. Theoretical complexity calculations and performance figures are presented to indicate the potential of the proposed algorithm. Contact: s.abhilash@samsung.com; ajit.b@samsung.com PMID:20823301
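
    The core data structure is a hash table mapping every length-Q substring of the database to its positions; a pattern is located by looking up one of its Q-grams and verifying the full match at each candidate position. The sketch below shows only this basic Q-gram index, without the polyphase downsampling and beam-pruning refinements of the paper.

```python
from collections import defaultdict

def build_qgram_index(text, q):
    """Hash table: q-gram -> list of starting positions in `text`."""
    index = defaultdict(list)
    for i in range(len(text) - q + 1):
        index[text[i:i + q]].append(i)
    return index

def search(pattern, text, index, q):
    """Exact occurrences of `pattern` (length >= q) using the q-gram index:
    look up the pattern's first q-gram, then verify each candidate position."""
    hits = []
    for pos in index.get(pattern[:q], []):
        if text[pos:pos + len(pattern)] == pattern:
            hits.append(pos)
    return hits

genome = "ACGTACGTTAGCACGTAG"
q = 4
idx = build_qgram_index(genome, q)
print(search("ACGTAG", genome, idx, q))   # -> [12]
```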

  19. Filtering of LIDAR Point Cloud Using a Strip Based Algorithm in Residential Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Arefi, H.; Gharib, Z.

    2014-10-01

    Several algorithms have been developed to automatically detect the bare earth in LIDAR point clouds, a task referred to as filtering. Previous experimental studies of filtering algorithms determined that algorithms tend to do well in flat and uncomplicated landscapes, while significant differences in filtering accuracy appear in landscapes containing steep slopes and discontinuities. A solution to this problem is the segmentation of ALS point clouds. In this paper a new segmentation algorithm is developed. The algorithm starts by slicing a point cloud into contiguous, parallel profiles in different directions. The points in each profile are then segmented into polylines based on distance and elevation proximity. The polylines are then linked together through their common points to obtain surface segments. In the final stage, the data are partitioned into windows in which the strips are used to analyse the points with regard to the height differences along them. In this way the whole dataset can be fully segmented into ground and non-ground measurements, sequentially via the strips, which makes the algorithm fast to implement.

  20. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  1. A point-cloud-based multiview stereo algorithm for free-viewpoint video.

    PubMed

    Liu, Yebin; Dai, Qionghai; Xu, Wenli

    2010-01-01

    This paper presents a robust multiview stereo (MVS) algorithm for free-viewpoint video. Our MVS scheme is totally point-cloud-based and consists of three stages: point cloud extraction, merging, and meshing. To guarantee reconstruction accuracy, point clouds are first extracted according to a stereo matching metric which is robust to noise, occlusion, and lack of texture. Visual hull information, frontier points, and implicit points are then detected and fused with point fidelity information in the merging and meshing steps. All aspects of our method are designed to counteract potential challenges in MVS data sets for accurate and complete model reconstruction. Experimental results demonstrate that our technique produces the most competitive performance among current algorithms under sparse viewpoint setups according to both static and motion MVS data sets. PMID:20224136

  2. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  3. Seismic small-scale discontinuity sparsity-constraint inversion method using a penalty decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jingtao; Peng, Suping; Du, Wenfeng

    2016-02-01

    We consider a sparsity-constraint inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these seismic small-scale discontinuities are hard to identify using currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smoothness features, we propose an effective L2-L0 norm model to improve their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast-convergent penalty decomposition method is employed. The proposed method achieves a significant improvement in enhancing seismic small-scale discontinuities. A numerical experiment and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
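
    One common way to treat an L2-L0 objective of the form min_x ||Ax - b||^2 + lambda*||x||_0 with a penalty decomposition is to introduce an auxiliary variable z, penalize ||x - z||^2, and alternate a least-squares update in x with a hard-thresholding update in z while the penalty weight grows. The toy NumPy sketch below illustrates that alternation on a generic sparse-recovery problem; it is not the plane-wave destruction / structure-enhancing workflow of the paper.

```python
import numpy as np

def penalty_decomposition_l0(a, b, lam=0.05, rho=1.0, rho_growth=1.5, n_outer=30, n_inner=10):
    """Toy penalty decomposition for min_x ||Ax - b||^2 + lam * ||x||_0.

    Introduce z and solve min ||Ax - b||^2 + lam*||z||_0 + rho*||x - z||^2 by
    alternating a linear least-squares step in x with hard thresholding in z,
    while the coupling penalty rho is gradually increased."""
    n = a.shape[1]
    x, z = np.zeros(n), np.zeros(n)
    ata, atb = a.T @ a, a.T @ b
    for _ in range(n_outer):
        for _ in range(n_inner):
            x = np.linalg.solve(ata + rho * np.eye(n), atb + rho * z)   # x-step (least squares)
            z = np.where(x**2 > lam / rho, x, 0.0)                      # z-step (hard threshold)
        rho *= rho_growth
    return z

rng = np.random.default_rng(7)
a = rng.standard_normal((100, 50))
x_true = np.zeros(50); x_true[[5, 20, 35]] = [1.0, -2.0, 1.5]
b = a @ x_true + 0.01 * rng.standard_normal(100)
x_hat = penalty_decomposition_l0(a, b)
print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 2))   # recovered support and coefficients
```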

  4. Dominant feature selection for the fault diagnosis of rotary machines using modified genetic algorithm and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; de Silva, Clarence W.

    2015-05-01

    This paper develops a novel dominant feature selection method using a genetic algorithm with a dynamic searching strategy. It is applied in the search for the most representative features in rotary mechanical fault diagnosis, and is shown to improve the classification performance with fewer features. First, empirical mode decomposition (EMD) is employed to decompose a vibration signal into intrinsic mode functions (IMFs), which represent the signal characteristics as simple oscillatory modes. Then, a modified genetic algorithm with variable-range encoding and a dynamic searching strategy is used to establish relationships between optimized feature subsets and the classification performance. Next, a statistical model that uses the receiver operating characteristic (ROC) is developed to select dominant features. Finally, a support vector machine (SVM) is used to classify different fault patterns. Two real-world problems, rotor-unbalance vibration and bearing corrosion, are employed to evaluate the proposed feature selection scheme and fault diagnosis system. Statistical results obtained by analyzing the two problems, and comparative studies with five well-known feature selection techniques, demonstrate that the method developed in this paper can achieve improvements in identification accuracy with lower feature dimensionality.

  5. Iterative stability analysis of spatial domain decomposition based on block Jacobi algorithm for the diamond-difference scheme

    NASA Astrophysics Data System (ADS)

    Anistratov, Dmitriy Y.; Azmy, Yousry Y.

    2015-09-01

    We study convergence of the integral transport matrix method (ITMM) based on a parallel block Jacobi (PBJ) iterative strategy for solving particle transport problems. The ITMM is a spatial domain decomposition method proposed for massively parallel computations. A Fourier analysis of the PBJ-based iterations applied to SN diamond-difference equations in 1D slab and 2D Cartesian geometries is performed. It is carried out for infinite-medium problems with homogeneous material properties. To analyze the performance of the ITMM with the PBJ algorithm and evaluate its potential in scalability we consider a limiting case of one spatial cell per subdomain. The analysis shows that in such limit the spectral radius of the iteration method is one without regard to values of the scattering ratio and optical thickness of the spatial cells. This implies lack of convergence in infinite medium. Numerical results of finite-medium problems are presented. They demonstrate effects of finite size of spatial domain on the performance of the iteration algorithm as well as its asymptotic behavior when the extent of the spatial domain increases. These numerical experiments also show that for finite domains iterative convergence to a finite criterion is achievable in a multiple of the sum of number of cells in each dimension.

  6. A Gabor subband decomposition ICA and MRF hybrid algorithm for infrared image reconstruction from subpixel shifted sequences

    NASA Astrophysics Data System (ADS)

    Yi-nan, Chen; Wei-qi, Jin; Ling-Xue, Wang; Lei, Zhao; Hong-sheng, Yu

    2009-03-01

    Blind image reconstruction, posed as a blind source separation problem, has recently been addressed with independent component analysis (ICA). Based on ICA theory, in this paper a high resolution image is reconstructed from low resolution, subpixel-shifted sequences captured by an infrared microscan imaging system. The algorithm has the attractive feature that neither prior knowledge of the blur kernel nor the values of the subpixel misregistrations between the input channels is required. The statistical independence in the image domain is improved by multiscale Gabor subband decompositions, which are designed to cover the whole spatial-frequency range while avoiding overlap between the subbands. Mutual information is employed to locate the subband with the least dependent components. Within a MAP estimation framework, we combine a super-Gaussian prior with a Markov random field to form a hybrid image distribution. This strategy helps estimate a separating matrix that extracts sources with the desired image properties, that is, sharp yet locally correlated. The proposed algorithm can reconstruct high resolution image sources that are not strictly independent, and its viability is demonstrated by computer simulations and real experiments.

  7. A new constrained fixed-point algorithm for ordering independent components

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Guo, Chonghui; Shi, Zhenwei; Feng, Enmin

    2008-10-01

    Independent component analysis (ICA) aims to recover a set of unknown mutually independent components (ICs) from their observed mixtures without knowledge of the mixing coefficients. In the classical ICA model there is an indeterminacy of the ICs with respect to permutation and dilation. Constrained ICA is one method for resolving this problem by introducing constraints into the classical ICA model. In this paper we first present a new constrained ICA model composed of three parts: a maximum likelihood criterion as the objective function, statistical measures as inequality constraints, and the normalization of the demixing matrix as equality constraints. Next, we incorporate the new fixed-point (newFP) algorithm into this constrained ICA model to construct a new constrained fixed-point algorithm. Computer simulations on synthesized signals and speech signals demonstrate that this combination can both eliminate the ICs' indeterminacy to a certain extent and provide better performance. Moreover, comparisons with the existing algorithm further verify the efficiency of our new algorithm and show that it is simpler to implement because it does not require a learning rate. Finally, the new algorithm is also applied to real-world fetal ECG data; the experimental results further indicate the efficiency of the new constrained fixed-point algorithm.
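
    For reference, the core of the standard (unconstrained) fixed-point FastICA update that the proposed algorithm builds on can be written in a few lines; the data generation, whitening step, and tanh nonlinearity below are illustrative choices, and the paper's inequality/equality constraints are not reproduced.

```python
import numpy as np

def fastica_one_unit(X, n_iter=200, tol=1e-8, seed=0):
    """One-unit fixed-point FastICA on whitened data X (d x n samples), g = tanh.
    Update: w <- E[x g(w'x)] - E[g'(w'x)] w, then normalize (removes dilation)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X                                   # projections, shape (n,)
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:          # converged up to sign
            return w_new
        w = w_new
    return w

# toy demo: two mixed non-Gaussian sources, whitened before the iteration
rng = np.random.default_rng(1)
s = np.vstack([np.sign(rng.standard_normal(5000)),   # binary (+/-1) source
               rng.uniform(-1.0, 1.0, 5000)])        # uniform source
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s           # observed mixtures
x -= x.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(np.cov(x))
X_white = np.diag(eigval ** -0.5) @ eigvec.T @ x     # whitening
w = fastica_one_unit(X_white)
print("estimated unmixing direction (one component):", np.round(w, 3))
```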

  8. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    Many space subdivision and space partitioning techniques are used in algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. For a convex polygon in E2, a simple Point-in-Polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon tests and line clipping, can be solved in a similar way.
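
    The flavour of such a preprocessing-for-constant-query-time scheme can be conveyed with a simple angular subdivision around an interior point; this sketch is a generic stand-in, not the paper's specific non-orthogonal subdivision, and it assumes a counter-clockwise convex polygon.

```python
import math
import numpy as np

class ConvexPolygonLocator:
    """Point-in-convex-polygon test with O(N) preprocessing and near-constant
    query time: vertex angles around an interior point are bucketed into a
    uniform table, so a query only tests the few edges covering its bucket."""

    def __init__(self, vertices, buckets_per_vertex=4):
        self.v = np.asarray(vertices, dtype=float)   # CCW-ordered convex polygon
        self.c = self.v.mean(axis=0)                 # interior reference point
        n = len(self.v)
        self.n_buckets = buckets_per_vertex * n
        ang = np.arctan2(self.v[:, 1] - self.c[1], self.v[:, 0] - self.c[0])
        # table[b] = indices of edges whose angular interval meets bucket b
        self.table = [[] for _ in range(self.n_buckets)]
        for i in range(n):
            b, b_end = self._bucket(ang[i]), self._bucket(ang[(i + 1) % n])
            while True:
                self.table[b].append(i)
                if b == b_end:
                    break
                b = (b + 1) % self.n_buckets

    def _bucket(self, angle):
        return int((angle + math.pi) / (2 * math.pi) * self.n_buckets) % self.n_buckets

    def contains(self, p):
        p = np.asarray(p, dtype=float)
        b = self._bucket(math.atan2(p[1] - self.c[1], p[0] - self.c[0]))
        for i in self.table[b]:                      # typically one or two edges
            v1, v2 = self.v[i], self.v[(i + 1) % len(self.v)]
            cross = (v2[0]-v1[0])*(p[1]-v1[1]) - (v2[1]-v1[1])*(p[0]-v1[0])
            if cross < 0:                            # p lies outside this edge
                return False
        return True

square = ConvexPolygonLocator([(0, 0), (2, 0), (2, 2), (0, 2)])
print(square.contains((1.0, 1.0)), square.contains((3.0, 1.0)))  # True False
```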

  9. Performance Evaluation of Different Ground Filtering Algorithms for Uav-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Serifoglu, C.; Gungor, O.; Yilmaz, V.

    2016-06-01

    Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they belong to. Hence, in the literature, various ground filtering algorithms have been reported to filter the LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from the aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms, namely Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performances. The MLS and ETEW algorithms were found to be the least successful. It was concluded that the point clouds generated from UAVs can be a good alternative to LiDAR data.

  10. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.

    PubMed

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  11. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  12. Registration of range data using a hybrid simulated annealing and iterative closest point algorithm

    SciTech Connect

    LUCK,JASON; LITTLE,CHARLES Q.; HOFF,WILLIAM

    2000-04-17

    The need to register data is abundant in applications such as: world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.
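
    For context, one plain iterative-closest-point step (nearest-neighbour correspondences followed by a closed-form rigid fit) looks as follows; the paper's hybrid wraps steps of this kind inside a simulated-annealing search with a robust error function, which is not reproduced in this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP step: match each source point to its nearest target point,
    then compute the best-fit rigid transform via the Kabsch/SVD solution."""
    tree = cKDTree(target)
    _, idx = tree.query(source)            # nearest-neighbour correspondences
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (det = +1)
    t = mu_t - R @ mu_s                    # translation
    return R, t

# toy demo: recover a small known rigid perturbation of the same point set
rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, (500, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
source = (target - np.array([0.05, 0.0, 0.02])) @ R_true
for _ in range(20):                        # iterate ICP steps to convergence
    R, t = icp_step(source, target)
    source = source @ R.T + t
print("residual RMS:", np.sqrt(((source - target) ** 2).sum(axis=1).mean()))
```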

  13. An optimized structure on FPGA of key point description in SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Chenyu; Peng, Jinlong; Zhu, En; Zou, Yuxin

    2015-12-01

    The SIFT algorithm is one of the most significant and effective algorithms for describing image features in the field of image matching. Implementing the SIFT algorithm in a hardware environment is clearly valuable but difficult. In this paper, we mainly discuss the realization of Key Point Description in the SIFT algorithm, along with the Matching process. In Key Point Description, we have proposed a new method of generating histograms to avoid the rotation of adjacent regions and ensure rotational invariance. In Matching, we replace the conventional Euclidean distance with the Hamming distance. The results of the experiments fully prove that the structure we propose is real-time, accurate, and efficient. Future work is still needed to improve its performance in harsher conditions.

  14. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and highly detailed surface information about terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. The point cloud generated from the pre-processed data must be classified by segmentation algorithms to distinguish terrain points from other points, followed by interpolation of the selected points to turn them into DEM data. Because of the high point density, the whole procedure is time-consuming and demands large computing resources, and it has therefore been the focus of much research. Hadoop is a distributed system infrastructure developed by the Apache Foundation; it contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is well suited to improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as a comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then compared. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation achieves high performance when memory is sufficient, whereas the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
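
    The map/reduce decomposition implied above can be illustrated with two plain functions: the mapper keys each ground point by its DEM grid cell and the reducer aggregates elevations per cell. The cell size, the mean-based gridding and the synthetic points are placeholder assumptions; a real Hadoop job would express the same pair of functions through the streaming or Java API.

```python
from collections import defaultdict
import numpy as np

CELL = 1.0  # DEM grid spacing in metres (illustrative value)

def map_phase(points):
    """Mapper: emit (grid-cell key, elevation) pairs for each ground point."""
    for x, y, z in points:
        yield (int(x // CELL), int(y // CELL)), z

def reduce_phase(pairs):
    """Reducer: collect elevations per cell and output their mean as the DEM value."""
    cells = defaultdict(list)
    for key, z in pairs:
        cells[key].append(z)
    return {key: float(np.mean(zs)) for key, zs in cells.items()}

# toy point cloud: 10,000 pseudo-LiDAR returns over a gentle slope
rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, (10_000, 2))
z = 0.02 * xy[:, 0] + 0.05 * rng.standard_normal(10_000)
dem = reduce_phase(map_phase(np.column_stack([xy, z])))
print("number of DEM cells:", len(dem))
print("elevation range of the grid:",
      round(min(dem.values()), 2), "to", round(max(dem.values()), 2))
```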

  15. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified airborne laser scanning (ALS) point cloud. For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider only the ALS points representing the last echo. For these points, the RGB values, intensity, the normal vectors, their mean values and the standard deviations are provided. Moreover, local and global height variations are taken into account as components of the feature vector. The feature vectors are calculated on the basis of a 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 representing a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we obtained two subsets of ALS points, one of which represents points belonging to the road network. The classification evaluation showed an overall accuracy of about 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
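
    A minimal version of the supervised classification stage can be sketched with scikit-learn; the three per-point features below (intensity, normal z-component, local height variation) and their synthetic distributions are invented stand-ins for the full feature vector described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# synthetic stand-in for ALS last-echo features: [intensity, normal_z, height_std]
rng = np.random.default_rng(0)
n = 4000
is_road = rng.integers(0, 2, n)                     # 1 = road point, 0 = other
intensity = np.where(is_road, rng.normal(40, 5, n), rng.normal(25, 10, n))
normal_z = np.where(is_road, rng.normal(0.98, 0.01, n), rng.normal(0.85, 0.1, n))
height_std = np.where(is_road, rng.normal(0.03, 0.01, n), rng.normal(0.4, 0.2, n))
X = np.column_stack([intensity, normal_z, height_std])
y = is_road

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```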

  16. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points from the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning vector data is carefully studied; the selected policy, which equalizes the algorithm complexity, is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we gain perfect efficiency using MAS and a policy of algorithm complexity equalization. PMID:26656598
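
    The LIC centre sought above can be approximated by searching for the interior point that is farthest from the polygon boundary. The brute-force grid search below (using Shapely) conveys that idea only; the paper's Voronoi-based LIC computation, medial axis simplification, and parallel partitioning are not reproduced.

```python
import numpy as np
from shapely.geometry import Polygon, Point

def approx_deepest_point(polygon, resolution=120):
    """Approximate the centre of the largest inscribed circle of a lake polygon
    by evaluating the distance to the boundary on a regular grid of interior points."""
    minx, miny, maxx, maxy = polygon.bounds
    xs = np.linspace(minx, maxx, resolution)
    ys = np.linspace(miny, maxy, resolution)
    best_pt, best_d = None, -1.0
    boundary = polygon.exterior
    for x in xs:
        for y in ys:
            p = Point(x, y)
            if polygon.contains(p):
                d = boundary.distance(p)   # radius of inscribed circle centred at p
                if d > best_d:
                    best_pt, best_d = (x, y), d
    return best_pt, best_d

# toy "lake": an irregular convex-ish polygon standing in for an extracted lake
lake = Polygon([(0, 0), (6, 0), (7, 3), (4, 5), (1, 4), (-1, 2)])
centre, radius = approx_deepest_point(lake)
print("approximate LIC centre:", centre, "radius:", round(radius, 3))
```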

  17. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration

    PubMed Central

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points from the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning vector data is carefully studied; the selected policy, which equalizes the algorithm complexity, is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we gain perfect efficiency using MAS and a policy of algorithm complexity equalization. PMID:26656598

  18. Prostate tissue decomposition via DECT using the model based iterative image reconstruction algorithm DIRA

    NASA Astrophysics Data System (ADS)

    Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun

    2014-03-01

    Better knowledge of elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm DIRA has been developed by the authors to automatically decompose patient tissues to two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate surrounding soft tissues.

  19. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.

  20. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
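
    The flavour of a sequential change point test with an unknown post-damage distribution can be sketched with a Gaussian CUSUM-type recursion in which the post-change mean is replaced by a running estimate; the window, threshold, and simulated feature below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sequential_detect(x, mu0=0.0, sigma=1.0, window=20, threshold=8.0):
    """Gaussian CUSUM-like sequential test with an unknown post-change mean.
    The post-change mean is re-estimated at each step as the mean of the most
    recent `window` samples (a crude running ML estimate), and the cumulative
    log-likelihood-ratio statistic is compared against a fixed threshold."""
    s = 0.0
    for t in range(len(x)):
        lo = max(0, t - window + 1)
        mu1 = x[lo:t + 1].mean()                 # running post-change mean estimate
        llr = ((x[t] - mu0) ** 2 - (x[t] - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)                    # CUSUM recursion
        if s > threshold:
            return t                             # declare damage at time t
    return None

# simulated damage-sensitive feature: mean shifts from 0 to 1.5 at sample 300
rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 200)])
print("change declared at sample:", sequential_detect(feature))
```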

  1. Stitching algorithm of the images acquired from different points of fixation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Pismenskova, M. M.

    2015-02-01

    Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible from the original images. Most existing algorithms are computationally complex and do not always produce good results when stitching images that differ in scale, lighting, and point of view. In this paper we consider an algorithm that increases processing speed when stitching high-resolution images. We reduce the computational complexity by using edge analysis and a saliency map on highly detailed areas. For the detected areas, the rotation angles, scaling factors, color-correction coefficients and transformation matrix are determined. We detect key points using the SURF detector and reject false correspondences based on correlation analysis. The proposed algorithm can combine images taken from arbitrary points of view with different color balances, shutter times and scales. We perform a comparative study and show that, statistically, the new algorithm delivers good-quality results compared to existing algorithms.

  2. Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogeneous localization error.

    PubMed

    Maier-Hein, Lena; Franz, Alfred M; dos Santos, Thiago R; Schmidt, Mirko; Fangerau, Markus; Meinzer, Hans-Peter; Fitzpatrick, J Michael

    2012-08-01

    Since its introduction in the early 1990s, the Iterative Closest Point (ICP) algorithm has become one of the most well-known methods for geometric alignment of 3D models. Given two roughly aligned shapes represented by two point sets, the algorithm iteratively establishes point correspondences given the current alignment of the data and computes a rigid transformation accordingly. From a statistical point of view, however, it implicitly assumes that the points are observed with isotropic Gaussian noise. In this paper, we show that this assumption may lead to errors and generalize the ICP such that it can account for anisotropic and inhomogeneous localization errors. We 1) provide a formal description of the algorithm, 2) extend it to registration of partially overlapping surfaces, 3) prove its convergence, 4) derive the required covariance matrices for a set of selected applications, and 5) present means for optimizing the runtime. An evaluation on publicly available surface meshes as well as on a set of meshes extracted from medical imaging data shows a dramatic increase in accuracy compared to the original ICP, especially in the case of partial surface registration. As point-based surface registration is a central component in various applications, the potential impact of the proposed method is high. PMID:22184256

  3. CASH algorithm versus 3-point checklist and its modified version in evaluation of melanocytic pigmented skin lesions: The 4-point checklist.

    PubMed

    di Meo, Nicola; Stinco, Giuseppe; Bonin, Serena; Gatti, Alessandro; Trevisini, Sara; Damiani, Giovanni; Vichi, Silvia; Trevisan, Giusto

    2016-06-01

    In expert hands, dermoscopy increases a single operator's accuracy, sensitivity and specificity in the diagnosis of pigmented skin lesions, compared with clinical examination. Simplified algorithmic methods have been developed to help less expert dermoscopists in the diagnosis of melanocytic lesions. This study included 125 melanocytic skin lesions divided into melanocytic nevi, dysplastic nevi and thin melanomas (<1 mm). We compared the 3-point checklist and the CASH algorithm in the analysis of these pigmented skin lesions. Based on preliminary results, we proposed a new modified algorithm, called the 4-point checklist, whose accuracy is similar to that of the CASH algorithm and whose simplicity is similar to that of the 3-point checklist. PMID:26589251

  4. A hybrid algorithm for multiple change-point detection in continuous measurements

    NASA Astrophysics Data System (ADS)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both the sequential techniques and the Cross-Entropy method to estimate the number of change points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.

  5. Point process algorithm: a new Bayesian approach for TPF-I planet signal extraction

    NASA Technical Reports Server (NTRS)

    Velusamy, T.; Marsh, K. A.; Ware, B.

    2005-01-01

    TPF-I capability for planetary signal extraction, including both detection and spectral characterization, can be optimized by taking proper account of instrumental characteristics and astrophysical prior information. We have developed the Point Process Algorithm, a Bayesian technique for extracting planetary signals using the sine/cosine chopped outputs of a dual nulling interferometer.

  6. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be used as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. In order to show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, which is well known in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance in comparison with the Gaussian one and improves the overall performance of the IRST system. A quantitative analysis shows that the new framework yields at least a 20% improvement in the output SCR values in comparison with the Laplacian of Gaussian (LoG) algorithm.
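
    The SCR-improvement mechanism described above amounts to correlating the image with a PSF-shaped, zero-mean template and thresholding the response; in this sketch a Gaussian kernel stands in for the true system PSF, and the clutter, target amplitudes, and threshold are arbitrary.

```python
import numpy as np
from scipy.signal import correlate2d

def gaussian_psf(size=7, sigma=1.2):
    """Small Gaussian kernel used here as a stand-in for the imaging-system PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def detect_targets(image, psf, k_sigma=5.0):
    """Matched filter: correlate the image with the zero-mean PSF template, then
    flag pixels whose response exceeds the mean by k_sigma standard deviations."""
    kernel = psf - psf.mean()                 # zero-mean template suppresses background
    response = correlate2d(image, kernel, mode="same", boundary="symm")
    thr = response.mean() + k_sigma * response.std()
    return np.argwhere(response > thr)

# synthetic frame: noise + low-frequency clutter + two dim PSF-shaped targets
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
frame = 0.5 * rng.standard_normal((128, 128)) + 2.0 * np.sin(xx / 20.0)
psf = gaussian_psf()
for (r, c) in [(40, 60), (90, 25)]:
    frame[r-3:r+4, c-3:c+4] += 6.0 * psf / psf.max()

print("detected pixels (row, col):\n", detect_targets(frame, psf))
```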

  7. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data processing, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error of a single laser pulse by modelling the influence of range and incidence angle on single point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of a varying point of view on Iterative Closest Point (ICP) alignment, and also on deformation tracking algorithms applied to the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments

  8. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  9. An affine point-set and line invariant algorithm for photo-identification of gray whales

    NASA Astrophysics Data System (ADS)

    Chandan, Chandan; Kehtarnavaz, Nasser; Hillman, Gilbert; Wursig, Bernd

    2004-05-01

    This paper presents an affine point-set and line invariant algorithm within a statistical framework, and its application to photo-identification of gray whales (Eschrichtius robustus). White patches (blotches) appearing on a gray whale's left and right flukes (the flattened broad paddle-like tail) constitute unique identifying features and have been used here for individual identification. The fluke area is extracted from a fluke image via the live-wire edge detection algorithm, followed by optimal thresholding of the fluke area to obtain the blotches. Affine point-set and line invariants of the blotch points are extracted based on three reference points, namely the left and right tips and the middle notch-like point on the fluke. A set of statistics is derived from the invariant values and used as the feature vector representing a database image. The database images are then ranked depending on the degree of similarity between a query and database feature vectors. The results show that the use of this algorithm leads to a reduction in the amount of manual search that is normally done by marine biologists.

  10. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information

    PubMed Central

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  11. A target location and pointing algorithm for a three-axis stabilized line scanner (AMIDARS)

    NASA Astrophysics Data System (ADS)

    Algrain, Marcelo C.

    1990-09-01

    An algorithm is presented for calculating the location of a target and for pointing other imaging sensors to it, given the position, attitude, and altitude of an aircraft and the gimbal angles of the stabilized platform. The algorithm uses geometric relationships to define the line of sight (LOS) direction in inertial space and to determine the position of the center of a scan line where the LOS intersects the ground. The direction of the scan line passing through that point is also calculated, completely defining the location of any target on a scan line. The ground dimensions obtained from this procedure are then related to a point of known latitude and longitude to define the overall target location.
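
    The core geometric computation can be sketched as rotating the sensor line of sight from the gimbal frame through the aircraft attitude into a local level frame and intersecting it with a flat ground plane. The frame conventions, rotation order, and flat-earth assumption below are illustrative simplifications of the algorithm described above.

```python
import numpy as np

def rot(axis, angle):
    """Elementary rotation matrix about the 'x', 'y' or 'z' axis."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def los_ground_intersection(aircraft_pos, yaw, pitch, roll, gimbal_az, gimbal_el):
    """Intersect the sensor line of sight with the flat ground plane z = 0,
    using a north-east-down style frame (x north, y east, z down)."""
    # LOS in the gimbal frame: boresight along +x, steered by azimuth and elevation
    los_gimbal = np.array([np.cos(gimbal_el) * np.cos(gimbal_az),
                           np.cos(gimbal_el) * np.sin(gimbal_az),
                           np.sin(gimbal_el)])          # positive elevation = downward
    # body-to-local rotation built from the aircraft attitude (yaw, pitch, roll)
    R_body = rot("z", yaw) @ rot("y", pitch) @ rot("x", roll)
    los = R_body @ los_gimbal                           # LOS direction in local frame
    if los[2] <= 0:
        raise ValueError("line of sight does not point toward the ground")
    t = -aircraft_pos[2] / los[2]                       # z-down: altitude is -pos[2]
    return aircraft_pos + t * los                       # ground point on z = 0

# example: aircraft 1000 m above ground, sensor looking 30 deg down, 10 deg right
pos = np.array([0.0, 0.0, -1000.0])
target = los_ground_intersection(pos, yaw=0.0, pitch=0.0, roll=0.0,
                                 gimbal_az=np.radians(10), gimbal_el=np.radians(30))
print("ground intersection (north, east, down):", np.round(target, 1))
```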

  12. Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information.

    PubMed

    Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li

    2016-01-01

    Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms. PMID:26866918

  13. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  14. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
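
    The box-summed signal-to-noise criterion described above, including the exclusion of selected pixels, can be sketched as follows under Poisson counting statistics; the box size, threshold, and synthetic frame are illustrative choices rather than the LEXTRCT implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_bright_points(counts, box=5, snr_min=4.0, exclude=None):
    """Flag pixels where the box-summed counts exceed the expected background
    by snr_min sigma, assuming Poisson statistics; 'exclude' masks out pixels
    (e.g. saturated flare/active-region pixels) from the search."""
    counts = counts.astype(float)
    local_sum = uniform_filter(counts, size=box) * box * box   # counts in the box
    background = np.median(counts) * box * box                 # expected background
    snr = (local_sum - background) / np.sqrt(np.maximum(background, 1.0))
    if exclude is not None:
        snr[exclude] = -np.inf
    return np.argwhere(snr > snr_min)

# synthetic X-ray frame: Poisson background plus three compact bright points
rng = np.random.default_rng(0)
frame = rng.poisson(2.0, size=(200, 200))
for (r, c) in [(50, 60), (120, 150), (170, 30)]:
    frame[r-1:r+2, c-1:c+2] += rng.poisson(15, size=(3, 3))

mask = np.zeros_like(frame, dtype=bool)
mask[0:10, 0:10] = True                        # pretend this corner is saturated
hits = find_bright_points(frame, exclude=mask)
print("pixels above threshold:", len(hits))
```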

  15. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  16. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  17. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  18. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  19. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  20. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.

  1. An Error Analysis of the Phased Array Antenna Pointing Algorithm for STARS Flight Demonstration No. 2

    NASA Technical Reports Server (NTRS)

    Carney, Michael P.; Simpson, James C.

    2005-01-01

    STARS is a multicenter NASA project to determine the feasibility of using space-based assets, such as the Tracking and Data Relay Satellite System (TDRSS) and Global Positioning System (GPS), to increase flexibility (e.g. increase the number of possible launch locations and manage simultaneous operations) and to reduce operational costs by decreasing the need for ground-based range assets and infrastructure. The STARS project includes two major systems: the Range Safety and Range User systems. The latter system uses broadband communications (125 kbps to 500 kbps) for voice, video, and vehicle/payload data. Flight Demonstration #1 revealed the need to increase the data rate of the Range User system. During Flight Demo #2, a Ku-band antenna will generate a higher data rate and will be designed with an embedded pointing algorithm to guarantee that the antenna is pointed directly at TDRS. This algorithm will utilize the onboard position and attitude data to point the antenna to TDRS within a 2-degree full-angle beamwidth. This report investigates how errors in aircraft position and attitude, along with errors in satellite position, propagate into the overall pointing vector.

  2. Floating-Point Units and Algorithms for field-programmable gate arrays

    SciTech Connect

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication that discusses the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and

  3. Floating-Point Units and Algorithms for field-programmable gate arrays

    Energy Science and Technology Software Center (ESTSC)

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication that discusses the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used

  4. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.

  5. An upwind-biased, point-implicit relaxation algorithm for viscous, compressible perfect-gas flows

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    1990-01-01

    An upwind-biased, point-implicit relaxation algorithm for obtaining the numerical solution to the governing equations for three-dimensional, viscous, compressible, perfect-gas flows is described. The algorithm is derived using a finite-volume formulation in which the inviscid components of flux across cell walls are described with Roe's averaging and Harten's entropy fix with second-order corrections based on Yee's Symmetric Total Variation Diminishing scheme. Viscous terms are discretized using central differences. The relaxation strategy is well suited for computers employing either vector or parallel architectures. It is also well suited to the numerical solution of the governing equations on unstructured grids. Because of the point-implicit relaxation strategy, the algorithm remains stable at large Courant numbers without the necessity of solving large, block tri-diagonal systems. Convergence rates and grid refinement studies are conducted for Mach 5 flow through an inlet with a 10 deg compression ramp and Mach 14 flow over a 15 deg ramp. Predictions for pressure distributions, surface heating, and aerodynamic coefficients compare well with experimental data for Mach 10 flow over a blunt body.

  6. Artifact Removal from Biosignal using Fixed Point ICA Algorithm for Pre-processing in Biometric Recognition

    NASA Astrophysics Data System (ADS)

    Mishra, Puneet; Singla, Sunil Kumar

    2013-01-01

    In the modern world of automation, biological signals, especially Electroencephalogram (EEG) and Electrocardiogram (ECG), are gaining wide attention as a source of biometric information. Earlier studies have shown that EEG and ECG vary between individuals and that every individual has a distinct EEG and ECG spectrum. EEG (which can be recorded from the scalp due to the effect of millions of neurons) may contain noise signals such as eye blink, eye movement, muscular movement, line noise, etc. Similarly, ECG may contain artifacts such as line noise, tremor artifacts, baseline wandering, etc. These noise signals must be separated from the EEG and ECG signals to obtain accurate results. This paper proposes a technique for removing the eye blink artifact from EEG and ECG signals using the fixed-point (FastICA) algorithm of Independent Component Analysis (ICA). For validation, the FastICA algorithm has been applied to a synthetic signal prepared by adding random noise to the Electrocardiogram (ECG) signal. The FastICA algorithm separates the signal into two independent components, i.e. the pure ECG and the artifact signal. Similarly, the same algorithm has been applied to remove the artifacts (Electrooculogram or eye blink) from the EEG signal.

  7. TU-F-18A-04: Use of An Image-Based Material-Decomposition Algorithm for Multi-Energy CT to Determine Basis Material Densities

    SciTech Connect

    Li, Z; Leng, S; Yu, L; McCollough, C

    2014-06-15

    Purpose: Published methods for image-based material decomposition with multi-energy CT images have required the assumption of volume conservation or accurate knowledge of the x-ray spectra and detector response. The purpose of this work was to develop an image-based material-decomposition algorithm that can overcome these limitations. Methods: An image-based material decomposition algorithm was developed that requires only mass conservation (rather than volume conservation). With this method, using multi-energy CT measurements made with n=4 energy bins, the mass density of each basis material and of the mixture can be determined without knowledge of the tube spectra and detector response. A digital phantom containing 12 samples of mixtures of water, calcium, iron, and iodine was used in the simulation (Siemens DRASIM). The calibration was performed using pure materials at each energy bin. The accuracy of the technique was evaluated on noise-free and noisy data under the assumption of an ideal photon-counting detector. Results: Basis material densities can be estimated accurately by either theoretical calculation or calibration with known pure materials. The calibration approach requires no prior information about the spectra and detector response. Regression analysis of theoretical versus estimated values shows excellent agreement for both noise-free and noisy data. For the calibration approach, the R-square values are 0.9960±0.0025 and 0.9476±0.0363 for noise-free and noisy data, respectively. Conclusion: From multi-energy CT images with n=4 energy bins, the developed image-based material decomposition method accurately estimated the densities of 4 basis materials (3 without a k-edge and 1 with a k-edge within the range of the simulated energy bins), even without any prior information about the spectra and detector response. This method is applicable to mixtures of solutions and dissolvable materials, where volume conservation assumptions do not apply.

  8. Steering quantum dynamics via bang-bang control: Implementing optimal fixed-point quantum search algorithm

    NASA Astrophysics Data System (ADS)

    Bhole, Gaurav; Anjusha, V. S.; Mahesh, T. S.

    2016-04-01

    A robust control over quantum dynamics is of paramount importance for quantum technologies. Many of the existing control techniques are based on smooth Hamiltonian modulations involving repeated calculations of basic unitaries resulting in time complexities scaling rapidly with the length of the control sequence. Here we show that bang-bang controls need one-time calculation of basic unitaries and hence scale much more efficiently. By employing a global optimization routine such as the genetic algorithm, it is possible to synthesize not only highly intricate unitaries, but also certain nonunitary operations. We demonstrate the unitary control through the implementation of the optimal fixed-point quantum search algorithm in a three-qubit nuclear magnetic resonance (NMR) system. Moreover, by combining the bang-bang pulses with the crusher gradients, we also demonstrate nonunitary transformations of thermal equilibrium states into effective pure states in three- as well as five-qubit NMR systems.

  9. Dynamic connectivity detection: an algorithm for determining functional connectivity change points in fMRI data

    PubMed Central

    Xu, Yuting; Lindquist, Martin A.

    2015-01-01

    Recently there has been an increased interest in using fMRI data to study the dynamic nature of brain connectivity. In this setting, the activity in a set of regions of interest (ROIs) is often modeled using a multivariate Gaussian distribution, with a mean vector and covariance matrix that are allowed to vary as the experiment progresses, representing changing brain states. In this work, we introduce the Dynamic Connectivity Detection (DCD) algorithm, which is a data-driven technique to detect temporal change points in functional connectivity, and estimate a graph between ROIs for data within each segment defined by the change points. DCD builds upon the framework of the recently developed Dynamic Connectivity Regression (DCR) algorithm, which has proven efficient at detecting changes in connectivity for problems consisting of a small to medium (< 50) number of regions, but which runs into computational problems as the number of regions becomes large (>100). The newly proposed DCD method is faster, requires less user input, and is better able to handle high-dimensional data. It overcomes the shortcomings of DCR by adopting a simplified sparse matrix estimation approach and a different hypothesis testing procedure to determine change points. The application of DCD to simulated data, as well as fMRI data, illustrates the efficacy of the proposed method. PMID:26388711
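
    A minimal sketch of the per-segment graph estimation step, assuming the change points are already known; scikit-learn's GraphicalLasso is used as a stand-in for DCD's simplified sparse matrix estimation, and the data, ROI count and penalty are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sketch of the per-segment step: given candidate change points, fit a sparse
# inverse covariance (connectivity graph) to the data inside each segment.
# GraphicalLasso is a stand-in; the DCD estimator and its testing step differ.

def segment_graphs(data, change_points, alpha=0.1):
    """data: (n_timepoints, n_rois); change_points: sorted interior indices."""
    bounds = [0] + list(change_points) + [len(data)]
    graphs = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        segment = data[start:end]
        model = GraphicalLasso(alpha=alpha).fit(segment)
        graphs.append(model.precision_)      # nonzero off-diagonals define edges
    return graphs

rng = np.random.default_rng(1)
fmri_like = rng.standard_normal((300, 10))   # toy data: 300 scans, 10 ROIs
graphs = segment_graphs(fmri_like, change_points=[150])
print([g.shape for g in graphs])
```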

  10. Bayesian inference of decomposition rate of soil organic carbon using a turnover model and a hybrid method of particle filter and MH algorithm

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Jomura, M.; Yonemura, S.; Iizumi, T.; Shirato, Y.; Yokozawa, M.

    2010-12-01

    The soils of terrestrial ecosystems accumulate large amounts of carbon, and the response of soil organic carbon (SOC) to global warming is of great concern in projections of future carbon cycling. While many theoretical and experimental studies have suggested that the decomposition rates of soil organic matter depend on physical and chemical conditions, land management, and so on, no consensus has yet been reached on these dependencies. Most soil carbon turnover models describing SOC dynamics do not account for such differences in decomposition rates. The purpose of this study is to evaluate the decomposition rates of SOC based on a soil carbon turnover model, RothC, which describes SOC dynamics by dividing it into compartments with different decomposition rates. In this study, reflecting that the decomposition rate could change with time due to fertility management in arable land, we used time-dependent Bayesian inference methods to allow the parameters to vary with time. Thus, we used a hybrid method of particle filtering and the Metropolis-Hastings (MH) algorithm. We applied this method to datasets obtained from three long-term experiments on time changes in total SOC at five sites across the Japanese mainland. For each dataset, three treatments were examined: no N applied, chemical fertilizer applied, and chemical fertilizer and farmyard manure applied. We estimated the parameters of the temperature- and water-dependent functions as well as the intrinsic decomposition rate for each compartment of RothC and for each treatment. As a result, it was shown that the temperature dependencies tended to decrease with the decomposability of the compartment, i.e. lower temperature dependency for the more recalcitrant compartments of the model. On the other hand, the water dependencies showed no clear relationship with the SOC turnover rates of the compartments. Additionally, the intrinsic decomposition rates tended to increase with time, especially in the no-N-applied treatment. This result reflects
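
    A minimal sketch of the Metropolis-Hastings ingredient on a one-pool decay model dC/dt = -kC (not the multi-pool RothC model, and without the particle filter); the data, prior and proposal scale are invented for illustration.

```python
import numpy as np

# Minimal Metropolis-Hastings sketch for a decomposition rate k in a one-pool
# model dC/dt = -k*C, with synthetic SOC observations.

rng = np.random.default_rng(2)
years = np.arange(0, 30)
k_true, c0, sigma = 0.05, 40.0, 1.0
obs = c0 * np.exp(-k_true * years) + sigma * rng.standard_normal(years.size)

def log_posterior(k):
    if k <= 0:
        return -np.inf                       # flat prior on k > 0
    resid = obs - c0 * np.exp(-k * years)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, samples = 0.1, []
lp = log_posterior(k)
for _ in range(20000):
    k_prop = k + 0.005 * rng.standard_normal()   # Gaussian random-walk proposal
    lp_prop = log_posterior(k_prop)
    if np.log(rng.random()) < lp_prop - lp:      # accept/reject
        k, lp = k_prop, lp_prop
    samples.append(k)

samples = np.array(samples[5000:])               # discard burn-in
print(samples.mean(), samples.std())
```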

  11. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any flow and particle tracking model. The computational time is reduced thanks to the use of a limited number of non-equally spaced particles. The particle starting positions are determined by coupling forward particle tracking from the stagnation point and backward particle tracking from the pumping well. The pathlines are post-processed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, under steady state flow conditions, with single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, even in complex scenarios.

  12. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time and in some cases works better for planar geometries than X-ray computed tomography. However, the increase in prior assumptions about the object and the defects increases the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with this uncertainty is the possible masking (zero detectability) of smaller defects located around a slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  13. USER'S GUIDE FOR PAL 2.0: A GAUSSIAN-PLUME ALGORITHM FOR POINT, AREA, AND LINE SOURCES

    EPA Science Inventory

    PAL is an acronym for the Point, Area, and Line source algorithm. PAL is a method of estimating short-term dispersion using Gaussian-plume steady state assumptions. The algorithm can be used for estimating concentrations of non-reactive pollutants at 99 receptors for averaging ti...

  14. The Advantage of Implementing Martin's Noise Reduction Algorithm in Critical Bands Using Wavelet Packet Decomposition and Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Omidi, Milad; Derakhshan, Nima; Hassan Savoji, Mohammad

    In this paper we address the problem of enhancing a single-channel speech signal corrupted by additive background noise. We present a new scheme which utilizes a different time-frequency representation along with the psychoacoustic features of the human ear, and combines these features with the well-known noise estimation method of minimum tracking. Instead of the Fourier transform, we use a perceptual wavelet packet decomposition of the speech, and perform spectral tracking and filtering on the envelope of the analytic signal.
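
    A minimal sketch of the two building blocks named above, wavelet packet decomposition (via PyWavelets) and the Hilbert envelope of each subband (via SciPy); the critical-band grouping, Martin's minimum-tracking noise estimate and the actual gain rule are omitted, and the signal and wavelet choice are placeholders.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

# Sketch: wavelet packet decomposition of a signal and the envelope of the
# analytic signal in each subband. Perceptual band grouping and the noise
# estimation / filtering stages are not reproduced here.

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
speech_like = np.sin(2 * np.pi * 300 * t) + 0.3 * np.random.randn(t.size)

wp = pywt.WaveletPacket(data=speech_like, wavelet="db8", mode="symmetric", maxlevel=4)
subbands = [node.data for node in wp.get_level(4, order="freq")]

# Envelope of the analytic signal per subband; spectral tracking and gain
# rules would operate on these envelopes.
envelopes = [np.abs(hilbert(band)) for band in subbands]
print(len(subbands), envelopes[0].shape)
```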

  15. Using SDO and GONG as Calibration References for a New Telescope Pointing Algorithm

    NASA Astrophysics Data System (ADS)

    Staiger, J.

    2013-12-01

    Long-duration observations are a basic requirement for most types of helioseismic measurements. Pointing stability and the quality of guiding are thus important issues with respect to the spatio-temporal analysis of any velocity dataset. Existing pointing tools and correlation-tracking devices help to remove most of the spatial deviations building up during an observation. Yet most ground- and space-based high-resolution solar telescopes may be subject to slow image-plane drift that cannot be compensated for by guiding and which may accumulate to displacements of 10″ or more during a 10-hour recording. We have developed a new pointing model for solar telescopes that may overcome these inherent guiding limitations. We have tested the model at the Vacuum Tower Telescope (VTT), Tenerife, using SDO and GONG full-disk imaging as a calibration reference. We describe the algorithms developed and used during the tests and present our first results. We also describe possible future applications to be implemented at the VTT. So far, improvements over classical limb-guider systems by a factor of 10 or more seem possible.

  16. An algorithm for automatic detection of pole-like street furniture objects from Mobile Laser Scanner point clouds

    NASA Astrophysics Data System (ADS)

    Cabo, C.; Ordoñez, C.; García-Cortés, S.; Martínez, J.

    2014-01-01

    An algorithm for the automatic extraction of pole-like street furniture objects from Mobile Laser Scanner data was developed and tested. The method consists of an initial simplification of the point cloud based on a regular voxelization of space. The original point cloud is spatially discretized, and a version of the point cloud whose amount of data represents 20-30% of the total is created. All the processing is carried out on the reduced version of the data, but the original point cloud is always accessible without any information loss, as each point is linked to its voxel. All the horizontal sections of the voxelized point cloud are analyzed and segmented separately. The two-dimensional fragments compatible with a section of a target pole are selected and grouped. Finally, the three-dimensional voxel representation of the detected pole-like objects is identified and the points from the original point cloud belonging to each pole-like object are extracted. The algorithm can be used with data from any Mobile Laser Scanning system, as it transforms the original point cloud and fits it into a regular grid, thus avoiding irregularities produced by point density differences within the point cloud. The algorithm was tested on four test sites with different slopes and street shapes and features. All the target pole-like objects were detected, with the only exception of those severely occluded by large objects and a few others which were either attached or too close to certain features.
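
    A minimal sketch of the voxelization step under stated assumptions (axis-aligned cubic voxels, one representative center per occupied voxel, and a point-to-voxel index so the original cloud stays accessible); the voxel size and data are illustrative, and the subsequent section-wise segmentation is not reproduced.

```python
import numpy as np

# Minimal sketch of the voxelization step: each point is assigned to a regular
# voxel, duplicates are collapsed, and the point-to-voxel link is kept so the
# original cloud remains accessible.

def voxelize(points, voxel_size=0.1):
    """points: (N, 3) array. Returns voxel centers and the point->voxel index."""
    ijk = np.floor(points / voxel_size).astype(np.int64)          # voxel indices
    voxels, point_to_voxel = np.unique(ijk, axis=0, return_inverse=True)
    centers = (voxels + 0.5) * voxel_size                         # reduced cloud
    return centers, point_to_voxel

rng = np.random.default_rng(3)
cloud = rng.uniform(0, 10, size=(100000, 3))
centers, link = voxelize(cloud, voxel_size=0.5)
print(cloud.shape[0], "->", centers.shape[0], "voxels; link shape", link.shape)
```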

  17. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    SciTech Connect

    Poynee, L A

    2003-05-06

    Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
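
    A toy comparison of the two estimators discussed above, centroiding versus correlation with an ideal reference spot; the Gaussian spot model, noise level and array sizes are assumptions, and no sub-pixel interpolation of the correlation peak is attempted.

```python
import numpy as np
from scipy.signal import correlate2d

# Toy comparison: centroid of a noisy subaperture spot vs. the peak of its
# cross-correlation with an ideal reference spot.

def gaussian_spot(shape, center, sigma=1.5):
    y, x = np.indices(shape)
    return np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(4)
true_pos = (8.0, 9.0)
spot = gaussian_spot((16, 16), true_pos) + 0.05 * rng.standard_normal((16, 16))
reference = gaussian_spot((16, 16), (7.5, 7.5))          # reference centred on the array

# Centroid estimate (sensitive to background and read noise)
y, x = np.indices(spot.shape)
w = np.clip(spot, 0, None)
centroid = (np.sum(y * w) / np.sum(w), np.sum(x * w) / np.sum(w))

# Correlation estimate: peak of the cross-correlation with the reference spot
corr = correlate2d(spot, reference, mode="same")
peak = np.unravel_index(np.argmax(corr), corr.shape)

print("true", true_pos, "centroid", centroid, "correlation peak", peak)
```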

  18. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined for representing the dissimilarity between a realization and the TI in GOSIM, which is minimized by a multi-scale EM-like iterative method that contains an E-step and M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in E-step. M-step updates the realization based on the most similar patterns found in E-step and matches the global statistics of TI. During categorical data simulation, k-means clustering is used for transforming the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison results of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain a good simulation quality. The study shows that big iteration numbers at coarser scales increase simulation quality and small iteration numbers at finer scales significantly save simulation time.

  19. Implementation of an Interior Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  20. Sunspots and Coronal Bright Points Tracking using a Hybrid Algorithm of PSO and Active Contour Model

    NASA Astrophysics Data System (ADS)

    Dorotovic, I.; Shahamatnia, E.; Lorenc, M.; Rybansky, M.; Ribeiro, R. A.; Fonseca, J. M.

    2014-02-01

    In the last decades there has been a steady increase of high-resolution data, from ground-based and space-borne solar instruments, and also of solar data volume. These huge image archives require efficient automatic image processing software tools capable of detecting and tracking various features in the solar atmosphere. Results of application of such tools are essential for studies of solar activity evolution, climate change understanding and space weather prediction. The follow up of interplanetary and near-Earth phenomena requires, among others, automatic tracking algorithms that can determine where a feature is located, on successive images taken along the period of observation. Full-disc solar images, obtained both with the ground-based solar telescopes and the instruments onboard the satellites, provide essential observational material for solar physicists and space weather researchers for better understanding the Sun, studying the evolution of various features in the solar atmosphere, and also investigating solar differential rotation by tracking such features along time. Here we demonstrate and discuss the suitability of applying a hybrid Particle Swarm Optimization (PSO) algorithm and Active Contour model for tracking and determining the differential rotation of sunspots and coronal bright points (CBPs) on a set of selected solar images. The results obtained confirm that the proposed approach constitutes a promising tool for investigating the evolution of solar activity and also for automating tracking features on massive solar image archives.

  1. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    and negative swinging angle and the computation of the time window are analyzed and discussed, and many strategies to improve the efficiency of this model are put forward. In order to solve the model, we introduce the concept of an activity sequence map. By using the activity sequence map, the choice of activity and the start time of the activity can be separated. We also introduce three neighborhood operators to search the solution space. The front movement remaining time and the back movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Lastly, an algorithm to solve the problem and the model is put forward based on a genetic algorithm. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision decrease operator, a selection operator and a collision elimination operator are designed in the paper. Finally, the scheduling result and the simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15, and 25. The results show that the model and the algorithm are more effective than those without swinging mode.

  2. Verification of dynamic initial pointing algorithm on two-dimensional rotating platform based on GPS/INS

    NASA Astrophysics Data System (ADS)

    Yang, Baohua; Wang, Juanjuan; Wang, Jian

    2015-10-01

    In order to achieve the rapid establishment of long-distance laser communication links, adopting a GPS/INS integrated navigation system (GINS) is an effective approach for completing the initial pointing of dynamic laser communication. Firstly, we present a dynamic initial pointing algorithm (DIPA), which obtains the pointing angle (PA) by processing the real-time data received from the GINS. Next, the feasibility of the pointing system is analyzed, and the hardware system as well as the PC software is designed. Then, outdoor experiments are carried out to validate the DIPA. Finally, the correctness and reliability of the pointing system are analyzed.

  3. Sensitivity of passive microwave sea ice concentration algorithms to the selection of locally and seasonally adjusted tie points

    NASA Technical Reports Server (NTRS)

    Steffen, Konrad; Schweiger, Axel

    1989-01-01

    The sensitivity of passive microwave sea-ice concentration (SIC) algorithms to the selection of tie points was analyzed. SICs were derived with the NASA Team ice algorithm for global tie points and for locally and seasonally adjusted tie points. The SSM/I SIC was then compared to Landsat-MSS-derived SICs. Preliminary results show a mean difference of SSM/I- and Landsat-derived SICs for 50 x 50 km grid cells of 2.7 percent along the ice edge of the Beaufort Sea during fall with local tie points. The accuracy decreased to 9.7 percent when global tie points were used. During freeze-up in the Beaufort Sea, with grey ice and nilas as dominant ice cover, the mean difference was 4.3 percent for local tie points and 13.9 percent for global tie points. For the spring ice cover in the Bering Sea a mean difference of 4.4 percent for local tie points and 15.7 percent for global tie points was found. This large difference reveals some limitations of the NASA-Team algorithm under freeze-up and spring conditions (thin ice areas).

  4. Joint inversion of T1-T2 spectrum combining the iterative truncated singular value decomposition and the parallel particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui

    2016-01-01

    Offering more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transverse relaxation time (T2) spectra, the two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) has been developed to discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by the existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines iterative truncated singular value decomposition (TSVD) and a parallel particle swarm optimization (PSO) algorithm to obtain fast computational speed and stable solutions. We reformulate the Fredholm integral equation of the first kind with two kernels as a nonlinear optimization problem with non-negativity constraints, and then solve the ill-conditioned problem by iterative TSVD. The truncation positions of the two diagonal matrices are obtained using the Akaike information criterion (AIC). With the initial values obtained by TSVD, we use a PSO with a parallel structure to obtain the global optimal solution at high computational speed. We use synthetic data with different signal-to-noise ratios (SNR) to test the performance of the proposed method. The results show that the new inversion algorithm achieves favorable solutions for signals with SNR larger than 10, and that the inversion precision increases as the number of relaxation components in the porous rock decreases.
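
    A minimal sketch of the TSVD ingredient on a generic ill-conditioned linear system A m = d (the AIC-based choice of truncation level, the non-negativity constraint and the PSO refinement are not reproduced; the kernel matrix and data are synthetic).

```python
import numpy as np

# Minimal truncated SVD (TSVD) solution of an ill-conditioned linear system
# A m = d, as used for the initial estimate before the PSO refinement.

def tsvd_solve(A, d, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]                 # keep only the k largest singular values
    return Vt.T @ (s_inv * (U.T @ d))

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -8, 50)) \
    @ rng.standard_normal((50, 50))         # severely ill-conditioned kernel matrix
m_true = np.abs(rng.standard_normal(50))    # non-negative "spectrum" (toy)
d = A @ m_true + 1e-4 * rng.standard_normal(200)

m_tsvd = tsvd_solve(A, d, k=10)
print(np.linalg.norm(A @ m_tsvd - d))
```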

  5. [Simultaneous resolution and determination of tyrosine, tryptophan and phenylalanine by alternating penalty trilinear decomposition algorithm coupled with 3D emission-excitation matrix fluorometry].

    PubMed

    Xiao, Jin; Ren, Feng-lian; Song, Ge; Liao, Lü; Yu, Wen-feng; Zeng, Tao

    2007-10-01

    A new method using the alternating penalty trilinear decomposition algorithm coupled with excitation-emission matrix fluorometry has been developed for the simultaneous resolution and determination of tyrosine, phenylalanine and tryptophan. Their correlation coefficients were 0.9987, 0.9995 and 0.9993, respectively. The contents of tyrosine, phenylalanine and tryptophan in Hibiscus syriacus L. leaves were also determined by this method after ultrasonic extraction. The coefficients of variation and the recoveries of the three amino acids were 0.84%, 0.36%, 1.59% and 101.0%-92.7%, 106.5%-93.0%, 103.0%-95.0%, respectively. All of this shows that this is a simple, fast and credible method. PMID:18306802

  6. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  7. Parallel feedback active noise control of MRI acoustic noise with signal decomposition using hybrid RLS-NLMS adaptive algorithms.

    PubMed

    Ganguly, Anshuman; Krishna Vemuri, Sri Hari; Panahi, Issa

    2014-01-01

    This paper presents a cost-effective adaptive feedback Active Noise Control (FANC) method for controlling functional Magnetic Resonance Imaging (fMRI) acoustic noise by decomposing it into dominant periodic components and residual random components. The periodicity of fMRI acoustic noise is exploited by using linear prediction (LP) filtering to achieve the signal decomposition. A hybrid combination of adaptive filters, Recursive Least Squares (RLS) and Normalized Least Mean Squares (NLMS), is then used to control each component separately. The performance of the proposed FANC system is analyzed, and noise attenuation levels (NAL) of up to 32.27 dB obtained in simulation are presented, confirming the effectiveness of the proposed FANC method. PMID:25570676
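
    A minimal sketch of the NLMS branch of the hybrid filter, written as a generic reference-based adaptive canceller; the filter length, step size and toy signals are assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal NLMS adaptive filter sketch (one branch of the RLS/NLMS hybrid).

def nlms(reference, desired, n_taps=32, mu=0.5, eps=1e-6):
    w = np.zeros(n_taps)
    output = np.zeros_like(desired)
    error = np.zeros_like(desired)
    for n in range(n_taps - 1, len(desired)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # current and past reference samples
        output[n] = w @ x
        error[n] = desired[n] - output[n]
        w += mu * error[n] * x / (eps + x @ x)      # normalized LMS update
    return output, error

rng = np.random.default_rng(6)
noise_ref = rng.standard_normal(5000)                          # measurable noise reference
speech = np.sin(2 * np.pi * 0.01 * np.arange(5000))
noise_path = np.convolve(noise_ref, [0.5, -0.3, 0.2])[:5000]   # noise after a short FIR path
corrupted = speech + noise_path

_, residual = nlms(noise_ref, corrupted)
# After convergence the residual approaches the clean speech component.
print(np.var(corrupted), np.var(residual[500:]))
```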

  8. Decomposition of MATLAB script for FPGA implementation of real time simulation algorithms for LLRF system in European XFEL

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Pucyk, P.; Pozniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The European XFEL project uses the LLRF system for stabilization of the vector sum of the RF field in 32 superconducting cavities. Dedicated, high-performance photonics, electronics and software were built. To provide high system availability, an appropriate test environment as well as diagnostics was designed. A real-time simulation subsystem was designed, based on dedicated electronics using FPGA technology and robust simulation models implemented in VHDL. The paper presents the architecture of the system framework, which allows for easy and flexible conversion of MATLAB language structures directly into an FPGA-implementable grid of parameterized, simple DSP processors. The decomposition of the MATLAB grammar is described, as well as the optimization process and FPGA implementation issues.

  9. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high-contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separations over the algorithm previously used.
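
    A minimal sketch of the core least-squares step: within one image subsection, the reference images are combined linearly to best reproduce the target and then subtracted. The subsection partitioning, ADI frame selection and flux-conservation details are omitted, and the arrays are synthetic.

```python
import numpy as np

# Sketch: optimal linear combination of reference images within one subsection,
# found by least squares, then subtracted from the target subsection.

def optimal_subtraction(target_section, reference_sections):
    """target_section: (npix,), reference_sections: (nref, npix)."""
    A = reference_sections.T                       # columns are reference images
    coeffs, *_ = np.linalg.lstsq(A, target_section, rcond=None)
    return target_section - A @ coeffs

rng = np.random.default_rng(7)
nref, npix = 20, 500
refs = rng.standard_normal((nref, npix))
target = 0.7 * refs[0] + 0.2 * refs[3] + 0.01 * rng.standard_normal(npix)
residual = optimal_subtraction(target, refs)
print(np.std(target), np.std(residual))            # residual noise is strongly reduced
```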

  10. Melting point prediction employing k-nearest neighbor algorithms and genetic parameter optimization.

    PubMed

    Nigsch, Florian; Bender, Andreas; van Buuren, Bernd; Tissen, Jos; Nigsch, Eduard; Mitchell, John B O

    2006-01-01

    We have applied the k-nearest neighbor (kNN) modeling technique to the prediction of melting points. A data set of 4119 diverse organic molecules (data set 1) and an additional set of 277 drugs (data set 2) were used to compare performance in different regions of chemical space, and we investigated the influence of the number of nearest neighbors using different types of molecular descriptors. To compute the prediction on the basis of the melting temperatures of the nearest neighbors, we used four different methods (arithmetic and geometric average, inverse distance weighting, and exponential weighting), of which the exponential weighting scheme yielded the best results. We assessed our model via a 25-fold Monte Carlo cross-validation (with approximately 30% of the total data as a test set) and optimized it using a genetic algorithm. Predictions for drugs based on drugs (separate training and test sets each taken from data set 2) were found to be considerably better [root-mean-squared error (RMSE)=46.3 degrees C, r2=0.30] than those based on nondrugs (prediction of data set 2 based on the training set from data set 1, RMSE=50.3 degrees C, r2=0.20). The optimized model yields an average RMSE as low as 46.2 degrees C (r2=0.49) for data set 1, and an average RMSE of 42.2 degrees C (r2=0.42) for data set 2. It is shown that the kNN method inherently introduces a systematic error in melting point prediction. Much of the remaining error can be attributed to the lack of information about interactions in the liquid state, which are not well-captured by molecular descriptors. PMID:17125183
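
    A minimal sketch of kNN regression with an exponential distance-weighting scheme, using scikit-learn's KNeighborsRegressor with a custom weight callable; the descriptors, targets and decay constant are placeholders rather than the study's data or settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Sketch of kNN melting-point regression with exponential distance weighting.

def exp_weights(distances):
    return np.exp(-distances)                      # closer neighbours count more

rng = np.random.default_rng(8)
X = rng.standard_normal((500, 20))                 # toy molecular descriptors
y = X[:, 0] * 30 + 150 + 5 * rng.standard_normal(500)   # toy melting points (deg C)

model = KNeighborsRegressor(n_neighbors=10, weights=exp_weights)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(np.mean((pred - y[400:]) ** 2))
print("RMSE:", rmse)
```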

  11. Nested Taylor decomposition in multivariate function decomposition

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2014-12-01

    The Fluctuationlessness approximation applied to the remainder term of a Taylor decomposition expressed in integral form has already been used in many articles, and some forms of multi-point Taylor expansion have also been considered. This work is, in a sense, a combination of the two: the Taylor decomposition of a function is taken with the remainder expressed in integral form, then the integrand is decomposed into a Taylor expansion again, not necessarily around the same point as the first decomposition, yielding a second remainder. After the necessary change of variables and conversion of the integration limits to the universal [0,1] interval, a multiple integration system formed by a multivariate function is obtained. The Fluctuationlessness approximation is then applied to each of these integrals one by one, giving better results than the single-node Taylor decomposition to which the Fluctuationlessness approximation is applied.
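
    For reference, the single-node Taylor decomposition with integral remainder that the nested construction starts from, together with the change of variables that maps the remainder onto the universal [0,1] interval (standard formulas, not the paper's nested expression):

```latex
f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k
     + \underbrace{\int_a^x \frac{(x-t)^n}{n!}\, f^{(n+1)}(t)\,\mathrm{d}t}_{R_n(x)},
\qquad
R_n(x) \;\overset{t=a+(x-a)u}{=}\;
\frac{(x-a)^{n+1}}{n!}\int_0^1 (1-u)^n\, f^{(n+1)}\!\bigl(a+(x-a)u\bigr)\,\mathrm{d}u .
```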

  12. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs are following the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data shows the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives. During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.

  13. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations that are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  14. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710

  15. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis

    NASA Astrophysics Data System (ADS)

    Portes, Leonardo L.; Aguirre, Luis A.

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011), 10.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA.
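
    For comparison with the structured rotation described above, a compact sketch of Kaiser's unstructured varimax (the UVR baseline) computed via repeated SVDs; this is the standard textbook algorithm, not the SVR or the Python code referenced in the paper's Appendix.

```python
import numpy as np

# Kaiser's (unstructured) varimax rotation of a loading matrix Phi, computed by
# repeated SVDs of the gradient-like matrix. This is the UVR baseline, not the
# structured varimax (SVR) of Groth and Ghil.

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (Lam ** 3 - (gamma / p) * Lam @ np.diag(np.sum(Lam ** 2, axis=0)))
        )
        R = u @ vt                       # best orthogonal rotation at this step
        d_new = np.sum(s)
        if d_new < d * (1 + tol):        # stop when the criterion no longer improves
            break
        d = d_new
    return Phi @ R, R

rng = np.random.default_rng(10)
loadings = rng.standard_normal((20, 3))  # toy eigenvector/loading matrix
rotated, R = varimax(loadings)
print(rotated.shape, np.allclose(R.T @ R, np.eye(3)))
```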

  16. Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis.

    PubMed

    Portes, Leonardo L; Aguirre, Luis A

    2016-05-01

    Groth and Ghil [Phys. Rev. E 84, 036206 (2011)PLEEE81539-375510.1103/PhysRevE.84.036206] developed a modified varimax rotation aimed at enhancing the ability of the multivariate singular spectrum analysis (M-SSA) to characterize phase synchronization in systems of coupled chaotic oscillators. Due to the special structure of the M-SSA eigenvectors, the modification proposed by Groth and Ghil imposes a constraint in the rotation of blocks of components associated with the different subsystems. Accordingly, here we call it a structured varimax rotation (SVR). The SVR was presented as successive pairwise rotations of the eigenvectors. The aim of this paper is threefold. First, we develop a closed matrix formulation for the entire family of structured orthomax rotation criteria, for which the SVR is a special case. Second, this matrix approach is used to enable the use of known singular value algorithms for fast computation, allowing a simultaneous rotation of the M-SSA eigenvectors (a Python code is provided in the Appendix). This could be critical in the characterization of phase synchronization phenomena in large real systems of coupled oscillators. Furthermore, the closed algebraic matrix formulation could be used in theoretical studies of the (modified) M-SSA approach. Third, we illustrate the use of the proposed singular value algorithm for the SVR in the context of the two benchmark examples of Groth and Ghil: the Rössler system in the chaotic (i) phase-coherent and (ii) funnel regimes. Comparison with the results obtained with Kaiser's original (unstructured) varimax rotation (UVR) reveals that both SVR and UVR give the same result for the phase-coherent scenario, but for the more complex behavior (ii) only the SVR improves on the M-SSA. PMID:27300889

  17. Building optimal regression tree by ant colony system-genetic algorithm: application to modeling of melting points.

    PubMed

    Hemmateenejad, Bahram; Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Akhond, Morteza

    2011-10-17

    The classification and regression trees (CART) approach has the advantage of being able to handle large data sets and yield readily interpretable models. A conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling the melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was done by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of approximately 4173 structures and their melting points was used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure. PMID:21907021

  18. a New Control Points Based Geometric Correction Algorithm for Airborne Push Broom Scanner Images Without On-Board Data

    NASA Astrophysics Data System (ADS)

    Strakhov, P.; Badasen, E.; Shurygin, B.; Kondranin, T.

    2016-06-01

    Push broom scanners, such as video spectrometers (also called hyperspectral sensors), are widely used at present. The use of scanned images requires accurate geometric correction, which becomes complicated when the imaging platform is airborne. This work contains a detailed description of a new algorithm developed for processing such images. The algorithm requires only user-provided control points and is able to correct distortions caused by yaw, flight speed and height changes. It was tested on two series of airborne images and yielded RMS error values on the order of 7 meters (3-6 source image pixels), as compared to 13 meters for polynomial-based correction.

  19. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantages of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  20. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  1. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    PubMed

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  2. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  3. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D. E-mail: emanouil@parallel.bas.bg Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  4. A uniform energy consumption algorithm for wireless sensor and actuator networks based on dynamic polling point selection.

    PubMed

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2013-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  5. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  6. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    NASA Astrophysics Data System (ADS)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching and with partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SP) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar4-Ar30) in which argon atoms interact via the Lennard-Jones potential, and (iii) ArmX (m=12) clusters where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Arn clusters.
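
    To make the floating-point GA idea above concrete, the sketch below evolves real-valued coordinate strings to find a low-energy configuration of a small Lennard-Jones cluster in reduced units. It is a toy illustration under assumed GA settings; the coordinate stretching and core/outer partitioning described in the record are not implemented.

        # Sketch: floating-point genetic algorithm searching for a low-energy
        # configuration of a small Lennard-Jones cluster (reduced units).
        import numpy as np

        rng = np.random.default_rng(1)
        N_ATOMS, POP, GENS = 4, 60, 300

        def lj_energy(x):
            """Total Lennard-Jones energy of a flattened (N, 3) coordinate vector."""
            p = x.reshape(-1, 3)
            d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
            r = d[np.triu_indices(len(p), k=1)]
            return np.sum(4.0 * (r**-12 - r**-6))

        def tournament(pop, fit, k=3):
            idx = rng.integers(0, len(pop), k)
            return pop[idx[np.argmin(fit[idx])]]

        pop = rng.uniform(-1.5, 1.5, size=(POP, 3 * N_ATOMS))
        for gen in range(GENS):
            fit = np.array([lj_energy(ind) for ind in pop])
            new = [pop[np.argmin(fit)].copy()]               # elitism
            while len(new) < POP:
                a, b = tournament(pop, fit), tournament(pop, fit)
                alpha = rng.random(3 * N_ATOMS)              # blend crossover
                child = alpha * a + (1.0 - alpha) * b
                child += rng.normal(0.0, 0.05, child.shape)  # Gaussian mutation
                new.append(child)
            pop = np.array(new)

        print("best energy found:", min(lj_energy(ind) for ind in pop))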

  7. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a single dataset whose particularities could influence the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. Datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  8. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options owing to its abundance and accessibility. Due to its intermittent nature, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. Firstly, a practical PV system model is studied, determining the series and shunt resistances which are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed quantity, using input impedance conversion to adjust the operating voltage. Based on this control strategy, the adaptive duty ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulation of the PV model, the boost converter control strategy and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and detailed analysis of sharp insolation changes, low insolation conditions and continuous insolation variation.
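
    The sketch below shows a generic adaptive-step perturb-and-observe loop acting on a duty ratio. Both the power-vs-duty function and the step-size rule are illustrative stand-ins chosen for this sketch, not the thesis's PV model or its exact adaptation law.

        # Sketch: generic adaptive-step Perturb & Observe MPPT on the duty ratio
        # of a boost converter. Toy power curve and step rule, illustrative only.
        def pv_power(duty):
            """Toy stand-in for measured PV output power vs. converter duty ratio."""
            return max(0.0, 400.0 * duty * (1.0 - duty) - 20.0 * (duty - 0.6) ** 2)

        def adaptive_po(steps=60, d=0.30, d_prev=0.28, max_step=0.02, k=0.002):
            p_prev = pv_power(d_prev)
            for _ in range(steps):
                p = pv_power(d)
                dP, dD = p - p_prev, d - d_prev
                step = min(max_step, k * abs(dP) + 1e-4)       # shrink step near the MPP
                sign_dD = 1.0 if dD >= 0 else -1.0
                direction = sign_dD if dP >= 0 else -sign_dD   # classic P&O decision
                d_prev, p_prev = d, p
                d = min(max(d + direction * step, 0.05), 0.95)
            return d, pv_power(d)

        print("tracked duty ratio and power:", adaptive_po())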

  9. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  10. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  11. An Algorithm for Correcting CTE Loss in Spectrophotometry of Point Sources with the STIS CCD

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph; Goudfrooij, Paul

    2003-08-01

    The correction for the change in sensitivity with time for the STIS CCD modes is complicated by the gradual loss of charge transfer efficiency (CTE) of the CCD. The amount of this CTE loss depends on time in orbit, the location on the CCD chip with respect to the readout amplifier, the stellar signal strength, and the background level. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (tungsten lamp images taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual stellar spectra in the first order CCD modes. The main complication is the quantification of the roll-off of the CTE losses for weak stellar signals on non-zero backgrounds. This roll-off term is determined by relatively short exposures of primary standard stars along with the G750L series of properly exposed AGK+81D266 monitoring data, where the observed changes in response over time are primarily CTE losses and not sensitivity degradations. After accounting for CTE losses and after an iterative determination of the optical system throughput losses, the CTE correction algorithm is verified by comparing G230L MAMA fluxes of faint standard stars with G430L fluxes in the overlap region around 3000Å. For spectra at the standard reference position at the CCD center, CTE losses as big as 20% are corrected to within 1% at high signal levels and with a precision of ~2% at ~100 electrons after application of the algorithm presented here.

  12. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    SciTech Connect

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill; Chand, Kyle

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
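
    The sketch below illustrates the general idea of error-controlled snapshot selection: a new snapshot is retained only when its projection error onto the current basis exceeds a tolerance, and the thin SVD is then updated from the retained basis plus the new snapshot. The tolerance, the update rule and the toy data are assumptions for illustration, not the algorithm or error estimator of the record.

        # Sketch: error-controlled snapshot selection for POD with an incremental
        # thin-SVD update. Illustration of the idea only, not the paper's method.
        import numpy as np

        def adaptive_pod(snapshots, tol=1e-3):
            U, s = None, None
            for x in snapshots:                       # stream snapshots one by one
                x = np.asarray(x, dtype=float).reshape(-1, 1)
                if U is None:
                    nrm = np.linalg.norm(x)
                    if nrm > 0:
                        U, s = x / nrm, np.array([nrm])
                    continue
                residual = x - U @ (U.T @ x)          # projection error
                if np.linalg.norm(residual) <= tol * np.linalg.norm(x):
                    continue                          # snapshot is already well represented
                # Low-memory update: SVD of [U*diag(s), x] instead of all snapshots.
                M = np.hstack([U * s, x])
                Unew, snew, _ = np.linalg.svd(M, full_matrices=False)
                keep = snew > tol * snew[0]
                U, s = Unew[:, keep], snew[keep]
            return U, s

        # Toy usage: snapshots of a slowly varying 1-D field.
        t = np.linspace(0.0, 1.0, 50)
        grid = np.linspace(0.0, 1.0, 200)
        snaps = [np.sin(np.pi * grid) * np.cos(2 * np.pi * ti) + 0.1 * ti * grid for ti in t]
        U, s = adaptive_pod(snaps, tol=1e-4)
        print("retained POD modes:", U.shape[1])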

  13. An Evaluation of Vegetation Filtering Algorithms for Improved Snow Depth Estimation from Point Cloud Observations in Mountain Environments

    NASA Astrophysics Data System (ADS)

    Vanderjagt, B. J.; Durand, M. T.; Lucieer, A.; Wallace, L.

    2014-12-01

    High-resolution snow depth measurements are possible through bare-earth (BE) differencing of point cloud datasets obtained using LiDAR and photogrammetry during snow-free and snow-covered conditions. The accuracy and resolution of these snow depth measurements are desirable in mountain environments, in which ground measurements are dangerous and difficult to perform, and other remote sensing techniques are often characterized by large errors and uncertainties due to variable topography, vegetation, and snow properties. BE ground filtering algorithms make different assumptions about ground characteristics to differentiate between ground and non-ground features. Because of this, ground surfaces may have unique characteristics that confound ground filters depending on the location and terrain conditions. These include low-lying shrubs (<1 m), areas with high topographic relief, and areas with high surface roughness. We evaluate several different algorithms, including lowest point, kriging, and more sophisticated splining techniques such as the Multiscale Curvature Classification (MCC), to resolve snow depths. Understanding how these factors affect BE surface models and thus snow depth measurements is a valuable contribution towards improving the processing protocols associated with these relatively new snow observation techniques. We test the different BE filtering algorithms using LiDAR and photogrammetric measurements taken from an Unmanned Aerial Vehicle (UAV) in Southwest Tasmania, Australia during the winter and spring of 2013. The study area is characterized by sloping, uneven terrain and different types of vegetation, including eucalyptus and conifer trees as well as dense shrubs varying in height from 0.3-1.5 meters. Initial snow depth measurements using the unfiltered point cloud measurements are characterized by large errors (~20-90 cm) due to the dense vegetation. Using filtering techniques instead of raw differencing improves the estimation of snow depth in
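
    As a toy illustration of the bare-earth differencing idea described above, the sketch below applies the simplest of the mentioned filters (lowest point per grid cell) to two synthetic point clouds and differences the resulting surfaces. It is not the MCC algorithm or the authors' processing chain; the cell size and synthetic data are assumptions.

        # Sketch: "lowest point per grid cell" bare-earth filter and snow-depth
        # differencing of snow-off vs. snow-on clouds. Illustrative only.
        import numpy as np

        def lowest_point_surface(points, cell=1.0):
            """points: (N, 3) array of x, y, z. Returns dict {(ix, iy): min z}."""
            ix = np.floor(points[:, 0] / cell).astype(int)
            iy = np.floor(points[:, 1] / cell).astype(int)
            surface = {}
            for i, j, z in zip(ix, iy, points[:, 2]):
                key = (i, j)
                if key not in surface or z < surface[key]:
                    surface[key] = z
            return surface

        def snow_depth(snow_off, snow_on, cell=1.0):
            ground = lowest_point_surface(snow_off, cell)
            snow = lowest_point_surface(snow_on, cell)
            return {k: snow[k] - ground[k] for k in set(ground) & set(snow)}

        # Toy usage: flat ground, 0.4 m of snow, exponential "vegetation" returns above.
        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 20, size=(5000, 2))
        off = np.column_stack([xy, 0.0 + rng.exponential(0.3, 5000)])
        on = np.column_stack([xy, 0.4 + rng.exponential(0.3, 5000)])
        depths = snow_depth(off, on)
        print("median estimated depth:", np.median(list(depths.values())))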

  14. A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, the registration of point clouds from different epochs might converge to local minima due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs

  15. An Automatic Algorithm for Minimizing Anomalies and Discrepancies in Point Clouds Acquired by Laser Scanning Technique

    NASA Astrophysics Data System (ADS)

    Bordin, Fabiane; Gonzaga, Luiz, Jr.; Galhardo Muller, Fabricio; Veronez, Mauricio Roberto; Scaioni, Marco

    2016-06-01

    The laser scanning technique from airborne and land platforms has been largely used for collecting 3D data in large volumes in the field of geosciences. Furthermore, the laser pulse intensity has been widely exploited to analyze and classify rocks and biomass, and for carbon storage estimation. In general, a laser beam is emitted, collides with targets, and only a percentage of the emitted beam returns, according to the intrinsic properties of each target. Also, due to interference and partial collisions, the laser return intensity can be incorrect, introducing serious errors in classification and/or estimation processes. To address this problem and avoid misclassification and estimation errors, we have proposed a new algorithm to correct the return intensity for laser scanning sensors. Different case studies have been used to evaluate and validate the proposed approach.

  16. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  17. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability.

  18. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES Beta

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  19. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  20. ParaStream: A parallel streaming Delaunay triangulation algorithm for LiDAR points on multicore architectures

    NASA Astrophysics Data System (ADS)

    Wu, Huayi; Guan, Xuefeng; Gong, Jianya

    2011-09-01

    This paper presents a robust parallel Delaunay triangulation algorithm called ParaStream for processing billions of points from nonoverlapped block LiDAR files. The algorithm targets ubiquitous multicore architectures. ParaStream integrates streaming computation with a traditional divide-and-conquer scheme, in which additional erase steps are implemented to reduce the runtime memory footprint. Furthermore, a kd-tree-based dynamic schedule strategy is also proposed to distribute triangulation and merging work onto the processor cores for improved load balance. ParaStream exploits most of the computing power of multicore platforms through parallel computing, demonstrating qualities of high data throughput as well as a low memory footprint. Experiments on a 2-Way-Quad-Core Intel Xeon platform show that ParaStream can triangulate approximately one billion LiDAR points (16.4 GB) in about 16 min with only 600 MB physical memory. The total speedup (including I/O time) is about 6.62 with 8 concurrent threads.

  1. Decomposing Nekrasov decomposition

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Zenkevich, Y.

    2016-02-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when the conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how the Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  2. Industrial experience of process identification and set-point decision algorithm in a full-scale treatment plant.

    PubMed

    Yoo, Changkyoo; Kim, Min Han

    2009-06-01

    This paper presents industrial experience of process identification, monitoring, and control in a full-scale wastewater treatment plant. The objectives of this study were (1) to apply and compare different process-identification methods of proportional-integral-derivative (PID) autotuning for stable dissolved oxygen (DO) control, (2) to implement a process monitoring method that estimates the respiration rate simultaneously during the process-identification step, and (3) to propose a simple set-point decision algorithm for determining the appropriate set point of the DO controller for optimal operation of the aeration basin. The proposed method was evaluated in the industrial wastewater treatment facility of an iron- and steel-making plant. Among the process-identification methods, the control signal of the controller's set-point change was best for identifying low-frequency information and enhancing the robustness to low-frequency disturbances. The combined automatic control and set-point decision method reduced the total electricity consumption by 5% and the electricity cost by 15% compared to the fixed-gain PID controller, when considering only the surface aerators. Moreover, as a result of improved control performance, the fluctuation of effluent quality decreased and overall effluent water quality was better. PMID:19428173

  3. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquake are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.

  4. Therapy Algorithm for Portal Vein Thrombosis in Liver Cirrhosis: The Internist's Point of View

    PubMed Central

    Rössle, Martin; Bausch, Birke; Klinger, Christoph

    2014-01-01

    Background Treatment of non-malignant portal vein thrombosis (PVT) in patients with cirrhosis has been neglected in the past because of the fear of bleeding complications when using anticoagulation and due to the technical difficulties associated with the implantation of the transjugular intrahepatic portosystemic shunt (TIPS). However, PVT has a negative impact on outcome and compromises liver transplantation, warranting treatment by using anticoagulation and TIPS. Methods This review considers studies on the treatment of PVT in cirrhosis published in the last 10 years. Unfortunately, many of these studies are limited by their retrospective design and a small sample size. Results Anticoagulation using low-molecular-weight heparin (LMWH) or vitamin K antagonists is effective in the treatment of patients with limited and recent PVT, resulting in a recanalization in up to 50% of the patients. TIPS (plus local measures) results in a recanalization of up to 100% and reduces the rebleeding rate considerably in patients with recent or chronic PVT. Conclusion Based on the presently limited knowledge, a therapy algorithm is suggested favouring the TIPS as a first-line treatment for PVT in patients with symptomatic portal hypertension. Patients with thus far asymptomatic portal hypertension may first receive anticoagulation, preferably using LMWH. If these patients have a condition where anticoagulation is not promising (complete, extended, chronic PVT) or ineffective, or if they are candidates for liver transplantation, the TIPS may be implanted without delay. PMID:26288607

  5. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current increased widespread interest in the development and applications of micro/nanosatellites, it was found that we needed to design a small high accuracy satellite attitude determination system, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A Sun sensor + magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy, and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small high accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixels sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. Then the sun angles can be derived by analyzing the aperture image location on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  6. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current increased widespread interest in the development and applications of micro/nanosatellites, it was found that we needed to design a small high accuracy satellite attitude determination system, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A Sun sensor + magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy, and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small high accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixels sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When the sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. Then the sun angles can be derived by analyzing the aperture image location on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
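
    The two records above describe centroiding sun-spot images with a multi-point mean-shift procedure. The sketch below shows a generic intensity-weighted mean-shift refinement of a single spot centroid on a synthetic image; it is not the FMMS algorithm itself, and the window size, stopping rule and test image are assumptions.

        # Sketch: intensity-weighted mean-shift refinement of a spot centroid.
        # Generic illustration, not the paper's FMMS algorithm.
        import numpy as np

        def meanshift_centroid(img, x0, y0, win=5, iters=20, eps=1e-4):
            x, y = float(x0), float(y0)
            for _ in range(iters):
                xi = np.arange(max(0, int(x) - win), min(img.shape[1], int(x) + win + 1))
                yi = np.arange(max(0, int(y) - win), min(img.shape[0], int(y) + win + 1))
                X, Y = np.meshgrid(xi, yi)
                W = img[Y, X].astype(float)          # intensities in the window
                if W.sum() == 0:
                    break
                xn, yn = (W * X).sum() / W.sum(), (W * Y).sum() / W.sum()
                if abs(xn - x) < eps and abs(yn - y) < eps:
                    x, y = xn, yn
                    break
                x, y = xn, yn                        # shift the window and repeat
            return x, y

        # Toy usage: a Gaussian spot centred at (30.3, 41.7) on a noisy background.
        yy, xx = np.mgrid[0:80, 0:80]
        img = (np.exp(-((xx - 30.3) ** 2 + (yy - 41.7) ** 2) / 8.0)
               + 0.01 * np.random.default_rng(0).random((80, 80)))
        print(meanshift_centroid(img, 28, 44))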

  7. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data

    PubMed Central

    Banda, Jorge A.; Haydel, K. Farish; Davila, Tania; Desai, Manisha; Haskell, William L.; Matheson, Donna; Robinson, Thomas N.

    2016-01-01

    Objective To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). Methods 268 7–11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4–7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. Results WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). Conclusions The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy. PMID:26938240
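
    The sketch below illustrates the kind of processing the study examines: re-aggregating 1-second counts to different epoch lengths and applying intensity cut-points scaled to the epoch. The cut-point values are hypothetical placeholders rather than any published set, and the naive "scale by epoch length" conversion is exactly the practice the study cautions against.

        # Sketch: epoch re-aggregation and cut-point classification of counts.
        # Hypothetical cut-points; illustrative only.
        import numpy as np

        def to_epochs(counts_1s, epoch_s):
            n = len(counts_1s) // epoch_s * epoch_s
            return counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)

        def classify(epoch_counts, epoch_s, cutpoints_60s=(100, 2000, 4000)):
            # Scale 60-s cut-points to the chosen epoch length (common, but lossy).
            sb, mpa, vpa = (c * epoch_s / 60.0 for c in cutpoints_60s)
            cats = np.select(
                [epoch_counts < sb, epoch_counts < mpa, epoch_counts < vpa],
                ["SB", "LPA", "MPA"], default="VPA")
            return {c: round((cats == c).mean(), 3) for c in ("SB", "LPA", "MPA", "VPA")}

        counts = np.random.default_rng(0).poisson(30, size=6 * 3600)  # 6 h of 1-s counts
        for epoch in (1, 15, 60):
            print(epoch, "s epochs:", classify(to_epochs(counts, epoch), epoch))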

  8. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  9. Ozone decomposition.

    PubMed

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho; Zaikov, Gennadi E

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order importance. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880

  10. [Comparative Study on the Three Algorithms of T-wave End Detection: Wavelet Method, Cumulative Points Area Method and Trapezium Area Method].

    PubMed

    Li, Chengtao; Zhang, Yongliang; He, Zijun; Ye, Jun; Hu, Fusong; Ma, Zuchang; Wang, Jingzhi

    2015-12-01

    In order to find the most suitable algorithm of T-wave end point detection for clinical detection, we tested three methods that are not solely dependent on the threshold value of T-wave end point detection, i.e. the wavelet method, the cumulative points area method and the trapezium area method, in the PhysioNet QT database (20 records with 3 569 beats each). We analyzed and compared their detection performance. First, we used the wavelet method to locate the QRS complex and T-wave. Then we divided the T-wave into four morphologies, and we used the three algorithms mentioned above to detect the T-wave end point. Finally, we proposed an adaptive selection T-wave end point detection algorithm based on T-wave morphology and tested it with experiments. The results showed that this adaptive selection method had better detection performance than any single T-wave end point detection algorithm. The sensitivity, positive predictive value and average time error were 98.93%, 99.11% and (-2.33 ± 19.70) ms, respectively. Consequently, it can be concluded that the adaptive selection algorithm based on T-wave morphology improves the efficiency of T-wave end point detection. PMID:27079084

  11. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO), launched in 2010, is a NASA-designed spacecraft built to study the Sun. SDO has tight pointing requirements and instruments that are sensitive to spacecraft jitter. Two High Gain Antennas (HGAs) are used to continuously send science data to a dedicated ground station. Preflight analysis showed that jitter resulting from motion of the HGAs was a cause for concern. Three jitter mitigation techniques were developed and implemented to overcome effects of jitter from different sources. These mitigation techniques include: the random step delay, stagger stepping, and the No Step Request (NSR). During the commissioning phase of the mission, a jitter test was performed onboard the spacecraft, in which various sources of jitter were examined to determine their level of effect on the instruments. During the HGA portion of the test, the jitter amplitudes from the single step of a gimbal were examined, as well as the amplitudes due to the execution of various gimbal rates. The jitter levels were compared with the gimbal jitter allocations for each instrument. The decision was made to consider implementing two of the jitter mitigating techniques on board the spacecraft: stagger stepping and the NSR. Flight data with and without jitter mitigation enabled was examined, and it is shown in this paper that HGA tracking is not negatively impacted with the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. The HGA-induced jitter on the instruments is well within the jitter requirement when the stagger step and NSR mitigation options are enabled.

  12. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as they function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods discussed. © 1992.

  13. DHARMA - Discriminant hyperplane abstracting residuals minimization algorithm for separating clusters with fuzzy boundaries. [data points pattern recognition technique

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    Learning of discriminant hyperplanes in imperfectly supervised or unsupervised training sample sets with unreliably labeled samples along the fuzzy joint boundaries between sample clusters is discussed, with the discriminant hyperplane designed to be a least-squares fit to the unreliably labeled data points. (Samples along the fuzzy boundary jump back and forth from one cluster to the other in recursive cluster stabilization and are considered unreliably labeled.) Minimization of the distances of these unreliably labeled samples from the hyperplanes does not sacrifice the ability to discriminate between classes represented by reliably labeled subsets of samples. An equivalent unconstrained linear inequality problem is formulated and algorithms for its solution are indicated. Landsat earth sensing data were used in confirming the validity and computational feasibility of the approach, which should be useful in deriving discriminant hyperplanes separating clusters with fuzzy boundaries, given supervised training sample sets with unreliably labeled boundary samples.
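
    The least-squares-fit idea described above can be illustrated with a minimal example: samples receive targets of +1 or -1 according to their (possibly unreliable) labels, and the hyperplane is the least-squares solution of the resulting linear system. This is a generic illustration of that idea, not the DHARMA algorithm itself.

        # Sketch: fitting a discriminant hyperplane w.x + b = 0 by least squares.
        # Generic illustration only, not DHARMA.
        import numpy as np

        def ls_hyperplane(X, labels):
            """X: (N, d) samples; labels: (N,) in {+1, -1}. Returns (w, b)."""
            A = np.hstack([X, np.ones((len(X), 1))])      # augment with a bias column
            coef, *_ = np.linalg.lstsq(A, labels.astype(float), rcond=None)
            return coef[:-1], coef[-1]

        def classify(X, w, b):
            return np.sign(X @ w + b)

        # Toy usage: two noisy clusters with a fuzzy boundary.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
        y = np.r_[np.full(100, -1), np.full(100, +1)]
        w, b = ls_hyperplane(X, y)
        print("training accuracy:", (classify(X, w, b) == y).mean())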

  14. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or different morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. To enable tree crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic, non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a plane-scattered data point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments are carried out on 22 common tree species of Beijing, China. The results show that the correlation coefficient of crown projection area between Av calculated by the new method and A4 from the conventional method reaches 0.964 (p<0.01), and the correlation coefficient of tree crown volume between V(VC) derived from the new method and V(C) from the formula of a regular body is 0.960 (p<0.001). The results also show that the average of V(C) is smaller than that of V(VC) by 8.03%, and the average of A4 is larger than that of Av by 25.5%. Taking Av and V(VC) as true values, the deviations of the new method could be attributed to irregularity of the crowns' silhouettes. Different morphological characteristics of tree crowns led to measurement error in forest sample plot surveys. Based on the results, the paper proposes that: (1) the use of eight-point or sixteen-point projection with
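
    The two quantities discussed above can be approximated directly from a point cloud: the projection area as the area of the 2D convex hull of the points projected onto the ground plane, and the volume by accumulating per-slice hull areas. The sketch below shows that generic computation on synthetic data; it is not the paper's implementation, and the slice thickness is an assumption.

        # Sketch: crown projection area from the 2-D convex hull of the x-y
        # projection, plus a slice-accumulation volume estimate. Illustrative only.
        import numpy as np
        from scipy.spatial import ConvexHull

        def projection_area(points):
            """points: (N, 3). Area of the convex hull of the x-y projection."""
            hull = ConvexHull(points[:, :2])
            return hull.volume            # for 2-D hulls, .volume is the area

        def slice_volume(points, dz=0.25):
            z = points[:, 2]
            vol = 0.0
            for z0 in np.arange(z.min(), z.max(), dz):
                sl = points[(z >= z0) & (z < z0 + dz)]
                if len(sl) >= 3:
                    vol += ConvexHull(sl[:, :2]).volume * dz
            return vol

        # Toy usage: a synthetic ellipsoidal "crown" point cloud.
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(5000, 3)) * np.array([2.0, 2.0, 3.0])
        print("projection area:", projection_area(pts))
        print("slice-accumulated volume:", slice_volume(pts))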

  15. Woodland Decomposition.

    ERIC Educational Resources Information Center

    Napier, J.

    1988-01-01

    Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW)

  16. CD4 Count Outperforms World Health Organization Clinical Algorithm for Point-of-Care HIV Diagnosis among Hospitalized HIV-exposed Malawian Infants

    PubMed Central

    Maliwichi, Madalitso; Rosenberg, Nora E.; Macfie, Rebekah; Olson, Dan; Hoffman, Irving; van der Horst, Charles M.; Kazembe, Peter N.; Hosseinipour, Mina C.; McCollum, Eric D.

    2014-01-01

    Objective To determine, for the WHO algorithm for point-of-care diagnosis of HIV infection, the agreement levels between pediatricians and non-physician clinicians, and to compare sensitivity and specificity profiles of the WHO algorithm and different CD4 thresholds against HIV PCR testing in hospitalized Malawian infants. Methods In 2011, hospitalized HIV-exposed infants <12 months in Lilongwe, Malawi were evaluated independently with the WHO algorithm by both a pediatrician and clinical officer. Blood was collected for CD4 and molecular HIV testing (DNA or RNA PCR). Using molecular testing as the reference, sensitivity, specificity, and positive predictive value (PPV) were determined for the WHO algorithm and CD4 count thresholds of 1500 and 2000 cells/mm3 by pediatricians and clinical officers. Results We enrolled 166 infants (50% female, 34% <2 months, 37% HIV-infected). Sensitivity was higher using CD4 thresholds (<1500, 80%; <2000, 95%) than with the algorithm (physicians, 57%; clinical officers, 71%). Specificity was comparable for CD4 thresholds (<1500, 68%, <2000, 50%) and the algorithm (pediatricians, 55%, clinical officers, 50%). The positive predictive values were slightly better using CD4 thresholds (<1500, 59%, <2000, 52%) than the algorithm (pediatricians, 43%, clinical officers 45%) at this prevalence. Conclusion Performance by the WHO algorithm and CD4 thresholds resulted in many misclassifications. Point-of-care CD4 thresholds of <1500 cells/mm3 or <2000 cells/mm3 could identify more HIV-infected infants with fewer false positives than the algorithm. However, a point-of-care option with better performance characteristics is needed for accurate, timely HIV diagnosis. PMID:24754543

  17. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    Energy Science and Technology Software Center (ESTSC)

    2012-05-31

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
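
    INDDGO solves the maximum weighted independent set problem by dynamic programming over tree decompositions of general graphs. As a much-reduced illustration of the bottom-up table-passing idea, the sketch below runs the classic dynamic program on an ordinary tree; it is not INDDGO's code and does not handle general tree decompositions.

        # Sketch: dynamic programming for Maximum Weighted Independent Set on a
        # tree (a simplified stand-in for DP over a tree decomposition).
        from collections import defaultdict

        def mwis_tree(n, edges, weight, root=0):
            adj = defaultdict(list)
            for a, b in edges:
                adj[a].append(b)
                adj[b].append(a)
            children, parent, order = defaultdict(list), {root: None}, [root]
            for u in order:                              # BFS to root the tree
                for v in adj[u]:
                    if v not in parent:
                        parent[v] = u
                        children[u].append(v)
                        order.append(v)
            incl = [weight[u] for u in range(n)]         # best value if u is included
            excl = [0] * n                               # best value if u is excluded
            for u in reversed(order):                    # process leaves first
                for c in children[u]:
                    incl[u] += excl[c]
                    excl[u] += max(incl[c], excl[c])
            return max(incl[root], excl[root])

        # Toy usage: a path 0-1-2-3-4; expected answer 18 (take nodes 1 and 3).
        print(mwis_tree(5, [(0, 1), (1, 2), (2, 3), (3, 4)], weight=[3, 10, 2, 8, 4]))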

  18. Free Shape Context Descriptors Optimized with Genetic Algorithm for the Detection of Dead Tree Trunks in ALS Point Clouds

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-08-01

    In this paper, a new family of shape descriptors called Free Shape Contexts (FSC) is introduced to generalize the existing 3D Shape Contexts. The FSC introduces more degrees of freedom than its predecessor by allowing the level of complexity to vary between its parts. Also, each part of the FSC has an associated activity state which controls whether the part can contribute a feature value. We describe a method of evolving the FSC parameters for the purpose of creating highly discriminative features suitable for detecting specific objects in sparse point clouds. The evolutionary process is built on a genetic algorithm (GA) which optimizes the parameters with respect to cross-validated overall classification accuracy. The GA manipulates both the structure of the FSC and the activity flags, allowing it to perform an implicit feature selection alongside the structure optimization by turning off segments which do not augment the discriminative capabilities. We apply the proposed descriptor to the problem of detecting single standing dead tree trunks from ALS point clouds. The experiment, carried out on a set of 285 objects, reveals that an FSC optimized through a GA with manually tuned recombination parameters is able to attain a classification accuracy of 84.2%, yielding an increase of 4.2 pp compared to features derived from eigenvalues of the 3D covariance matrix. Also, we address the issue of automatically tuning the GA recombination metaparameters. For this purpose, a fuzzy logic controller (FLC) which dynamically adjusts the magnitude of the recombination effects is co-evolved with the FSC parameters in a two-tier evolution scheme. We find that it is possible to obtain an FLC which retains the classification accuracy of the manually tuned variant, thereby limiting the need for guessing the appropriate meta-parameter values.

  19. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the time elapsed by both implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other used for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We have found that a GPU intended for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes against the HPC GPU, in particular when single precision is used. PMID:25345784
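
    A CPU-only toy version of the grid-based idea is sketched below: a scalar field (a sum of two Gaussians standing in for an electron density) is sampled on a grid, and cells are flagged where the numerical gradient magnitude is small and locally minimal. The field, thresholds and neighbourhood test are assumptions for illustration; this is not the GPU code of the record.

        # Sketch: grid-based search for cells near critical points of a scalar field.
        # NumPy/CPU illustration only.
        import numpy as np

        ax = np.linspace(-3.0, 3.0, 121)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        rho = (np.exp(-((X - 1.0) ** 2 + Y**2 + Z**2)) +
               np.exp(-((X + 1.0) ** 2 + Y**2 + Z**2)))

        h = ax[1] - ax[0]
        gx, gy, gz = np.gradient(rho, h)
        gmag = np.sqrt(gx**2 + gy**2 + gz**2)

        # Candidate cells: gradient magnitude below a (grid-dependent) tolerance,
        # density above a floor, and not larger than any of the 6 neighbours.
        grad_tol, rho_floor = 2e-2, 0.05
        interior = gmag[1:-1, 1:-1, 1:-1]
        rho_in = rho[1:-1, 1:-1, 1:-1]
        neighbours = np.stack([
            gmag[:-2, 1:-1, 1:-1], gmag[2:, 1:-1, 1:-1],
            gmag[1:-1, :-2, 1:-1], gmag[1:-1, 2:, 1:-1],
            gmag[1:-1, 1:-1, :-2], gmag[1:-1, 1:-1, 2:],
        ])
        mask = (interior < grad_tol) & (rho_in > rho_floor) & np.all(interior <= neighbours, axis=0)
        idx = np.argwhere(mask) + 1
        print("candidate critical points (x, y, z):")
        print(np.column_stack([ax[idx[:, 0]], ax[idx[:, 1]], ax[idx[:, 2]]]))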

  20. Bridging Proper Orthogonal Decomposition methods and augmented Newton-Krylov algorithms: an adaptive model order reduction for highly nonlinear mechanical problems

    PubMed Central

    Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.

    2013-01-01

    This article describes a bridge between POD-based model order reduction techniques and the classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevancy of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688

  1. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P.; Tits, Andre

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  2. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed in two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain a robust rigid transformation and label configuration. The labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are then used to resolve the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieves an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods. PMID:26881433
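
    For reference, a generic rigid ICP iteration (nearest-neighbour matching plus an SVD/Kabsch transform update), in the spirit of the rigid initialization step, can be sketched as follows; this is not the conditional, auto-labeling variant proposed in the paper, and the synthetic test data are illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(P, Q):
          """Least-squares R, t with R @ P_i + t ~ Q_i (Kabsch/SVD solution)."""
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
          R = Vt.T @ D @ U.T
          return R, cQ - R @ cP

      def icp(source, target, iters=30, tol=1e-6):
          """Align `source` (N x 3) to `target` (M x 3); returns (R, t, aligned source)."""
          tree = cKDTree(target)
          src = source.copy()
          prev_err = np.inf
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              dists, idx = tree.query(src)              # matching: nearest neighbours
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t                       # apply incremental transform
              R_total, t_total = R @ R_total, R @ t_total + t
              err = dists.mean()
              if abs(prev_err - err) < tol:             # stop when error stagnates
                  break
              prev_err = err
          return R_total, t_total, src

      # Tiny usage example with a known rotation/translation (synthetic data).
      rng = np.random.default_rng(0)
      target = rng.normal(size=(500, 3))
      angle = 0.2
      Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                     [np.sin(angle),  np.cos(angle), 0],
                     [0, 0, 1]])
      source = (target - np.array([0.1, -0.2, 0.05])) @ Rz   # perturbed copy of the target
      R, t, aligned = icp(source, target)
      print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())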

  3. Fixed-point single-precision estimation. [Kalman filtering for NASA Standard Spacecraft Computer orbit determination algorithm

    NASA Technical Reports Server (NTRS)

    Thompson, E. H.; Farrell, J. L.

    1976-01-01

    Monte Carlo simulation of autonomous orbit determination has validated the use of an 18-bit NASA Standard Spacecraft Computer (NSSC) for the extended Kalman filter. Dimensionally consistent scales are chosen for all variables in the algorithm, such that nearly all of the onboard computation can be performed in single precision without matrix square root formulations. Allowable simplifications in algorithm implementation and practical means of ensuring convergence are verified for accuracies of a few km provided by star/vertical observations.
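
    A toy illustration of the scaled fixed-point arithmetic idea follows (the 18-bit word length aside, none of the actual filter scales are reproduced): each quantity is stored as an integer together with an assumed number of fractional bits, and a multiply combines the integers and rescales the double-length product. The variables, scales and values are purely illustrative.

      # Toy scaled fixed-point arithmetic with an 18-bit signed word (illustrative).
      WORD_BITS = 18
      MAX_INT = 2 ** (WORD_BITS - 1) - 1

      def to_fixed(value, frac_bits):
          """Quantise a real value to an integer with 2**-frac_bits resolution."""
          q = int(round(value * (1 << frac_bits)))
          if abs(q) > MAX_INT:
              raise OverflowError("choose a smaller scale for this variable")
          return q

      def fixed_mul(a, a_frac, b, b_frac, out_frac):
          """Multiply two scaled integers and rescale the product to out_frac bits."""
          prod = a * b                                   # double-length intermediate
          return prod >> (a_frac + b_frac - out_frac)    # keep out_frac fractional bits

      # Example: a position-like value (6 fractional bits) times a gain (12 bits).
      pos = to_fixed(123.456, 6)
      gain = to_fixed(0.0375, 12)
      result = fixed_mul(pos, 6, gain, 12, 6)
      print("approx product:", result / 2**6, "exact:", 123.456 * 0.0375)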

  4. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small-scale surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this regard, the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes the distances between corresponding points of two datasets. Even though the tool is widely used, it is often applied as a black box within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. For this purpose, an agriculturally utilized field was surveyed twice, each time simultaneously with terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that merging them would provide a more consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that final success of registration and therefore
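
    To make those sub-steps concrete, the sketch below shows how a single set of already-matched point pairs can be trimmed (rejection by a distance threshold) and weighted before the transform estimate, which is where many ICP variants differ; the rejection rule, the Cauchy-type weights and the synthetic example are illustrative assumptions, not the variants evaluated in this study.

      import numpy as np

      def weighted_rigid_transform(P, Q, w):
          """Weighted Kabsch: R, t minimising sum_i w_i ||R p_i + t - q_i||^2."""
          w = w / w.sum()
          cP, cQ = w @ P, w @ Q                          # weighted centroids
          H = (P - cP).T @ ((Q - cQ) * w[:, None])
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, cQ - R @ cP

      def robust_icp_step(src, dst, dists, trim=2.5):
          """One ICP update from matched pairs (src[i] <-> dst[i]).

          - rejection: drop pairs whose distance exceeds `trim` times the median
            (a common outlier rule; the factor is an illustrative choice);
          - weighting: down-weight the remaining pairs by a Cauchy-type function
            of their residual before the weighted transform estimate.
          """
          keep = dists < trim * np.median(dists)
          P, Q, d = src[keep], dst[keep], dists[keep]
          w = 1.0 / (1.0 + (d / (np.median(d) + 1e-12)) ** 2)
          return weighted_rigid_transform(P, Q, w)

      # Example with synthetic matched pairs, one of which is a gross outlier.
      rng = np.random.default_rng(3)
      src = rng.normal(size=(100, 3))
      dst = src + np.array([0.05, 0.0, -0.02])
      dst[0] += 5.0                                      # simulated bad correspondence
      R, t = robust_icp_step(src, dst, np.linalg.norm(dst - src, axis=1))
      print("estimated translation:", np.round(t, 3))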

  5. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION

    EPA Science Inventory

    The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...

  6. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because the environment stiffness affects the dynamics of the robot end-effector in contact with the environment, an adaptive generalized predictive control (GPC) algorithm based on quantitative feedback theory (QFT) is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to handle the uncertainty of the system. An adaptive GPC algorithm is used to design the external loop controller to improve the performance and the robustness of the system. The two closed loops used in the design approach realize the desired performance and improve the robustness. The simulation results show that the proposed control algorithm for the robot end-effector contact force system is effective. PMID:24973336
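
    A minimal unconstrained predictive-control step in the spirit of GPC is sketched below (the QFT inner loop and the adaptive/recursive identification are omitted): the output is predicted over a horizon from a step-response model, and the control increment minimizing a quadratic tracking-plus-effort cost is applied in receding-horizon fashion. The first-order "contact force" plant, horizons and weights are illustrative assumptions.

      import numpy as np

      # Illustrative stable first-order plant, discretised:  y[k+1] = a*y[k] + b*u[k]
      a_p, b_p = 0.9, 0.1
      N_p, N_u, lam = 20, 5, 0.1        # prediction horizon, control horizon, effort weight

      # Step-response coefficients g_i of the plant (response to a unit step in u).
      g = np.array([b_p * (1 - a_p**i) / (1 - a_p) for i in range(1, N_p + 1)])

      # Dynamic matrix G: predicted output change = G @ (future control increments).
      G = np.zeros((N_p, N_u))
      for i in range(N_p):
          for j in range(min(i + 1, N_u)):
              G[i, j] = g[i - j]

      def predictive_increment(free_response, setpoint):
          """Unconstrained GPC/DMC law: du = (G^T G + lam I)^-1 G^T (r - f)."""
          e = setpoint - free_response
          du = np.linalg.solve(G.T @ G + lam * np.eye(N_u), G.T @ e)
          return du[0]                   # receding horizon: apply only the first move

      # Closed-loop simulation toward a force setpoint of 1.0 (illustrative).
      y, u, r = 0.0, 0.0, 1.0
      for k in range(40):
          # Free response: predicted outputs if u were held constant from now on.
          f = np.array([a_p**i * y + b_p * u * sum(a_p**m for m in range(i))
                        for i in range(1, N_p + 1)])
          u += predictive_increment(f, r * np.ones(N_p))
          y = a_p * y + b_p * u
      print("force after 40 steps:", round(y, 4))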

  7. Study of the decomposition of SF6 under dc negative polarity corona discharges (point-to-plane geometry): Influence of the metal constituting the plane electrode

    NASA Astrophysics Data System (ADS)

    Casanovas, A. M.; Casanovas, J.; Lagarde, F.; Belarbi, A.

    1992-10-01

    SF6 samples (PSF6=100 or 200 kPa) were submitted to point-to-plane dc negative polarity corona discharges in the presence of water [concentration=2000 ppmv (parts per million by volume)] or without the addition of water. The stable gaseous byproducts formed (SO2F2, SOF2, and S2F10) were assayed by gas-phase chromatography. The variation of their yields with the charge transported (up to 10 C) was studied for two metals (aluminum and stainless steel) constituting the plane electrode, at various values of the SF6 pressure, the water content, the gap spacing (2.5 and 8 mm), and the discharge current [12≤Ī (μA)≤25]. The results indicate an important effect of the metal constituting the plane electrode and of the moisture conditions, particularly on the production of SOF2 and S2F10.

  8. Convergence Analysis of a Domain Decomposition Paradigm

    SciTech Connect

    Bank, R E; Vassilevski, P S

    2006-06-12

    We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.
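
    For readers unfamiliar with domain decomposition solvers, a classical two-subdomain overlapping (alternating) Schwarz iteration for a 1D Poisson problem is sketched below; this is a generic textbook scheme, not the Bank-Holst adaptive meshing algorithm analyzed in the paper, and the grid, overlap and iteration count are illustrative.

      import numpy as np

      # 1D Poisson problem -u'' = f on (0, 1), u(0) = u(1) = 0, on a uniform grid.
      n = 101
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.ones(n)
      u = np.zeros(n)
      exact = 0.5 * x * (1.0 - x)          # analytic solution for f = 1

      def solve_local(f_loc, left, right):
          """Direct solve of -u'' = f on a subinterval with Dirichlet values left/right."""
          m = len(f_loc)
          A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / h**2
          rhs = f_loc.copy()
          rhs[0] += left / h**2
          rhs[-1] += right / h**2
          return np.linalg.solve(A, rhs)

      # Two overlapping subdomains (interior unknowns, with a generous overlap).
      dom1 = slice(1, 65)
      dom2 = slice(40, n - 1)

      for it in range(20):
          # Multiplicative (alternating) Schwarz: each local solve uses the most
          # recent values of u at its artificial boundary.
          u[dom1] = solve_local(f[dom1], u[0], u[65])
          u[dom2] = solve_local(f[dom2], u[39], u[n - 1])

      print("max error vs analytic solution:", np.abs(u - exact).max())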

  9. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

    A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently follows first-order kinetics. Recommended values of the Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases, and for decomposition of RDX in solution in TNT, are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
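
    The first-order Arrhenius description amounts to k(T) = A exp(-Ea/(R T)) with unreacted fraction exp(-k t); the short sketch below evaluates these formulas with placeholder values of A and Ea that are NOT the recommended values from the review, only an illustration of how strongly temperature extrapolation changes the rate constant.

      import numpy as np

      R = 8.314  # gas constant, J/(mol*K)

      def arrhenius_k(A, Ea, T):
          """First-order rate constant k(T) = A * exp(-Ea / (R*T)); Ea in J/mol, T in K."""
          return A * np.exp(-Ea / (R * T))

      def fraction_remaining(A, Ea, T, t):
          """For first-order decomposition, the unreacted fraction is exp(-k(T) * t)."""
          return np.exp(-arrhenius_k(A, Ea, T) * t)

      # Placeholder Arrhenius parameters (NOT the recommended values from the review):
      A_hyp, Ea_hyp = 1.0e16, 2.0e5   # 1/s and J/mol, purely illustrative
      for T in (500.0, 550.0, 600.0):
          print(f"T = {T:.0f} K: k = {arrhenius_k(A_hyp, Ea_hyp, T):.3e} 1/s, "
                f"remaining after 1 s = {fraction_remaining(A_hyp, Ea_hyp, T, 1.0):.3e}")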

  10. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity" (C-C) algorithmic-equivalence reduction-theorem reducibility is identified with renormalization-(semi)group phase-transition critical-phenomena statistical-physics universality-class fixed points, and is exploited via a Gauss modular/clock-arithmetic congruences = signal × noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS ("SON of TRIZ") category-semantics (C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)] and Sipser [Introduction to the Theory of Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization problems optimally (OOPO). In contrast to iso-"noise" power-spectrum stochastic resonance, where only the amplitude/magnitude varies quantitatively, this "NIT-picking" varies the "noise" power spectrum qualitatively, via variation of the critical exponents. Early computer-science algorithmic C-C models, such as the Turing machine and finite-state models/automata, are characterized as once-workable tools that now act only as limiting crutches impeding new insights.