Science.gov

Sample records for algorithm called fast

  1. Fixed Point Implementations of Fast Kalman Algorithms.

    DTIC Science & Technology

    1983-11-01

    ... fixed point multiply ... a zero-mean, variance-N random vector s(t) ... A filter is said to be l2-scaled if ... In this paper we study scaling rules and round-off error ... realized in a fast form that uses the so-called fast Kalman gain algorithm. The algorithm for the gain is fixed point. Scaling rules and expressions for ... (The remainder of the scanned abstract is too garbled to recover.)

  2. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade study for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.

  3. TADtool: visual parameter identification for TAD-calling algorithms.

    PubMed

    Kruse, Kai; Hug, Clemens B; Hernández-Rodríguez, Benjamín; Vaquerizas, Juan M

    2016-10-15

    Eukaryotic genomes are hierarchically organized into topologically associating domains (TADs). The computational identification of these domains and their associated properties critically depends on the choice of suitable parameters of TAD-calling algorithms. To reduce the element of trial-and-error in parameter selection, we have developed TADtool: an interactive plot to find robust TAD-calling parameters with immediate visual feedback. TADtool allows the direct export of TADs called with a chosen set of parameters for two of the most common TAD calling algorithms: directionality and insulation index. It can be used as an intuitive, standalone application or as a Python package for maximum flexibility.

  4. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang-type algorithm, is given for deriving two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure and block normalization, their performance is improved and their common points are identified. These common points are used to propose an algorithm that retains the advantages of the two initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in the residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.

  5. Are Wait-Free Algorithms Fast

    DTIC Science & Technology

    1991-03-01

    MIT/LCS/TM-442, "Are Wait-Free Algorithms Fast?" by Hagit Attiya, Nancy Lynch, and Nir Shavit. MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, Massachusetts 02139, March 1991. (Only the scanned cover and report-documentation pages are present in this record; the abstract itself is not recoverable.)

  6. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor as matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp tensor and tucker tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
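
    The matricization operation described here is easy to state concretely. Below is a minimal NumPy sketch of mode-n unfolding and folding, not the MATLAB classes themselves; the function names `unfold` and `fold` are illustrative.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: arrange the mode-`mode` fibers as matrix columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

T = np.arange(24).reshape(2, 3, 4)
M = unfold(T, 1)                          # 3 x 8 matrix
assert np.array_equal(fold(M, 1, T.shape), T)   # round-trips exactly
```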

  7. Fast deterministic algorithm for EEE components classification

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L. A.; Antamoshkin, A. N.; Masich, I. S.

    2015-10-01

    The authors consider the problem of automatic classification of electronic, electrical and electromechanical (EEE) components based on results of test control. Electronic components of the same type used in a high-quality unit must be produced as a single production batch from a single batch of raw materials. Data from the test control are used for splitting a shipped lot of components into several classes representing the production batches. Methods such as k-means++ clustering or evolutionary algorithms combine local search and random search heuristics. The proposed fast algorithm returns a unique result for each data set, and the result is comparatively precise. If the data processing is performed by the customer of the EEE components, this feature of the algorithm allows easy checking of the results by a producer or supplier.

  8. Fast box-counting algorithm on GPU.

    PubMed

    Jiménez, J; Ruiz de Miras, J

    2012-12-01

    The box-counting algorithm is one of the most widely used methods for calculating the fractal dimension (FD). The FD has many image analysis applications in the biomedical field, where it has been used extensively to characterize a wide range of medical signals. However, computing the FD for large images, especially in 3D, is a time-consuming process. In this paper we present a fast parallel version of the box-counting algorithm, which has been coded in CUDA for execution on the Graphics Processing Unit (GPU). The optimized GPU implementation achieved an average speedup of 28 times (28×) compared to a mono-threaded CPU implementation, and an average speedup of 7 times (7×) compared to a multi-threaded CPU implementation. The performance of our improved box-counting algorithm has been tested on 3D models with different complexity, features and sizes. The validity and accuracy of the algorithm have been confirmed using models with well-known FD values. As a case study, a 3D FD analysis of several brain tissues has been performed using our GPU box-counting algorithm.
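
    For reference, the serial computation that the paper parallelizes can be sketched in a few lines. This is a minimal 2D NumPy version (the paper targets 3D in CUDA) with an illustrative choice of box sizes; the FD estimate is the negated slope of log N(s) against log s.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary 2D array by box counting."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())   # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((256, 256)) < 0.5))   # dense noise -> ~2.0
```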

  9. FOGSAA: Fast Optimal Global Sequence Alignment Algorithm

    NASA Astrophysics Data System (ADS)

    Chakraborty, Angana; Bandyopadhyay, Sanghamitra

    2013-04-01

    In this article we propose a Fast Optimal Global Sequence Alignment Algorithm, FOGSAA, which aligns a pair of nucleotide/protein sequences faster than any optimal global alignment method including the widely used Needleman-Wunsch (NW) algorithm. FOGSAA is applicable for all types of sequences, with any scoring scheme, and with or without affine gap penalty. Compared to NW, FOGSAA achieves a time gain of (70-90)% for highly similar nucleotide sequences (> 80% similarity), and (54-70)% for sequences having (30-80)% similarity. For other sequences, it terminates with an approximate score. For protein sequences, the average time gain is between (25-40)%. Compared to three heuristic global alignment methods, the quality of alignment is improved by about 23%-53%. FOGSAA is, in general, suitable for aligning any two sequences defined over a finite alphabet set, where the quality of the global alignment is of supreme importance.
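
    FOGSAA itself is a branch-and-bound search over the alignment tree, and its pseudocode is not reproduced here. For context, the Needleman-Wunsch baseline it is benchmarked against is a short dynamic program; the scoring scheme below (match +1, mismatch -1, gap -2) is illustrative.

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score by dynamic programming, O(len(a)*len(b))."""
    prev = [j * gap for j in range(len(b) + 1)]   # first DP row: all-gap prefixes
    for i, ca in enumerate(a, 1):
        curr = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```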

  10. Fast computation algorithms for speckle pattern simulation

    SciTech Connect

    Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru

    2013-11-13

    We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than by direct computation and we have circumvented the restrictions regarding the relative sizes of the input and output domains, met on commonly used procedures. Moreover, the input and output planes can be tilted each to other and the output domain can be off-axis shifted.

  11. Sequential algorithm for fast clique percolation.

    PubMed

    Kumpula, Jussi M; Kivelä, Mikko; Kaski, Kimmo; Saramäki, Jari

    2008-08-01

    In complex network research clique percolation, introduced by Palla, Derényi, and Vicsek [Nature (London) 435, 814 (2005)], is a deterministic community detection method which allows for overlapping communities and is purely based on local topological properties of a network. Here we present a sequential clique percolation algorithm (SCP) to do fast community detection in weighted and unweighted networks, for cliques of a chosen size. This method is based on sequentially inserting the constituent links to the network and simultaneously keeping track of the emerging community structure. Unlike existing algorithms, the SCP method allows for detecting k-clique communities at multiple weight thresholds in a single run, and can simultaneously produce a dendrogram representation of hierarchical community structure. In sparse weighted networks, the SCP algorithm can also be used for implementing the weighted clique percolation method recently introduced by Farkas [New J. Phys. 9, 180 (2007)]. The computational time of the SCP algorithm scales linearly with the number of k-cliques in the network. As an example, the method is applied to a product association network, revealing its nested community structure.
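
    The classical (non-sequential) clique percolation that SCP accelerates is available in NetworkX, which makes the notion of a k-clique community easy to try out. This sketch uses the library's k_clique_communities, not the SCP implementation.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Two triangles sharing an edge percolate into one 3-clique community;
# an isolated triangle forms its own community.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (1, 3), (2, 3),
              (10, 11), (11, 12), (10, 12)])
for community in k_clique_communities(G, 3):
    print(sorted(community))   # -> [0, 1, 2, 3] and [10, 11, 12]
```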

  12. A fast DFT algorithm using complex integer transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd's algorithm for computing the discrete Fourier transform is extended considerably for certain large transform lengths. This is accomplished by performing the cyclic convolution, required by Winograd's method, by a fast transform over certain complex integer fields. This algorithm requires fewer multiplications than either the standard fast Fourier transform or Winograd's more conventional algorithms.

  13. Fast linear algorithms for machine learning

    NASA Astrophysics Data System (ADS)

    Lu, Yichao

    Nowadays linear methods like Regression, Principal Component Analysis and Canonical Correlation Analysis are well understood and widely used by the machine learning community for predictive modeling and feature generation. Generally speaking, all these methods aim at capturing interesting subspaces in the original high-dimensional feature space. Due to their simple linear structure, these methods all have a closed-form solution, which makes computation and theoretical analysis very easy for small datasets. However, in modern machine learning problems it's very common for a dataset to have millions or billions of features and samples. In these cases, pursuing the closed-form solution can be extremely slow since it requires multiplying two huge matrices and computing the inverse, inverse square root, QR decomposition or Singular Value Decomposition (SVD) of huge matrices. In this thesis, we consider three fast algorithms for approximately computing Regression and Canonical Correlation Analysis on huge datasets.

  14. Fast algorithm for computing complex number-theoretic transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
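
    A minimal sketch of the underlying idea: a number-theoretic transform over GF(q) with q = 8191 = 2^13 - 1 a Mersenne prime, computing an exact cyclic convolution via the convolution theorem. The paper's high-radix algorithm over the extension field is not reproduced, and the brute-force root search below is for illustration only.

```python
Q = 8191  # 2**13 - 1, a Mersenne prime

def find_root(n):
    """Find an element of multiplicative order n in GF(Q); n must divide Q-1."""
    assert (Q - 1) % n == 0
    for g in range(2, Q):
        w = pow(g, (Q - 1) // n, Q)
        if all(pow(w, k, Q) != 1 for k in range(1, n)):
            return w
    raise ValueError("no root found")

def ntt(x, w):
    """Naive O(n^2) number-theoretic transform with root w."""
    n = len(x)
    return [sum(x[j] * pow(w, j * k, Q) for j in range(n)) % Q for k in range(n)]

def circular_convolution(a, b):
    """Exact cyclic convolution via the convolution theorem over GF(Q)."""
    n = len(a)
    w = find_root(n)
    A, B = ntt(a, w), ntt(b, w)
    C = [ai * bi % Q for ai, bi in zip(A, B)]
    w_inv, n_inv = pow(w, -1, Q), pow(n, -1, Q)
    return [c * n_inv % Q for c in ntt(C, w_inv)]

print(circular_convolution([1, 2, 3, 4, 5, 6], [1, 0, 0, 1, 0, 0]))
# -> [5, 7, 9, 5, 7, 9], with no floating-point round-off at all
```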

  15. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray value and variance of the image's pixels. Levels of the image above the threshold are converted into intensity values between 0 and 1, and other values are converted into intensity value zero. The proposed enhanced Fast Scanning algorithm is realized on images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than standard Fast Scanning.

  16. Geometric Transforms for Fast Geometric Algorithms.

    DTIC Science & Technology

    1979-12-01

    ... The approximation algorithm extends the ideas of the first by defining a transform based on a "pie-slice" diagram and use of the floor function. ... The second ε-approximate algorithm reduces the time to O(N + 1/ε) by using a transform based on a "pie-slice" diagram. ... Bentley, Weide, and Yao [18] have used a simple "pie-slice" diagram for their Voronoi diagram algorithm, and Weide [09] has used the floor ...

  17. Cumulative Reconstructor: fast wavefront reconstruction algorithm for Extremely Large Telescopes.

    PubMed

    Rosensteiner, Matthias

    2011-10-01

    The Cumulative Reconstructor (CuRe) is a new direct reconstructor for an optical wavefront from Shack-Hartmann wavefront sensor measurements. In this paper, the algorithm is adapted to realistic telescope geometries and the transition from modified Hudgin to Fried geometry is discussed. After a discussion of the noise propagation, we analyze the complexity of the algorithm. Our numerical tests confirm that the algorithm is very fast and accurate and can therefore be used for adaptive optics systems of Extremely Large Telescopes.

  18. A fast algorithm for image defogging

    NASA Astrophysics Data System (ADS)

    Wang, Xingyu; Guo, Shuai; Wang, Hui; Su, Haibing

    2016-09-01

    To address the low visibility and contrast of foggy images, we propose a single-image defogging algorithm. Firstly, the foggy image is converted from the RGB color space to HSI and divided into a number of blocks. Secondly, the maximum point of the S component of each block is selected and corrected; keeping the H component constant and adjusting the I component, the fog component can then be estimated through bilinear interpolation. Importantly, the algorithm treats the sky region separately. Finally, the fog component is subtracted from the RGB values of all pixels in the blocks and the brightness is adjusted, yielding the defogged image. Compared with other algorithms, its efficiency is greatly improved and image clarity is enhanced. At the same time, the scene is not limited and the scope of application is wide.

  19. FastDIRC: a fast Monte Carlo and reconstruction algorithm for DIRC detectors

    NASA Astrophysics Data System (ADS)

    Hardin, J.; Williams, M.

    2016-10-01

    FastDIRC is a novel fast Monte Carlo and reconstruction algorithm for DIRC detectors. A DIRC employs rectangular fused-silica bars both as Cherenkov radiators and as light guides. Cherenkov-photon imaging and time-of-propagation information are utilized by a DIRC to identify charged particles. GEANT4-based DIRC Monte Carlo simulations are extremely CPU intensive. The FastDIRC algorithm permits fully simulating a DIRC detector more than 10 000 times faster than using GEANT4. This facilitates designing a DIRC-reconstruction algorithm that improves the Cherenkov-angle resolution of a DIRC detector by ≈ 30% compared to existing algorithms. FastDIRC also greatly reduces the time required to study competing DIRC-detector designs.

  20. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M firms (M not ≫ 1) and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.
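
    The Toeplitz structure mentioned here is the generic source of the speedup: a Toeplitz matrix-vector product can be computed in O(n log n) via circulant embedding and the FFT. A minimal SciPy demonstration of that building block (not the paper's discretization of Fortet's equation):

```python
import numpy as np
from scipy.linalg import matmul_toeplitz, toeplitz

rng = np.random.default_rng(1)
n = 1000
c, r = rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]                       # first column and first row must agree at (0, 0)
x = rng.standard_normal(n)

# O(n log n) product via circulant embedding and the FFT ...
y_fast = matmul_toeplitz((c, r), x)
# ... matches the dense O(n^2) product.
assert np.allclose(y_fast, toeplitz(c, r) @ x)
```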

  1. Computer program for fast Karhunen Loeve transform algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A. K.

    1976-01-01

    The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal to noise ratio. The results obtained show a superior performance of the fast KL transform coding algorithm on the data set used with respect to the above stated performance criteria. A summary of the results is given in Chapter I and details of comparisons and discussion of conclusions are given in Chapter IV.

  2. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  3. Fast algorithm for relaxation processes in big-data systems

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Lee, D.-S.; Kahng, B.

    2014-10-01

    Relaxation processes driven by a Laplacian matrix can be found in many real-world big-data systems, for example, in search engines on the World Wide Web and the dynamic load-balancing protocols in mesh networks. To numerically implement such processes, a fast-running algorithm for the calculation of the pseudoinverse of the Laplacian matrix is essential. Here we propose an algorithm which computes quickly and efficiently the pseudoinverse of Markov chain generator matrices satisfying the detailed-balance condition, a general class of matrices including the Laplacian. The algorithm utilizes the renormalization of the Gaussian integral. In addition to its applicability to a wide range of problems, the algorithm outperforms other algorithms in its ability to compute within a manageable computing time arbitrary elements of the pseudoinverse of a matrix of size millions by millions. Therefore our algorithm can be used very widely in analyzing the relaxation processes occurring on large-scale networked systems.
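
    As a point of reference, the quantity the paper computes fast, the pseudoinverse of a graph Laplacian, can be obtained densely for small systems; the proposed renormalization scheme is what replaces this O(N^3) baseline at scale. A minimal sketch:

```python
import numpy as np
import networkx as nx

# Dense baseline: the pseudoinverse the paper computes fast for huge matrices.
G = nx.path_graph(5)
L = nx.laplacian_matrix(G).toarray().astype(float)
L_pinv = np.linalg.pinv(L)

# Sanity checks: L L+ L = L, and the all-ones vector stays in the null space,
# so each row of L+ sums to zero for a connected graph.
assert np.allclose(L @ L_pinv @ L, L)
assert np.allclose(L_pinv.sum(axis=1), 0.0)
```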

  4. A fast directional algorithm for high-frequency electromagnetic scattering

    SciTech Connect

    Tsuji, Paul; Ying, Lexing

    2011-06-20

    This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.

  5. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension, and then matches and identifies through one-dimensional correlation. Because the projections are normalized, correct matching is still performed when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
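
    A minimal sketch of the idea, with illustrative function names: collapse the image and template to 1D column-sum projections, then locate the template by normalized 1D correlation, which also gives the brightness invariance described above.

```python
import numpy as np

def horizontal_projection(img):
    """Collapse a 2D grayscale image to its column sums (a 1D signature)."""
    return img.sum(axis=0).astype(float)

def match_1d(signal, template):
    """Best offset of `template` inside `signal` by normalized correlation."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_score = 0, -np.inf
    for k in range(len(signal) - len(t) + 1):
        win = signal[k:k + len(t)]
        score = np.dot((win - win.mean()) / (win.std() + 1e-12), t)
        if score > best_score:
            best, best_score = k, score
    return best

rng = np.random.default_rng(2)
scene = rng.random((64, 256))
patch = scene[:, 100:140]
print(match_1d(horizontal_projection(scene), horizontal_projection(patch)))  # -> 100
```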

  6. MATLAB tensor classes for fast algorithm prototyping : source code.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  7. The New Algorithm for Fast Probabilistic Hypocenter Locations

    NASA Astrophysics Data System (ADS)

    Dębski, Wojciech; Klejment, Piotr

    2016-12-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. It is well recognised that there is no single universal location algorithm which performs equally well in all situations. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms. In this paper we propose a new location algorithm which exploits the reciprocity and time-inverse invariance properties of the wave equation. Building on these symmetries and using a modern finite-difference-type eikonal solver, we have developed a new, very fast algorithm performing full probabilistic (Bayesian) source location. We illustrate the efficiency of the algorithm by performing an advanced error analysis for 1647 seismic events from the Rudna copper mine operating in southwestern Poland.

  8. Fast-convergence superpixel algorithm via an approximate optimization

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2016-09-01

    We propose an optimization scheme that achieves fast yet accurate computation of superpixels from an image. Our optimization is designed to improve the efficiency and robustness for the minimization of a composite energy functional in the expectation-minimization (EM) framework where we restrict the update of an estimate to avoid redundant computations. We consider a superpixel energy formulation that consists of L2-norm for the spatial regularity and L1-norm for the data fidelity in the demonstration of the robustness of the proposed algorithm. The quantitative and qualitative evaluations indicate that our superpixel algorithm outperforms SLIC and SEEDS algorithms. It is also demonstrated that our algorithm guarantees the convergence with less computational cost by up to 89% on average compared to the SLIC algorithm while preserving the accuracy. Our optimization scheme can be easily extended to other applications in which the alternating minimization is applicable in the EM framework.

  9. Feature Selection for Natural Language Call Routing Based on Self-Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Koromyslova, A.; Semenkina, M.; Sergienko, R.

    2017-02-01

    The text classification problem for natural language call routing was considered in the paper. Seven different term weighting methods were applied. As a dimensionality reduction method, feature selection based on a self-adaptive genetic algorithm (GA) is considered. k-NN, linear SVM and ANN were used as classification algorithms. The tasks of the research are the following: to study text classification for natural language call routing with different term weighting methods and classification algorithms, and to investigate the feature selection method based on the self-adaptive GA. The numerical results showed that the most effective term weighting is TRR. The most effective classification algorithm is ANN. Feature selection with the self-adaptive GA provides improved classification effectiveness and significant dimensionality reduction with all term weighting methods and with all classification algorithms.

  10. A Matrix Computation View of the FastMap and RobustMap Dimension Reduction Algorithms

    SciTech Connect

    Ostrouchov, George

    2009-01-01

    Given a set of pairwise object distances and a dimension k, the FastMap and RobustMap algorithms compute a set of k-dimensional coordinates for the objects. These metric space embedding methods implicitly assume a higher-dimensional coordinate representation and are a sequence of translations and orthogonal projections based on a sequence of object pair selections (called pivot pairs). We develop a matrix computation viewpoint of these algorithms that operates on the coordinate representation explicitly using Householder reflections. The resulting Coordinate Mapping Algorithm (CMA) is a fast approximate alternative to truncated principal component analysis (PCA), and it brings the FastMap and RobustMap algorithms into the mainstream of numerical computation where standard BLAS building blocks are used. Motivated by the geometric nature of the embedding methods, we further show that truncated PCA can be computed with CMA by specific pivot pair selections. Describing FastMap, RobustMap, and PCA as CMA computations with different pivot pair choices unifies the methods along a pivot pair selection spectrum. We also sketch connections to the semi-discrete decomposition and the QLP decomposition.

  11. Parallel Detection Algorithm for Fast Frequency Hopping OFDM

    NASA Astrophysics Data System (ADS)

    Kun, Xu; Xiao-xin, Yi

    2011-05-01

    Fast frequency hopping OFDM (FFH-OFDM) exploits frequency diversity in one OFDM symbol to enhance conventional OFDM performance without using channel coding. Zero-forcing (ZF) and minimum mean square error (MMSE) equalization were first used to detect the FFH-OFDM signal, with relatively poor bit error rate (BER) performance compared to the QR-based detection algorithm. This paper proposes a parallel detection algorithm (PDA) to further improve the BER performance using parallel interference cancellation (PIC) based on the MMSE criterion. The proposed PDA not only improves the BER performance in the high signal-to-noise ratio (SNR) regime but also has lower decoding delay than the QR-based detection algorithm while maintaining comparable computational complexity. Simulation results indicate that at BER = 10^-3 the PDA achieves a 5 dB SNR gain over the QR-based detection algorithm, and more as the SNR increases.

  12. Fast algorithm for transient current through open quantum systems

    NASA Astrophysics Data System (ADS)

    Cheung, King Tai; Fu, Bin; Yu, Zhizhou; Wang, Jian

    2017-03-01

    Transient current calculation is essential for studying the response time and capturing the peak transient current to prevent meltdown of nanochips in nanoelectronics. Its calculation is known to be extremely time consuming, with the best scaling TN^3, where N is the dimension of the device and T is the number of time steps. The dynamical response of the system is usually probed by sending a steplike pulse and monitoring its transient behavior. Here, we provide a fast algorithm to study the transient behavior due to the steplike pulse. This algorithm consists of two parts: algorithm I reduces the computational complexity to T_0 N^3 for large systems, and algorithm II employs the fast multipole technique to achieve the same T_0 N^3 scaling. The fast algorithm allows us to tackle many large-scale transient problems, including magnetic tunneling junctions and ferroelectric tunneling junctions.

  13. A fast hidden line algorithm for plotting finite element models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.

    1982-01-01

    Effective plotting of finite element models requires the use of fast hidden line plot techniques that provide interactive response. A high speed hidden line technique was developed to facilitate the plotting of NASTRAN finite element models. Based on testing using 14 different models, the new hidden line algorithm (JONES-D) appears to be very fast: its speed equals that for normal (all lines visible) plotting and when compared to other existing methods it appears to be substantially faster. It also appears to be very reliable: no plot errors were observed using the new method to plot NASTRAN models. The new algorithm was made part of the NPLOT NASTRAN plot package and was used by structural analysts for normal production tasks.

  14. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  15. SMG: Fast scalable greedy algorithm for influence maximization in social networks

    NASA Astrophysics Data System (ADS)

    Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham

    2015-02-01

    Influence maximization is the problem of finding the k most influential nodes in a social network. Much work has been done in two different categories: greedy approaches and heuristic approaches. The greedy approaches achieve better influence spread but lower scalability on large networks. The heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open and active issue. In this work we present a fast greedy algorithm called State Machine Greedy (SMG) that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversed nodes in the estimate-propagation procedure, and (2) Monte Carlo graph construction in the simulation of diffusion. The results show that our method gives a large speed improvement over the existing greedy approaches.
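
    For context, the baseline greedy approach that SMG speeds up pairs hill-climbing seed selection with Monte Carlo simulation of the independent cascade model. A minimal sketch with illustrative parameters (propagation probability 0.1, 200 runs per estimate); the SMG bookkeeping itself is not reproduced.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte Carlo run of the independent cascade model; returns spread size."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, ()):
            if nbr not in active and random.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def greedy_influence(graph, k, runs=200):
    """Pick k seeds, each maximizing the marginal Monte Carlo spread estimate."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: sum(simulate_ic(graph, seeds + [n])
                                     for _ in range(runs)))
        seeds.append(best)
    return seeds

graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: [], 5: [0]}
random.seed(0)
print(greedy_influence(graph, 2))
```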

  16. Visual gaze behavior of near-expert and expert fast pitch softball umpires calling a pitch.

    PubMed

    Millslagle, Duane G; Smith, Melissa S; Hines, Bridget B

    2013-05-01

    The purpose of this study was to examine the difference in visual gaze behavior between near-expert (NE) and expert (E) umpires in a simulated pitch-hit situation in fast pitch softball. An Applied Science Laboratories mobile eye tracker was worn by 4 NE and 4 E fast pitch umpires and recorded their visual gaze behavior while following pitches (internal view). A digital camera located behind the pitcher recorded the external view of the pitcher, hitter, catcher, and umpire actions for each pitch. The internal and external video clips of 10 representative pitches--5 balls and 5 strikes--were synchronized and displayed in a split screen and were then coded for statistical analyses using Quiet Eye Solutions software. Analysis of variance and multivariate analysis of variance statistical analyses of the umpires' gaze behavior (onset, duration, offset, and frequency of fixation/pursuit tracking, saccades, and blinks) were conducted between and within the stages (pitcher's preparation, delivery and release, ball in flight, and umpire call) by umpire skill level. Significant differences (p < 0.05) observed for combined gaze behavior frequency, type of gaze by phase, quiet eye duration and onset, and ball duration tracking indicated that E umpires' visual control was more stable and economical than NE umpires'. Quiet eye results indicated that E umpires had an earlier onset (mean = 50.0 ± 13.9% vs. 56.0 ± 9.5%) and longer duration (mean = 15.1 ± 11.3% vs. 9.3 ± 6.5%) of fixation on the pitcher's release area than NE umpires. These findings suggest that the gaze behavior of expert fast pitch umpires was more economical, fixated earlier and for a longer period on the area where the ball would be released, and tracked the ball earlier and for a longer period of time.

  17. Fast algorithm for calculating chemical kinetics in turbulent reacting flow

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.; Pratt, D. T.

    1986-01-01

    This paper addresses the need for a fast batch chemistry solver to perform the kinetics part of a split operator formulation of turbulent reacting flows, with special attention focused on the solution of the ordinary differential equations governing a homogeneous gas-phase chemical reaction. For this purpose, a two-part predictor-corrector algorithm which incorporates an exponentially fitted trapezoidal method was developed. The algorithm performs filtering of ill-posed initial conditions, automatic step-size selection, and automatic selection of Jacobi-Newton or Newton-Raphson iteration for convergence to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm, termed CREK1D (combustion reaction kinetics, one-dimensional), compared favorably with the code LSODE when tested on two representative problems drawn from combustion kinetics, and is faster than LSODE.

  18. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum and complexity. This map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.
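
    The 2D-SIMM map itself is defined in the paper and is not reproduced here; the sketch below instead uses the generic logistic map as a stand-in key stream to illustrate the confusion step of a chaos-based cipher: pixels are permuted by the ranking of a chaotic sequence, and the permutation inverts exactly on decryption.

```python
import numpy as np

def logistic_sequence(n, x0=0.3141, r=3.99):
    """Generic chaotic (logistic map) key stream; a stand-in for 2D-SIMM here."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def permute_image(img, key=0.3141):
    """Confusion step: reorder pixels by the ranking of a chaotic sequence."""
    flat = img.ravel()
    order = np.argsort(logistic_sequence(flat.size, x0=key))
    return flat[order].reshape(img.shape), order

def unpermute_image(scrambled, order):
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[order] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
scrambled, order = permute_image(img)
assert np.array_equal(unpermute_image(scrambled, order), img)
```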

  19. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  20. A fast algorithm for sparse matrix computations related to inversion

    SciTech Connect

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices (up to twelve-fold in simulation). The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors ...

  1. A fast marching algorithm for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Treister, Eran; Haber, Eldad

    2016-11-01

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice is to apply FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently and demonstrate the achieved accuracy for computing the travel time. We also demonstrate the recovery of 2D and 3D heterogeneous media by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss-Newton.
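
    A quick way to experiment with the unfactored baseline is scikit-fmm, which implements plain Fast Marching for a point source; the source singularity this exhibits is exactly what the paper's factored scheme addresses. The grid, speeds, and spacing below are illustrative.

```python
import numpy as np
import skfmm

# Point source at the grid center; travel time T solves |grad T| = 1/speed.
phi = np.ones((101, 101))
phi[50, 50] = -1.0                      # the zero contour marks the source
speed = np.ones((101, 101))
speed[:, 60:] = 0.5                     # a slower half-space on the right

T = skfmm.travel_time(phi, speed, dx=0.01)
print(T[50, 0], T[50, 100])             # later arrival through the slow region
```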

  2. A non-parametric peak calling algorithm for DamID-Seq.

    PubMed

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX)--an important transcription factor in sex determination--we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checking and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) read resampling; 2) read scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from the IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.

  3. A Comparison of Base-calling Algorithms for Illumina Sequencing Technology.

    PubMed

    Cacho, Ashley; Smirnova, Ekaterina; Huzurbazar, Snehalata; Cui, Xinping

    2016-09-01

    Recent advances in next-generation sequencing technology have yielded increasing cost-effectiveness and higher throughput produced per run, in turn, greatly influencing the analysis of DNA sequences. Among the various sequencing technologies, Illumina is by far the most widely used platform. However, the Illumina sequencing platform suffers from several imperfections that can be attributed to the chemical processes inherent to the sequencing-by-synthesis technology. With the enormous amounts of reads produced, statistical methodologies and computationally efficient algorithms are required to improve the accuracy and speed of base-calling. Over the past few years, several papers have proposed methods to model the various imperfections, giving rise to accurate and/or efficient base-calling algorithms. In this article, we provide a comprehensive comparison of the performance of recently developed base-callers and we present a general statistical model that unifies a large majority of these base-callers.

  4. A fast algorithm for reordering sparse matrices for parallel factorization

    SciTech Connect

    Lewis, J.G.; Peyton, B.W.; Pothen, A.

    1989-01-01

    Jess and Kees introduced a method for ordering a sparse symmetric matrix A for efficient parallel factorization. The parallel ordering is computed in two steps. First, the matrix A is ordered by some fill-reducing ordering. Second, a parallel ordering of A is computed from the filled graph that results from factoring A using the initial fill-reducing ordering. Among all orderings whose fill lies in the filled graph, this parallel ordering achieves the minimum number of parallel steps in the factorization of A. Jess and Kees did not specify the implementation details of an algorithm for either step of this scheme. Liu and Mirzaian (1987) designed an algorithm implementing the second step, but it has time and space requirements higher than the cost of computing common fill-reducing orderings. We present here a new fast algorithm that implements the parallel ordering step by exploiting the clique tree representation of a chordal graph. We succeed in reducing the cost of the parallel ordering step well below that of the fill-reducing step. Our algorithm has time and space complexity linear in the number of compressed subscripts of L, i.e., the sum of the sizes of the maximal cliques of the filled graph. Empirically we demonstrate running times nearly identical to Liu's heuristic Composite Rotations algorithm that approximates the minimum number of parallel steps. 21 refs., 3 figs., 4 tabs.

  5. A fast contour descriptor algorithm for supernova imageclassification

    SciTech Connect

    Aragon, Cecilia R.; Aragon, David Bradburn

    2006-07-16

    We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F_1 and F_-1) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
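
    A minimal sketch of low-order Fourier descriptors for a closed contour stored as complex samples: |F_1| and |F_-1| give an eccentricity-type feature, and the residual spectral variance an irregularity-type feature, following the description above. The exact normalizations used by the pipeline are not given here, so the ones below are illustrative.

```python
import numpy as np

def contour_features(points):
    """Eccentricity and irregularity features from low-order Fourier descriptors.

    `points` is an (n, 2) array of contour samples, ordered around the shape.
    """
    z = points[:, 0] + 1j * points[:, 1]
    F = np.fft.fft(z) / len(z)
    f1, f_1 = abs(F[1]), abs(F[-1])        # lowest-order descriptors
    eccentricity = f_1 / (f1 + 1e-12)      # ~0 for a circle traversed once
    total = np.sum(abs(F[1:]) ** 2)        # contour variance, ignoring centroid
    irregularity = 1.0 - (f1 ** 2 + f_1 ** 2) / (total + 1e-12)
    return eccentricity, irregularity

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(contour_features(circle))            # both features near zero
```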

  6. Fast weighted K-view-voting algorithm for image texture classification

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Lan, Yihua; Wang, Qian; Jin, Renchao; Song, Enmin; Hung, Chih-Cheng

    2012-02-01

    We propose an innovative and efficient approach to improve the K-view-template (K-view-T) and K-view-datagram (K-view-D) algorithms for image texture classification. The proposed approach, called the weighted K-view-voting algorithm (K-view-V), uses a novel voting method for texture classification and an accelerating method based on the efficient summed square image (SSI) scheme as well as the fast Fourier transform (FFT) to enable overall faster processing. Decision making, which assigns a pixel to a texture class, occurs by using our weighted voting method among the "promising" members in the neighborhood of a classified pixel. In other words, this neighborhood consists of all the views, and each view has a classified pixel in its territory. Experimental results on benchmark images, which are randomly taken from the Brodatz Gallery and natural and medical images, show that this new classification algorithm gives higher classification accuracy than existing K-view algorithms. In particular, it improves the accurate classification of pixels near the texture boundary. In addition, the proposed acceleration method improves the processing speed of K-view-V as it requires much less computation time than other K-view algorithms. Compared with the results of earlier developed K-view algorithms and the gray level co-occurrence matrix (GLCM), the proposed algorithm is more robust, faster, and more accurate.

  7. Multifrequency and multidirection optimizations of antenna arrays using heuristic algorithms and the multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Önol, Can; Alkış, Sena; Gökçe, Özer; Ergül, Özgür

    2016-07-01

    We consider fast and efficient optimizations of arrays involving three-dimensional antennas with arbitrary shapes and geometries. Heuristic algorithms, particularly genetic algorithms, are used for optimizations, while the required solutions are carried out accurately and efficiently via the multilevel fast multipole algorithm (MLFMA). The superposition principle is employed to reduce the number of MLFMA solutions to the number of array elements per frequency. The developed mechanism is used to optimize arrays for multifrequency and/or multidirection operations, i.e., to find the most suitable set of antenna excitations for desired radiation characteristics simultaneously at different frequencies and/or directions. The capabilities of the optimization environment are demonstrated on arrays of bowtie and Vivaldi antennas.

  8. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As one of many data mining techniques, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large data with high dimension has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a smaller number of regions, called "grids", and calculates the LOF value of each grid. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant computation time reduction with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, the grid-LOF can be considered an acceptable approximation for the original LOF. Moreover, it can also be effectively used for real-time outlier detection.
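
    The exact LOF computation that the grid method approximates is available in scikit-learn, which makes a convenient reference point when evaluating the grid-LOF trade-off errors. A minimal sketch:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)),      # dense cluster
               [[8.0, 8.0]]])                   # one obvious outlier

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                     # -1 marks outliers
print(labels[-1], lof.negative_outlier_factor_[-1])
```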

  9. Separated Representations and Fast Algorithms for Materials Science

    DTIC Science & Technology

    2007-10-29

    Only a fragment of the abstract survives in this scanned record; the rest is the standard report-documentation form (grant W911NF-06-1-0254): "... separable functions, the so-called separated representations. Our approach is different from the Fast ..."

  10. A Fast Algorithm for Exonic Regions Prediction in DNA Sequences

    PubMed Central

    Saberkari, Hamidreza; Shamsi, Mousa; Heravi, Hamed; Sedaaghi, Mohammad Hossein

    2013-01-01

    The main purpose of this paper is to introduce a fast method for gene prediction in DNA sequences based on the period-3 property of exons. First, the symbolic DNA sequence is converted to a digital signal using the electron ion interaction potential (EIIP) method. Then, to reduce the effect of background noise in the period-3 spectrum, the discrete wavelet transform is applied to the input digital signal at three levels. Finally, the Goertzel algorithm is used to extract the period-3 components in the filtered DNA sequence. The proposed algorithm decreases the computational complexity and hence increases the speed of the process. Exact detection of small exons in DNA sequences is another advantage of the algorithm. The proposed algorithm's ability in exon prediction was compared with several existing methods at the nucleotide level using: (i) specificity-sensitivity values; (ii) receiver operating characteristic (ROC) curves; and (iii) the area under the ROC curve. Simulation results confirmed that the proposed method can be used as a promising tool for exon prediction in DNA sequences. PMID:24672762
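
    A minimal sketch of the final step: the Goertzel recurrence evaluated at normalized frequency 1/3 to measure period-3 power in an EIIP-mapped sequence. The EIIP constants below are the commonly quoted values; the wavelet pre-filtering stage is omitted.

```python
import math

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def goertzel_power(x, freq):
    """Squared magnitude of the DFT of `x` at normalized frequency `freq`."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq)
    s1 = s2 = 0.0
    for sample in x:                    # second-order Goertzel recurrence
        s1, s2 = sample + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def period3_power(dna):
    signal = [EIIP[b] for b in dna]
    mean = sum(signal) / len(signal)    # remove DC before measuring period-3
    return goertzel_power([v - mean for v in signal], 1.0 / 3.0)

print(period3_power("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"))
```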

  11. A fast algorithm for finding point sources in the Fermi data stream: FermiFAST

    NASA Astrophysics Data System (ADS)

    Asvathaman, Asha; Omand, Conor; Barton, Alistair; Heyl, Jeremy S.

    2017-04-01

    We present a new and efficient algorithm for finding point sources in the photon event data stream from the Fermi Gamma-Ray Space Telescope, FermiFAST. The key advantage of FermiFAST is that it constructs a catalogue of potential sources very fast by arranging the photon data in a hierarchical data structure. Using this structure, FermiFAST rapidly finds the photons that could have originated from a potential gamma-ray source. It calculates a likelihood ratio for the contribution of the potential source using the angular distribution of the photons within the region of interest. It can find within a few minutes the most significant half of the Fermi Third Point Source catalogue (3FGL) with nearly 80 per cent purity from the 4 yr of data used to construct the catalogue. If a higher purity sample is desirable, one can achieve a sample that includes the most significant third of the Fermi 3FGL with only 5 per cent of the sources unassociated with Fermi sources. Outside the Galactic plane, all but eight of the 580 FermiFAST detections are associated with 3FGL sources. And of these eight, six yield significant detections of greater than 5σ when a further binned likelihood analysis is performed. This software allows for rapid exploration of the Fermi data, simulation of the source detection to calculate the selection function of various sources and the errors in the obtained parameters of the sources detected.

  12. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two other segmentation methods, adaptive two-mean and adaptive two-mean clustering, were implemented, and their results were compared with those of the disk traverse-subtraction filter algorithm using five indices: misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU times of the three segmentation methods were compared. The results showed that our method performed better than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.

  13. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key points of this calibration are that the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the rotational velocity input. A mathematical error model of the novel calibration is established, and the Powell algorithm is applied to resolve the error parameters by judging the convergence of the nonlinear equations. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests shows that the proposed method achieves comparable performance while saving considerable calibration time. PMID:25177801
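
    The accelerometer part of such a calibration can be sketched with SciPy's derivative-free Powell minimizer. The six-parameter scale-and-bias model and the cost function below are illustrative assumptions, not the paper's exact error model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    G = 9.80665  # gravity magnitude (m/s^2)

    def calib_cost(params, raw):
        """Sum of squared deviations of corrected norms from gravity."""
        scale, bias = params[:3], params[3:]
        corrected = (raw - bias) * scale
        return np.sum((np.linalg.norm(corrected, axis=1) - G) ** 2)

    def calibrate_accelerometer(raw_static_samples):
        """raw_static_samples: (N, 3) readings in several static orientations."""
        x0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # scale=1, bias=0
        res = minimize(calib_cost, x0, args=(raw_static_samples,),
                       method='Powell')
        return res.x[:3], res.x[3:]  # estimated scale factors and biases
    ```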

  14. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
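
    The ranking step is essentially a counting sort over cell indices. A serial sketch of the idea follows (the paper's version is data-parallel on the Connection Machine; this NumPy version only illustrates the rank computation):

    ```python
    import numpy as np

    def rank_particles_by_cell(cell_of_particle, n_cells):
        """Return, for each particle, its rank in a cell-contiguous ordering."""
        counts = np.bincount(cell_of_particle, minlength=n_cells)
        start = np.concatenate(([0], np.cumsum(counts)[:-1]))  # first slot per cell
        rank = np.empty_like(cell_of_particle)
        next_slot = start.copy()
        for i, c in enumerate(cell_of_particle):
            rank[i] = next_slot[c]
            next_slot[c] += 1
        return rank

    # Particles reordered by `rank` are grouped by cell, so all particles in a
    # given cell occupy a contiguous block and can be accessed immediately.
    cells = np.array([2, 0, 1, 2, 0])
    r = rank_particles_by_cell(cells, 3)   # -> [3, 0, 2, 4, 1]
    ```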

  15. Fast Dating Using Least-Squares Criteria and Algorithms.

    PubMed

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
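
    The least-squares principle behind such dating can be illustrated with root-to-tip regression, one of the standard methods the authors compare against: regressing root-to-tip distance on sampling date gives the substitution rate as the slope and the root date as the x-intercept. A minimal sketch with made-up numbers (not the authors' constrained, tree-recursive algorithm):

    ```python
    import numpy as np

    def root_to_tip_rate(tip_dates, root_to_tip_dist):
        """Least-squares fit  dist ~ rate * (date - t_root).

        Returns the substitution rate and the implied date of the root
        (the x-intercept of the regression line)."""
        rate, intercept = np.polyfit(tip_dates, root_to_tip_dist, 1)
        t_root = -intercept / rate
        return rate, t_root

    # illustrative serially sampled tips (dates and substitutions/site)
    dates = np.array([2000.1, 2003.5, 2007.2, 2011.8])
    dists = np.array([0.011, 0.018, 0.025, 0.034])
    rate, t_root = root_to_tip_rate(dates, dists)
    ```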

  16. Distress Calls of a Fast-Flying Bat (Molossus molossus) Provoke Inspection Flights but Not Cooperative Mobbing

    PubMed Central

    Carter, Gerald; Schoeppler, Diana; Manthey, Marie; Knörnschild, Mirjam; Denzinger, Annette

    2015-01-01

    Many birds and mammals produce distress calls when captured. Bats often approach speakers playing conspecific distress calls, which has led to the hypothesis that bat distress calls promote cooperative mobbing. An alternative explanation is that approaching bats are selfishly assessing predation risk. Previous playback studies on bat distress calls involved species with highly maneuverable flight, capable of making close passes and tight circles around speakers, which can look like mobbing. We broadcast distress calls recorded from the velvety free-tailed bat, Molossus molossus, a fast-flying aerial-hawker with relatively poor maneuverability. Based on their flight behavior, we predicted that, in response to distress call playbacks, M. molossus would make individual passing inspection flights but would not approach in groups or approach within a meter of the distress call source. By recording responses via ultrasonic recording and infrared video, we found that M. molossus, and to a lesser extent Saccopteryx bilineata, made more flight passes during distress call playbacks compared to noise. However, only the more maneuverable S. bilineata made close approaches to the speaker, and we found no evidence of mobbing in groups. Instead, our findings are consistent with the hypothesis that single bats approached distress calls simply to investigate the situation. These results suggest that approaches by bats to distress calls should not suffice as clear evidence for mobbing. PMID:26353118

  18. Fast imaging system and algorithm for monitoring microlymphatics

    NASA Astrophysics Data System (ADS)

    Akl, T.; Rahbar, E.; Zawieja, D.; Gashev, A.; Moore, J.; Coté, G.

    2010-02-01

    The lymphatic system is not well understood, and tools to quantify aspects of its behavior are needed. A technique that can monitor lymph velocity, and hence flow, the main determinant of transport, in near real time can be extremely valuable. We recently built a new system that measures lymph velocity, vessel diameter, and contractions using optical microscopy digital imaging with a high-speed camera (500 fps) and a complex processing algorithm. The processing time for a typical data period was reduced to less than 3 minutes, compared with our previous system in which readings were available 30 minutes after the vessels were imaged. The processing was based on a correlation algorithm in the frequency domain, which, along with new triggering methods, reduced the processing and acquisition time significantly. In addition, the use of a new data filtering technique allowed us to acquire results from recordings that were irresolvable by the previous algorithm due to their high noise level. The algorithm was tested by measuring velocities and diameter changes in rat mesenteric microlymphatics. We recorded velocities of 0.25 mm/s on average in vessels of diameter ranging from 54 µm to 140 µm with phasic contraction strengths of about 6 to 40%. In the future, this system will be used to monitor acute effects that are too fast for previous systems and will also increase the statistical power when dealing with chronic changes. Furthermore, we plan on expanding its functionality to measure the propagation of the contractile activity.
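
    The frequency-domain correlation at the heart of such a velocity measurement can be sketched as follows; the one-dimensional intensity profiles and the sign convention are illustrative assumptions, not details of the authors' system:

    ```python
    import numpy as np

    def shift_between_frames(line_a, line_b):
        """Estimate the integer displacement of line_b relative to line_a by
        cross-correlation evaluated in the frequency domain: one FFT pair
        instead of an O(N^2) direct correlation."""
        a = line_a - line_a.mean()
        b = line_b - line_b.mean()
        xcorr = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
        lag = int(np.argmax(xcorr))
        n = len(a)
        return lag if lag <= n // 2 else lag - n   # map to signed circular lag

    # velocity = shift [pixels] * pixel_size / frame_interval (e.g. 1/500 s)
    ```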

  19. A fast heuristic algorithm for a probe mapping problem.

    PubMed

    Mumey, B

    1997-01-01

    A new heuristic algorithm is presented for mapping probes to locations along the genome, given noisy pairwise distance data as input. The model considered is quite general: The input consists of a collection of probe pairs and a confidence interval for the genomic distance separating each pair. Because the distance intervals are only known with some confidence level, some may be erroneous and must be removed in order to find a consistent map. A novel randomized technique for detecting and removing bad distance intervals is described. The technique could be useful in other contexts where partially erroneous data is inconsistent with the remaining data. These algorithms were motivated by the goal of making probe maps with inter-probe distance confidence intervals estimated from fluorescence in-situ hybridization (FISH) experiments. Experimentation was done on synthetic data sets (with and without errors) and FISH data from a region of human chromosome 4. Problems with up to 100 probes could be solved in several minutes on a fast workstation. In addition to FISH mapping, we describe some other possible applications that fall within the problem model. These include: mapping a backbone structure in folded DNA, finding consensus maps between independent maps covering the same genomic region, and ordering clones in a clone library.

  20. Fast pulse detection algorithms for digitized waveforms from scintillators

    NASA Astrophysics Data System (ADS)

    Krasilnikov, V.; Marocco, D.; Esposito, B.; Riva, M.; Kaschuck, Yu.

    2011-03-01

    Advanced C++ programming methods as well as fast Pulse Detection Algorithms (PDA) have been implemented in order to increase the computing speed of LabVIEW™ data processing software developed for a Digital Pulse Shape Discrimination (DPSD) system for liquid scintillators. The newly implemented PDAs are described and compared: the most efficient method has been implemented in the data processing software, which has also been ported to C++. The computing speeds of the new and old versions of the PDAs are compared.
    Program summary
    Program title: DPDS - Digital Pulse Detection Software
    Catalogue identifier: AEHQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 454 070
    No. of bytes in distributed program, including test data, etc.: 20 987 104
    Distribution format: tar.gz
    Programming language: C++ (Borland Visual C++)
    Computer: IBM PC
    Operating system: MS Windows 2000 and later
    RAM: <50 Mbytes, highly dependent on settings
    Classification: 4.12
    External routines: Only standard Borland Visual C++ libraries
    Nature of problem: The very slow pulse detection algorithm used as standard in LabVIEW prevents processing of the acquired data during the pause between plasma discharges in modern tokamaks.
    Solution method: Simple yet precise pulse detection algorithms were implemented, and the whole data processing software was translated from LabVIEW into C++. This sped up processing by up to a factor of 30.
    Restrictions: The Windows system decimal separator must be ".", not ",".
    Additional comments: Processing a 300 MB data file should not take longer than 10 minutes.
    Running time: From 1 minute to 1 hour.

  1. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor

    PubMed Central

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-01-01

    Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. Meanwhile, high-precision Generalized Iterative Closest Points (GICP) is utilized to register the point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy. PMID:26287198
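
    The ORB-plus-FLANN matching stage maps directly onto the OpenCV API. A minimal sketch, assuming hypothetical input frames and common (not the paper's) parameter values; algorithm=6 selects FLANN's LSH index, which is appropriate for ORB's binary descriptors:

    ```python
    import cv2

    img1 = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
    img2 = cv2.imread('frame2.png', cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # FLANN with an LSH index, suited to ORB's binary descriptors
    index_params = dict(algorithm=6,  # FLANN_INDEX_LSH
                        table_number=6, key_size=12, multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))

    # k-NN matching plus Lowe's ratio test to discard ambiguous matches
    matches = flann.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
    ```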

  3. Base-Calling Algorithm with Vocabulary (BCV) Method for Analyzing Population Sequencing Chromatograms

    PubMed Central

    Fantin, Yuri S.; Neverov, Alexey D.; Favorov, Alexander V.; Alvarez-Figueroa, Maria V.; Braslavskaya, Svetlana I.; Gordukova, Maria A.; Karandashova, Inga V.; Kuleshov, Konstantin V.; Myznikova, Anna I.; Polishchuk, Maya S.; Reshetov, Denis A.; Voiciehovskaya, Yana A.; Mironov, Andrei A.; Chulanov, Vladimir P.

    2013-01-01

    Sanger sequencing is a common method of reading DNA sequences. It is less expensive than high-throughput methods, and it is appropriate for numerous applications including molecular diagnostics. However, sequencing mixtures of similar DNA of pathogens with this method is challenging. This is important because most clinical samples contain such mixtures, rather than pure single strains. The traditional solution is to sequence selected clones of PCR products, a complicated, time-consuming, and expensive procedure. Here, we propose the base-calling with vocabulary (BCV) method that computationally deciphers Sanger chromatograms obtained from mixed DNA samples. The inputs to the BCV algorithm are a chromatogram and a dictionary of sequences that are similar to those we expect to obtain. We apply the base-calling function on a test dataset of chromatograms without ambiguous positions, as well as one with 3–14% sequence degeneracy. Furthermore, we use BCV to assemble a consensus sequence for an HIV genome fragment in a sample containing a mixture of viral DNA variants and to determine the positions of the indels. Finally, we detect drug-resistant Mycobacterium tuberculosis strains carrying frameshift mutations mixed with wild-type bacteria in the pncA gene, and roughly characterize bacterial communities in clinical samples by direct 16S rRNA sequencing. PMID:23382983

  4. A new fast algorithm for computing complex number-theoretic transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.

  5. A fast algorithm for estimating actions in triaxial potentials

    NASA Astrophysics Data System (ADS)

    Sanders, Jason L.; Binney, James

    2015-03-01

    We present an approach to approximating rapidly the actions in a general triaxial potential. The method is an extension of the axisymmetric approach presented by Binney, and operates by assuming that the true potential is locally sufficiently close to some Stäckel potential. The choice of Stäckel potential and associated ellipsoidal coordinates is tailored to each individual input phase-space point. We investigate the accuracy of the method when computing actions in a triaxial Navarro-Frenk-White potential. The speed of the algorithm comes at the expense of large errors in the actions, particularly for the box orbits. However, we show that the method can be used to recover the observables of triaxial systems from given distribution functions to sufficient accuracy for the Jeans equations to be satisfied. Consequently, such models could be used to build models of external galaxies as well as triaxial components of our own Galaxy. When more accurate actions are required, this procedure can be combined with torus mapping to produce a fast convergent scheme for action estimation.

  6. FastTagger: an efficient algorithm for genome-wide tag SNP selection using multi-marker linkage disequilibrium

    PubMed Central

    2010-01-01

    Background The human genome contains millions of common single nucleotide polymorphisms (SNPs), and these SNPs play an important role in understanding the association between genetic variations and human diseases. Many SNPs show correlated genotypes, or linkage disequilibrium (LD), so it is not necessary to genotype all SNPs for an association study. Many algorithms have been developed to find a small subset of SNPs called tag SNPs that are sufficient to infer all the other SNPs. Algorithms based on the r² LD statistic have gained popularity because r² is directly related to statistical power to detect disease associations. Most existing r²-based algorithms use pairwise LD. Recent studies show that multi-marker LD can help further reduce the number of tag SNPs. However, existing tag SNP selection algorithms based on multi-marker LD are both time-consuming and memory-consuming. They cannot work on chromosomes containing more than 100 k SNPs using length-3 tagging rules. Results We propose an efficient algorithm called FastTagger to calculate multi-marker tagging rules and select tag SNPs based on multi-marker LD. FastTagger uses several techniques to reduce running time and memory consumption. Our experiment results show that FastTagger is several times faster than existing multi-marker based tag SNP selection algorithms, and it consumes much less memory at the same time. As a result, FastTagger can work on chromosomes containing more than 100 k SNPs using length-3 tagging rules. FastTagger also produces smaller sets of tag SNPs than existing multi-marker based algorithms, and the reduction ratio ranges from 3% to 9% when length-3 tagging rules are used. The generated tagging rules can also be used for genotype imputation. We studied the prediction accuracy of individual rules, and the average accuracy is above 96% when r² ≥ 0.9. Conclusions Generating multi-marker tagging rules is a computation intensive task, and it is the bottleneck of existing multi-marker based tag
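
    The pairwise-r² variant of tag SNP selection (the baseline that multi-marker methods such as FastTagger improve on) can be sketched as a greedy cover. The O(n²) correlation matrix below is for illustration only; it is exactly the kind of cost that FastTagger's techniques are designed to avoid:

    ```python
    import numpy as np

    def greedy_tag_snps(genotypes, r2_min=0.8):
        """genotypes: (n_individuals, n_snps) minor-allele counts (0/1/2).
        Greedily pick tag SNPs until every SNP is tagged at r^2 >= r2_min."""
        r2 = np.corrcoef(genotypes.T) ** 2          # pairwise r^2 matrix
        n = genotypes.shape[1]
        untagged, tags = set(range(n)), []
        while untagged:
            # SNP that tags the most currently untagged SNPs (itself included)
            best = max(untagged,
                       key=lambda s: sum(r2[s, t] >= r2_min for t in untagged))
            tags.append(best)
            untagged -= {t for t in untagged if r2[best, t] >= r2_min}
        return tags
    ```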

  7. Fast single-pass alignment and variant calling using sequencing data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...

  8. A fast and memory-sparing probabilistic selection algorithm for the GPU

    SciTech Connect

    Monroe, Laura M; Wendelberger, Joanne; Michalak, Sarah

    2010-09-29

    A fast and memory-sparing probabilistic top-N selection algorithm is implemented on the GPU. This probabilistic algorithm gives a deterministic result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces both the memory requirements and the average time required for the algorithm. This algorithm is well-suited to more general parallel processors with multiple layers of memory hierarchy. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be especially useful for processors having a limited amount of fast memory available.
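
    A minimal sketch of the Las Vegas idea under simplifying assumptions (1-D NumPy data, serial rather than GPU): a random sample supplies a cutoff expected to leave slightly more than N survivors; only the survivors receive heavy processing, and a too-aggressive cutoff merely triggers a retry, never a wrong answer:

    ```python
    import numpy as np

    def probabilistic_top_n(data, n, sample_size=10_000, margin=2.0, seed=None):
        """Exact top-n selection; the expensive sort touches only a small
        candidate subset.  Las Vegas: randomness affects the running time,
        never the result, and the loop always terminates."""
        rng = np.random.default_rng(seed)
        frac = n / len(data)
        while True:
            sample = rng.choice(data, min(sample_size, len(data)), replace=False)
            q = 1.0 - min(1.0, margin * frac)
            cutoff = np.quantile(sample, q)
            candidates = data[data >= cutoff]
            if len(candidates) >= n:           # success: sort the small set only
                return np.sort(candidates)[-n:]
            if q == 0.0:                       # degenerate case: sort everything
                return np.sort(data)[-n:]
            margin *= 2.0                      # rare miss: loosen cutoff, retry
    ```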

  9. A fast look-up algorithm for detecting repetitive DNA sequences

    SciTech Connect

    Guan, X.; Uberbacher, E.C.

    1996-12-31

    We present a fast linear-time algorithm for recognizing tandem repeats. Our algorithm is a one-pass algorithm: no information about the periodicity of the tandem repeats is needed. The use of indices calculated from non-contiguous and overlapping k-tuples allows tandem repeats with insertions and deletions to be recognized.

  10. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.

  11. Nonuniform fast Fourier transform-based fast back-projection algorithm for stepped frequency continuous wave ground penetrating radar imaging

    NASA Astrophysics Data System (ADS)

    Qu, Lele; Yin, Yuqing

    2016-10-01

    Stepped frequency continuous wave ground penetrating radar (SFCW-GPR) systems are becoming increasingly popular in the GPR community due to their wider dynamic range and higher immunity to radio interference. The traditional back-projection (BP) algorithm is preferable for SFCW-GPR imaging in layered-medium scenarios for its convenience and robustness. However, existing BP imaging algorithms are usually very computationally intensive, which limits their practical application to SFCW-GPR imaging. To solve the above problem, a fast SFCW-GPR BP imaging algorithm based on the nonuniform fast Fourier transform (NUFFT) technique is proposed in this paper. By reformulating the traditional BP imaging algorithm into evaluations of the NUFFT, the computational efficiency of the NUFFT is exploited to reduce the computational complexity of the image reconstruction. Both simulation and experimental results have verified the effectiveness and the improved computational efficiency of the proposed imaging method.

  12. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  14. metilene: fast and sensitive calling of differentially methylated regions from bisulfite sequencing data

    PubMed Central

    Jühling, Frank; Kretzmer, Helene; Bernhart, Stephan H.; Otto, Christian; Stadler, Peter F.; Hoffmann, Steve

    2016-01-01

    The detection of differentially methylated regions (DMRs) is a necessary prerequisite for characterizing different epigenetic states. We present a novel program, metilene, to identify DMRs within whole-genome and targeted data with unrivaled specificity and sensitivity. A binary segmentation algorithm combined with a two-dimensional statistical test allows the detection of DMRs in large methylation experiments with multiple groups of samples in minutes rather than days using off-the-shelf hardware. metilene outperforms other state-of-the-art tools for low coverage data and can estimate missing data. Hence, metilene is a versatile tool to study the effect of epigenetic modifications in differentiation/development, tumorigenesis, and systems biology on a global, genome-wide level. Whether in the framework of international consortia with dozens of samples per group, or even without biological replicates, it produces highly significant and reliable results. PMID:26631489
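
    The binary-segmentation core can be sketched generically: recursively split a per-CpG methylation-difference signal where the contrast between segment means is largest. The quadratic split search and the fixed thresholds below are illustrative simplifications; metilene itself uses a far more efficient formulation combined with a two-dimensional statistical test:

    ```python
    import numpy as np

    def segment(diff, lo, hi, min_cpgs=10, out=None):
        """Recursive binary segmentation of a per-CpG methylation-difference
        signal: split where the contrast between segment means is largest."""
        if out is None:
            out = []
        if hi - lo < 2 * min_cpgs:
            out.append((lo, hi, diff[lo:hi].mean()))
            return out
        best, best_gap = None, 0.0
        for s in range(lo + min_cpgs, hi - min_cpgs):
            gap = abs(diff[lo:s].mean() - diff[s:hi].mean())
            if gap > best_gap:
                best, best_gap = s, gap
        if best is None or best_gap < 0.1:   # no worthwhile split: emit segment
            out.append((lo, hi, diff[lo:hi].mean()))
        else:
            segment(diff, lo, best, min_cpgs, out)
            segment(diff, best, hi, min_cpgs, out)
        return out

    # diff = mean(case methylation) - mean(control methylation) per CpG site;
    # candidate DMRs = segments whose |mean difference| exceeds a threshold.
    ```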

  15. Fast Fourier transform for Voigt profile: Comparison with some other algorithms

    NASA Astrophysics Data System (ADS)

    Abousahl, S.; Gourma, M.; Bickel, M.

    1997-02-01

    There are different algorithms describing the Voigt profile. This profile is encountered in many areas of physics; its measurement can be limited by the resolution of the instrumentation used and by other phenomena such as the interaction between the emitted waves and matter. In the nuclear measurement field, the codes used to characterise radionuclides rely on algorithms resolving the Voigt profile equation. The Fast Fourier Transform (FFT) algorithm allows the validation of some of these algorithms.
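
    Numerically, the Voigt profile is the convolution of a Gaussian with a Lorentzian, so it can be evaluated with an FFT-based convolution and cross-checked against a direct implementation. A minimal sketch using SciPy's voigt_profile as the reference; the grid width is an assumption chosen to limit the truncation error from the slowly decaying Lorentzian tails:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.special import voigt_profile

    sigma, gamma = 0.8, 0.5                 # Gaussian std dev, Lorentzian HWHM
    x = np.linspace(-40, 40, 8001)          # wide grid to limit truncation error
    dx = x[1] - x[0]

    gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    lorentz = gamma / (np.pi * (x**2 + gamma**2))

    # Voigt = Gaussian (*) Lorentzian, computed via FFT-based convolution
    voigt_fft = fftconvolve(gauss, lorentz, mode='same') * dx

    # compare against SciPy's direct evaluation (Faddeeva function based)
    err = np.max(np.abs(voigt_fft - voigt_profile(x, sigma, gamma)))
    ```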

  16. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  17. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: preprocessing and actual processing. The sample neighborhood database is created via a locality-sensitive hashing data structure in the preprocessing stage. The CT image noise is then removed by the non-local means algorithm based on the sample neighborhoods accessed rapidly through locality-sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time compared to NLM, while effectively preserving image edges and details.

  18. A fast algorithm for functional mapping of complex traits.

    PubMed Central

    Zhao, Wei; Wu, Rongling; Ma, Chang-Xing; Casella, George

    2004-01-01

    By integrating the underlying developmental mechanisms for the phenotypic formation of traits into a mapping framework, functional mapping has emerged as an important statistical approach for mapping complex traits. In this note, we explore the feasibility of using the simplex algorithm as an alternative to solve the mixture-based likelihood for functional mapping of complex traits. The results from the simplex algorithm are consistent with those from the traditional EM algorithm, but the simplex algorithm has considerably reduced computational times. Moreover, because of its nonderivative nature and easy implementation with current software, the simplex algorithm enjoys an advantage over the EM algorithm in the dynamic modeling and analysis of complex traits. PMID:15342547
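
    The simplex method referred to here is the derivative-free Nelder-Mead algorithm, available off the shelf in SciPy. A minimal sketch applying it to a two-component Gaussian-mixture negative log-likelihood, the kind of mixture criterion functional mapping optimizes (synthetic data; not the authors' trait model):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 200)])

    def neg_log_lik(theta):
        """Two-component Gaussian mixture: mixing weight, two means, common sd."""
        w, mu1, mu2, log_sd = theta
        w = 1 / (1 + np.exp(-w))             # keep the weight in (0, 1)
        sd = np.exp(log_sd)                  # keep the sd positive
        dens = w * norm.pdf(y, mu1, sd) + (1 - w) * norm.pdf(y, mu2, sd)
        return -np.sum(np.log(dens + 1e-300))

    res = minimize(neg_log_lik, x0=[0.0, -1.0, 4.0, 0.0], method='Nelder-Mead')
    ```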

  19. Fast multiresolution search algorithm for optimal retrieval in large multimedia databases

    NASA Astrophysics Data System (ADS)

    Song, Byung C.; Kim, Myung J.; Ra, Jong Beom

    1999-12-01

    Most content-based image retrieval systems require a distance computation for each candidate image in the database. As a brute-force approach, the exhaustive search can be employed for this computation. However, this exhaustive search is time-consuming and limits the usefulness of such systems. Thus, there is a growing demand for a fast algorithm which provides the same retrieval results as the exhaustive search. In this paper, we propose a fast search algorithm based on a multi-resolution data structure. The proposed algorithm computes a lower bound of the distance at each level and compares it with the latest minimum distance, starting from the low-resolution level. Once it is larger than the latest minimum distance, we can exclude the candidate without calculating the full-resolution distance. By doing this, we can dramatically reduce the total computational complexity. Notably, the proposed fast algorithm provides not only the same retrieval results as the exhaustive search, but also faster searching than existing fast algorithms. For additional performance improvement, we can easily combine the proposed algorithm with existing tree-based algorithms. The algorithm can also be used for fast matching of various features such as luminance histograms, edge images, and local binary partition textures.
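
    The pruning rule can be sketched for mean-pooled feature vectors: by the Cauchy-Schwarz inequality, the coarse-level squared distance scaled by the pooling factor never exceeds the full squared distance, so it is a valid lower bound and rejection is lossless. A minimal sketch (vector lengths assumed divisible by the pooling factor):

    ```python
    import numpy as np

    def sq_dist(a, b):
        return float(np.sum((a - b) ** 2))

    def coarsen(v, factor=4):
        """Mean-pool a 1-D feature vector over blocks of `factor`."""
        return v.reshape(-1, factor).mean(axis=1)

    def nearest_with_pruning(query, database, factor=4):
        """Exact nearest neighbour; most candidates are rejected cheaply at
        low resolution, since factor * coarse distance <= full distance."""
        q_coarse = coarsen(query, factor)
        best, best_d = None, np.inf
        for i, cand in enumerate(database):
            lower = factor * sq_dist(q_coarse, coarsen(cand, factor))
            if lower >= best_d:
                continue                  # safe rejection, no full distance
            d = sq_dist(query, cand)
            if d < best_d:
                best, best_d = i, d
        return best, best_d
    ```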

  20. Character-embedded watermarking algorithm using the fast Hadamard transform for satellite images

    NASA Astrophysics Data System (ADS)

    Ho, Anthony T. S.; Shen, Jun; Tan, Soon H.

    2003-01-01

    In this paper, a character-embedded watermarking algorithm is proposed for copyright protection of satellite images based on the Fast Hadamard Transform (FHT). By using a private-key watermarking scheme, the watermark can be retrieved without using the original image. To increase the invisibility of the watermark, a visual model based on original image characteristics, such as edges and textures, is incorporated to determine the watermarking strength factor. This factor determines the strength of the watermark bits embedded according to the region complexity of the image: detailed or coarse areas are assigned more strength, and smooth areas less. Error correction coding is also used to increase the reliability of the information bits. A post-processing technique based on log-polar mapping is incorporated to enhance the robustness against geometric distortion attacks. Experiments showed that the proposed watermarking scheme was able to survive more than 70% of attacks from a common benchmarking tool called Stirmark, and about 90% of Checkmark non-geometric attacks. These attacks were performed on a number of SPOT images of size 512×512×8 bit embedded with 32 characters. The proposed FHT algorithm also has the advantages of easy software and hardware implementation as well as speed, compared to other orthogonal transforms such as the cosine, Fourier, and wavelet transforms.

  1. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, obtained by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using the power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve our previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.

  2. Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications

    DTIC Science & Technology

    2006-03-01

    Thesis report by Ting Liu, Fast Nonparametric Machine Learning Algorithms for High-dimensional Massive Data and Applications, CMU-CS-06-124, School of Computer Science, Carnegie Mellon University, March 2006. Only bibliographic fragments of this record are recoverable from the report documentation page.

  3. A fast quantum algorithm for the affine Boolean function identification

    NASA Astrophysics Data System (ADS)

    Younes, Ahmed

    2015-02-01

    Bernstein-Vazirani algorithm (the one-query algorithm) can identify a completely specified linear Boolean function using a single query to the oracle with certainty. The first aim of the paper is to show that if the provided Boolean function is affine, then one more query to the oracle (the two-query algorithm) is required to identify the affinity of the function with certainty. The second aim of the paper is to show that if the provided Boolean function is incompletely defined, then the one-query and the two-query algorithms can be used as bounded-error quantum polynomial algorithms to identify certain classes of incompletely defined linear and affine Boolean functions respectively with probability of success at least 2/3.
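
    The one-query algorithm is easy to simulate classically with a statevector. In the sketch below (a simulation for intuition, not a quantum implementation), the single oracle query is the phase factor (-1)^f(x) applied to the uniform superposition; the final Hadamard layer, a fast Walsh-Hadamard transform, then concentrates all amplitude on the hidden string a, while the affine constant b only contributes a global sign and is recovered by one further evaluation, f(0):

    ```python
    import numpy as np

    def fwht(v):
        """In-place fast Walsh-Hadamard transform (unnormalized)."""
        h, n = 1, len(v)
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
            h *= 2
        return v

    n, a, b = 4, 0b1011, 1              # hidden affine f(x) = a.x XOR b

    def f(x):                           # the oracle
        return (bin(a & x).count('1') % 2) ^ b

    # one oracle query: the uniform superposition picks up phases (-1)^f(x)
    amps = np.array([(-1.0) ** f(x) for x in range(2 ** n)]) / np.sqrt(2 ** n)
    amps = fwht(amps) / np.sqrt(2 ** n)      # final Hadamard layer
    found_a = int(np.argmax(np.abs(amps)))   # all amplitude sits on index a
    found_b = f(0)                           # second query reveals the constant
    assert found_a == a and found_b == b
    ```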

  4. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, this algorithm employs motion vector (MV) to extract temporal saliency region through fast MV noise filtering and translational MV checking procedure. Secondly, spatial saliency region is detected based on optimal prediction mode distributions in I-frame and P-frame. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm can avoid a large amount of computation work in the visual perception characteristics analysis processing compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as a part of the video standard codec at medium-to-low bit-rates or combined with other algorithms in fast video coding. PMID:24489495

  5. A fast Fourier transform on multipoles (FFTM) algorithm for solving Helmholtz equation in acoustics analysis.

    PubMed

    Ong, Eng Teo; Lee, Heow Pueh; Lim, Kian Meng

    2004-09-01

    This article presents a fast algorithm for the efficient solution of the Helmholtz equation. The method is based on the translation theory of multipole expansions. Here, the speedup comes from the convolution nature of the translation operators, which can be evaluated rapidly using fast Fourier transform algorithms. Also, the computation of the translation operators is accelerated by using the recursive formulas developed recently by Gumerov and Duraiswami [SIAM J. Sci. Comput. 25, 1344-1381 (2003)]. It is demonstrated that the algorithm can produce good accuracy with a relatively low order of expansion. Efficiency analyses of the algorithm reveal that it has computational complexities of O(N^a), where a ranges from 1.05 to 1.24. However, this method requires substantially more memory to store the translation operators than the fast multipole method. Hence, despite its simplicity in implementation, this memory requirement may limit the application of the algorithm to very large-scale problems.

  6. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC, with a quad-tree-based coding unit (CU) structure ranging from size 64×64 down to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational complexity. Experimental results show that our proposed fast method reduces the encoding time of the current HM to about 57% with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  7. A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT

    NASA Astrophysics Data System (ADS)

    Kudo, Hiroyuki; Yamazaki, Fukashi; Nemoto, Takuya; Takaki, Keita

    2016-10-01

    This paper concerns iterative reconstruction for low-dose and few-view CT by minimizing a data-fidelity term regularized with the Total Variation (TV) penalty. We propose a very fast iterative algorithm to solve this problem. The algorithm derivation is outlined as follows. First, the original minimization problem is reformulated into a saddle-point (primal-dual) problem using Lagrangian duality, to which we apply first-order primal-dual iterative methods. Second, we precondition the iteration formula using the ramp filter of the Filtered Backprojection (FBP) reconstruction algorithm in such a way that the problem solution is not altered. The resulting algorithm resembles the structure of the so-called iterative FBP algorithm, and it converges to the exact minimizer of the cost function very fast.

  8. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.

  9. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
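
    In NumPy, the same row-and-column down-sampling is the familiar reshape-and-mean idiom, with no explicit loop over pixels. A minimal 2D sketch (integer factors assumed to divide the image dimensions):

    ```python
    import numpy as np

    def rebin2d(image, f):
        """Down-sample a 2-D array by an integer factor f in each dimension,
        averaging each f-by-f block, fully vectorized."""
        h, w = image.shape
        assert h % f == 0 and w % f == 0, "dimensions must be divisible by f"
        return image.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

    img = np.arange(16.0).reshape(4, 4)
    small = rebin2d(img, 2)   # [[ 2.5,  4.5], [10.5, 12.5]]
    ```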

  10. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
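
    The ordering rule itself is compact: a headwater segment has order 1, and an interior segment takes the maximum order of its tributaries, incremented when that maximum occurs at least twice. A minimal recursive sketch on a toy network (the published algorithm additionally handles braids and multiple outlets, which this sketch does not):

    ```python
    # upstream[s] lists the segments that flow directly into segment s
    upstream = {'outlet': ['a', 'b'], 'a': ['c', 'd'], 'b': [], 'c': [], 'd': []}

    def strahler(seg):
        """Order 1 for headwater segments; otherwise the maximum tributary
        order, plus one if that maximum is attained by two or more tributaries."""
        orders = [strahler(u) for u in upstream[seg]]
        if not orders:
            return 1
        top = max(orders)
        return top + 1 if orders.count(top) >= 2 else top

    assert strahler('outlet') == 2   # c, d -> a has order 2; b stays order 1
    ```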

  11. [Fast volume rendering of echocardiogram with shear-warp algorithm].

    PubMed

    Yang, Liu; Wang, Tianfu; Lin, Jiangli; Li, Deyu; Zheng, Yi; Zheng, Changqiong; Song, Haibo; Tang, Hong; Wang, Xiaoyi

    2004-04-01

    Shear-warp is an object-order volume rendering technique. It offers high speed and high image quality in comparison with conventional visualization techniques. The authors introduced the principle of this algorithm and applied it to the visualization of 3-D data obtained by interpolating rotary-scanning echocardiograms. The 3-D reconstruction of the echocardiogram was efficiently completed with high image quality. This algorithm has prospective applications in medical image visualization.

  12. Fast parallel algorithms for short-range molecular dynamics

    SciTech Connect

    Plimpton, S.

    1993-05-01

    Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a subset of atoms; the second assigns each a subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently -- those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 10,000,000 atoms on three parallel supercomputers, the nCUBE 2, Intel iPSC/860, and Intel Delta. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and the Intel Delta performs about 30 times faster than a single Y-MP processor and 12 times faster than a single C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

  13. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  14. Fast computation algorithm for the Rayleigh-Sommerfeld diffraction formula using a type of scaled convolution.

    PubMed

    Nascov, Victor; Logofătu, Petre Cătălin

    2009-08-01

    We describe a fast computational algorithm able to evaluate the Rayleigh-Sommerfeld diffraction formula, based on a special formulation of the convolution theorem and the fast Fourier transform. What is new in our approach compared to other algorithms is the use of a more general type of convolution with a scale parameter, which allows for independent sampling intervals in the input and output computation windows. Comparison between the calculations made using our algorithm and direct numeric integration show a very good agreement, while the computation speed is increased by orders of magnitude.

  15. Outline of a fast hardware implementation of Winograd's DFT algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1980-01-01

    The main characteristics of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which 5 consecutive data batches are being operated on simultaneously, each batch undergoing one of 5 processing phases.

  16. Fast parallel algorithms: from images to level sets and labels

    NASA Astrophysics Data System (ADS)

    Nguyen, H. T.; Jung, Ken K.; Raghavan, Raghu

    1990-07-01

Decomposition into level sets refers to assigning a code with respect to intensity or elevation, while labeling refers to assigning a code with respect to disconnected regions. We present a sequence of parallel algorithms for these two processes. The labeling process includes re-assigning labels into a natural sequence and comparing different labeling algorithms. We discuss the difference between edge-based and region-based labeling. The speed improvements in this labeling scheme come from the collective efficiency of different techniques. We have implemented these algorithms on an in-house built Geometric Single Instruction Multiple Data (GSIMD) parallel machine with global buses and a Multiple Instruction Multiple Data (MIMD) controller. This allows real-time image interpretation on live data at a rate much higher than video rate. Performance figures are presented.

  17. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose-volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image-guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: Adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by OARs.

  18. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is to exploit the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement in image quality over the original algorithm is to ignore contributions from dissimilar windows: even though their weights are very small at first sight, the estimated pixel value can be severely biased by the many small contributions. This bad influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
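
    The moment-based preclassification amounts to a cheap test executed before each weight computation. A simplified Python sketch follows; the thresholds t1-t3 would be derived from the noise variance as described in the paper, and patch extraction around each pixel is left out.

```python
import numpy as np
from scipy.stats import skew

def windows_similar(p, q, t1, t2, t3):
    """Compare the first three statistical moments of two patches; only
    pairs passing this test proceed to the expensive weight computation."""
    return (abs(p.mean() - q.mean()) < t1
            and abs(p.var() - q.var()) < t2
            and abs(skew(p.ravel()) - skew(q.ravel())) < t3)
```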

  19. Study of hardware implementations of fast tracking algorithms

    NASA Astrophysics Data System (ADS)

    Song, Z.; De Lentdecker, G.; Dong, J.; Huang, G.; Léonard, A.; Robert, F.; Wang, D.; Yang, Y.

    2017-02-01

Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern recognition and track fitting, artificial-retina and Hough-transform methods have been introduced in the field; these have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation of the retina algorithm based on a floating-point core. Detailed measurements with this algorithm are presented. The retina performance and the capabilities of the FPGA are discussed, along with perspectives for further optimization and applications.

  20. A Simple and Fast Spline Filtering Algorithm for Surface Metrology.

    PubMed

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
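
    A one-dimensional sketch of the transform idea: for the smoothing-spline normal equations (I + alpha * D'D) w = z, with D the second-difference matrix under reflective boundary conditions, the DCT diagonalizes D'D, so the filter reduces to a pointwise division in the transform domain. This formulation, and the omission of the mapping from the ISO cutoff wavelength to alpha, are assumptions of the sketch.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_spline_filter(z, alpha):
    """Apply a non-robust spline filter to profile z by solving
    (I + alpha * D'D) w = z in the DCT domain (no iterative Cholesky)."""
    n = len(z)
    lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)  # second-difference eigenvalues
    w_hat = dct(z, norm='ortho') / (1.0 + alpha * lam**2)
    return idct(w_hat, norm='ortho')
```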

  1. Performance Evaluation of Algorithms in Lung IMRT: A comparison of Monte Carlo, Pencil Beam, Superposition, Fast Superposition and Convolution Algorithms

    PubMed Central

    Verma, T.; Painuly, N.K.; Mishra, S.P.; Shajahan, M.; Singh, N.; Bhatt, M.L.B.; Jamal, N.; Pant, M.C.

    2016-01-01

Background: The inclusion of inhomogeneity corrections in intensity-modulated small fields makes conformal irradiation of lung tumors very complicated for accurate dose delivery. Objective: In the present study, the performance of five algorithms, namely Monte Carlo, Pencil Beam, Convolution, Fast Superposition and Superposition, was evaluated in lung cancer Intensity Modulated Radiotherapy planning. Materials and Methods: Treatment plans for ten lung cancer patients previously planned with the Monte Carlo algorithm were re-planned using the same treatment planning indices (gantry angle, rank, power, etc.) in the other four algorithms. Results: The values of the radiotherapy planning parameters were recorded for all ten patients: mean dose, volume of the 95% isodose line, conformity index and homogeneity index for the target; maximum dose, mean dose and % volume receiving 20 Gy or more for the contralateral lung; % volume receiving 30 Gy or more and % volume receiving 25 Gy or more; mean dose received by the heart, % volume receiving 35 Gy or more and % volume receiving 50 Gy or more; mean dose to the esophagus and % volume receiving 45 Gy or more; maximum dose received by the spinal cord; and total monitor units and volume of the 50% isodose line. The performance of the different algorithms was also evaluated statistically. Conclusion: The MC and PB algorithms were found better as far as tumor coverage, dose distribution homogeneity in the Planning Target Volume and minimal dose to organs at risk are concerned. Superposition algorithms were found to be better than convolution and fast superposition. In the case of centrally located tumors, it is recommended to use Monte Carlo algorithms for the optimal use of radiotherapy. PMID:27853720

  2. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  3. Attitude determination using vector observations - A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. L.

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
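
    Markley's fast algorithm (FOAM) reaches the optimum without an explicit matrix decomposition; as a compact reference for the same optimum, the classic SVD solution of Wahba's problem can be written as follows (weights a_i, body-frame vectors b_i, reference vectors r_i).

```python
import numpy as np

def wahba_svd(weights, body_vecs, ref_vecs):
    """Attitude matrix minimizing Wahba's loss, via the SVD of the
    attitude profile matrix B = sum_i a_i * b_i r_i^T."""
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)   # enforce det(A) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```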

  4. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.

  5. An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.

    PubMed

    Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan

    2015-11-01

The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm with improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with the state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed.

  6. Hierarchical data visualization using a fast rectangle-packing algorithm.

    PubMed

    Itoh, Takayuki; Yamaguchi, Yumi; Ikehata, Yuko; Kajinaga, Yasumasa

    2004-01-01

    This paper presents a technique for the representation of large-scale hierarchical data which aims to provide good overviews of complete structures and the content of the data in one display space. The technique represents the data by using nested rectangles. It first packs icons or thumbnails of the lowest-level data and then generates rectangular borders that enclose the packed data. It repeats the process of generating rectangles that enclose the lower-level rectangles until the highest-level rectangles are packed. This paper presents two rectangle-packing algorithms for placing items of hierarchical data onto display spaces. The algorithms refer to Delaunay triangular meshes connecting the centers of rectangles to find gaps where rectangles can be placed. The first algorithm places rectangles where they do not overlap each other and where the extension of the layout area is minimal. The second algorithm places rectangles by referring to templates describing the ideal positions for nodes of input data. It places rectangles where they do not overlap each other and where the combination of the layout area and the distances between the positions described in the template and the actual positions is minimal. It can smoothly represent time-varying data by referring to templates that describe previous layout results. It is also suitable for semantics-based or design-based data layout by generating templates according to the semantics or design.

  7. Fast Multiscale Algorithms for Information Representation and Fusion

    DTIC Science & Technology

    2012-07-01

We completed the implementation of the new multiscale SVD (MSVD) algorithms, including an experiment applying MSVD using nearest neighbors to a publicly available LIDAR dataset for the purpose of distinguishing between vegetation and the forest floor. The final results are presented in this report (initial results were reported in the previous quarterly report).

  8. An application of fast algorithms to numerical electromagnetic modeling

    SciTech Connect

    Bezvoda, V.; Segeth, K.

    1987-03-01

    Numerical electromagnetic modeling by the finite-difference or finite-element methods leads to a large sparse system of linear algebraic equations. Fast direct methods, requiring an order of at most q log q arithmetic operations to solve a system of q equations, cannot easily be applied to such a system. This paper describes the iterative application of a fast method, namely cyclic reduction, to the numerical solution of the Helmholtz equation with a piecewise constant imaginary coefficient of the absolute term in a plane domain. By means of numerical tests the advantages and limitations of the method compared with classical direct methods are discussed. The iterative application of the cyclic reduction method is very efficient if one can exploit a known solution of a similar (e.g., simpler) problem as the initial approximation. This makes cyclic reduction a powerful tool in solving the inverse problem by trial-and-error.

  9. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  10. A fast MPP algorithm for Ising spin exchange simulations

    NASA Technical Reports Server (NTRS)

    Sullivan, Francis; Mountain, Raymond D.

    1987-01-01

A very efficient massively parallel processor (MPP) algorithm is described for performing one important class of Ising spin simulations. The results and physical significance of MPP calculations using the method described are discussed elsewhere; a few comments are made here on the problem under study, and results so far are reported. Ted Einstein provided guidance in interpreting the initial results and in suggesting calculations to perform.

  11. Fast automatic algorithm for bifurcation detection in vascular CTA scans

    NASA Astrophysics Data System (ADS)

    Brozio, Matthias; Gorbunova, Vladlena; Godenschwager, Christian; Beck, Thomas; Bernhardt, Dominik

    2012-02-01

Endovascular imaging aims at identifying vessels and their branches. Automatic vessel segmentation and bifurcation detection ease both clinical research and routine work. In this article a state-of-the-art bifurcation detection algorithm is developed and applied to vascular computed tomography angiography (CTA) scans to mark the common iliac artery and its branches, the internal and external iliacs. In contrast to other methods, our algorithm does not rely on a complete segmentation of a vessel in the 3D volume, but evaluates the cross-sections of the vessel slice by slice. Candidates for vessels are obtained by thresholding, followed by 2D connected component labeling and prefiltering by size and position. The remaining candidates are connected in a squared-distance-weighted graph. The graph is traversed with Dijkstra's algorithm to obtain candidates for the arteries. We use another set of features, considering the length and shape of the paths, to determine the best candidate and detect the bifurcation. The method was tested on 119 datasets acquired with different CT scanners and varying protocols. Both easy-to-evaluate datasets with high resolution and no apparent clinical diseases and difficult ones with low resolution, major calcifications, stents, or poor contrast between the vessel and surrounding tissue were included. The presented results are promising: in 75.7% of the cases the bifurcation was labeled correctly, and in 82.7% the common artery and one of its branches were assigned correctly. The computation time was on average 0.49 s ± 0.28 s, close to human interaction time, which makes the algorithm applicable for time-critical applications.

  12. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction (the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction), the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments were conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar Rate Distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  13. Calculation of Computational Complexity for Radix-2^p Fast Fourier Transform Algorithms for Medical Signals.

    PubMed

    Amirfattahi, Rassoul

    2013-10-01

Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, in this paper we propose a method for the exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2 and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly less computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2^3 algorithm is slightly higher than in radix-2^2, the number of real multiplications for radix-2^3 is lower than for radix-2^2. This is because twiddle factors of a special form, which require fewer real multiplications, occur more frequently in the radix-2^3 algorithm.

  14. Fast time-reversible algorithms for molecular dynamics of rigid-body systems

    NASA Astrophysics Data System (ADS)

    Kajima, Yasuhiro; Hiyama, Miyabi; Ogata, Shuji; Kobayashi, Ryo; Tamura, Tomoyuki

    2012-06-01

In this paper, we present time-reversible simulation algorithms for rigid bodies in the quaternion representation. By advancing a time-reversible algorithm [Y. Kajima, M. Hiyama, S. Ogata, and T. Tamura, J. Phys. Soc. Jpn. 80, 114002 (2011), 10.1143/JPSJ.80.114002] that requires iterations in calculating the angular velocity at each time step, we propose two kinds of iteration-free fast time-reversible algorithms that are easily implemented in codes. The codes are compared with those of existing algorithms through a demonstrative simulation of a nanometer-sized water droplet, to assess the stability of the total energy and the computation speeds.

  15. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.

  16. Fast Huffman encoding algorithms in MPEG-4 advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2014-11-01

This paper addresses the optimisation problem of Huffman encoding in the MPEG-4 Advanced Audio Coding standard. First, the Huffman encoding problem and the need to encode two side-info parameters, the scale factor and the Huffman codebook, are presented. Next, the Two Loop Search, Maximum Noise Mask Ratio and Trellis Based bit-allocation algorithms are briefly described. Further, Huffman encoding optimisations are shown: the new methods try to check and change scale factor bands as little as possible when estimating the bitrate cost or its change. Finally, the complexity of the old and new methods is calculated and compared, and the measured encoding time is given.

  17. Fast algorithm for calculating optical binary amplitude filters

    NASA Astrophysics Data System (ADS)

    Knopp, Jerome; Matalgah, Mustafa M.

    1995-08-01

A new geometric viewpoint is presented for optimizing a binary amplitude filter, based on finding an ordered set of phasors, the uncoiled phasor set (UPS), from the filter object's discrete Fourier transform that determines a convex polygon. The maximum distance across the polygon determines the value of the correlation peak and the set of frequencies that the optimal filter should pass. Algorithms are presented for finding the UPS and the maximum distance across the polygon that are competitive with optimization approaches that use binning (Farn and Goodman). The new viewpoint provides a simple way to establish a bound on the binning error.

  18. Review of alignment and SNP calling algorithms for next-generation sequencing data.

    PubMed

    Mielczarek, M; Szyda, J

    2016-02-01

Application of the massive parallel sequencing technology has become one of the most important issues in life sciences. Therefore, it was crucial to develop bioinformatics tools for next-generation sequencing (NGS) data processing. Currently, two of the most significant tasks include alignment to a reference genome and detection of single nucleotide polymorphisms (SNPs). In many types of genomic analyses, great numbers of reads need to be mapped to the reference genome; therefore, selection of the aligner is an essential step in NGS pipelines. Two main classes of algorithms, suffix tries and hash tables, have been introduced for this purpose. Suffix-array-based aligners are memory-efficient and work faster than hash-based aligners, but they are less accurate. In contrast, hash table algorithms tend to be slower but more sensitive. SNP and genotype callers may also be divided into two main approaches: heuristic and probabilistic methods. A variety of software has been developed over the past several years. In this paper, we briefly review the current development of NGS data processing algorithms and present the available software.

  19. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

Williams, Colin P.

    1999-01-01

Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(sqrt(N)) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  20. A fast algorithm for the simulation of arterial pulse waves

    NASA Astrophysics Data System (ADS)

    Du, Tao; Hu, Dan; Cai, David

    2016-06-01

    One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under a periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotic periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce the boundary conditions. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability to a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that our new algorithm is stable and can perform ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.

  1. Fast algorithms for glassy materials: methods and explorations

    NASA Astrophysics Data System (ADS)

    Middleton, A. Alan

    2014-03-01

Glassy materials with frozen disorder, including random magnets such as spin glasses and interfaces in disordered materials, exhibit striking non-equilibrium behavior such as the ability to store a history of external parameters (memory). Precisely due to their glassy nature, direct simulation of models of these materials is very slow. In some fortunate cases, however, algorithms exist that exactly compute thermodynamic quantities. Such cases include spin glasses in two dimensions and interfaces and random field magnets in arbitrary dimensions at zero temperature. Using algorithms built on ideas developed by computer scientists and mathematicians, one can even directly sample equilibrium configurations in very large systems, as if one picked the configurations out of a "hat" of all configurations weighted by their Boltzmann factors. This talk will provide some of the background for these methods and discuss the connections between physics and computer science, as used by a number of groups. Recent applications of these methods to investigating phase transitions in glassy materials and to answering qualitative questions about the free energy landscape and memory effects will be discussed. This work was supported in part by NSF grant DMR-1006731. Creighton Thomas and David Huse also contributed to much of the work to be presented.

  2. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using 'loopless constraints'. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm, Fast-SNP (fast sparse null-space pursuit), inspired by recent results on sparse null-space pursuit. By finding a reduced feasible 'loop-law' matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155

  3. A fast double template convolution isocenter evaluation algorithm with subpixel accuracy

    SciTech Connect

    Winey, Brian; Sharp, Greg; Bussiere, Marc

    2011-01-15

Purpose: To design a fast Winston Lutz (fWL) algorithm for accurate analysis of radiation isocenter from images without edge detection or center-of-mass calculations. Methods: An algorithm has been developed to implement the Winston Lutz test for mechanical/radiation isocenter agreement using an electronic portal imaging device (EPID). The algorithm detects the position of the radiation shadow of a tungsten ball within a stereotactic cone. The fWL algorithm employs a double convolution to independently find the positions of the sphere and cone centers. Subpixel estimation is used to achieve high accuracy. Results of the algorithm were compared to (1) a human observer with template guidance and (2) an edge detection/center of mass (edCOM) algorithm. Testing was performed with high-resolution (0.05 mm/px, film) and low-resolution (0.78 mm/px, EPID) image sets. Results: Sphere and cone center relative positions were calculated with the fWL algorithm for high-resolution test images with an accuracy of 0.002±0.061 mm, compared to 0.042±0.294 mm for the human observer and 0.003±0.038 mm for the edCOM algorithm. The fWL algorithm required 0.01 s per image, compared to 5 s for the edCOM algorithm and 20 s for the human observer. For lower-resolution images the fWL algorithm localized the centers with an accuracy of 0.083±0.12 mm, compared to 0.03±0.5514 mm for the edCOM algorithm. Conclusions: A fast (subsecond) subpixel algorithm has been developed that can accurately determine the center locations of the ball and cone in Winston Lutz test images without edge detection or COM calculations.
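
    A single-template version of the convolution matching with subpixel refinement might look like the sketch below; the actual fWL algorithm applies two such convolutions, one for the sphere and one for the cone, and the three-point parabolic peak fit used here is a common subpixel estimator rather than the paper's exact scheme.

```python
import numpy as np
from scipy.signal import fftconvolve

def template_center(img, template):
    """Locate a template by FFT cross-correlation, then refine the peak to
    subpixel accuracy with a parabolic fit per axis (peak assumed interior)."""
    c = fftconvolve(img, template[::-1, ::-1], mode='same')
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    dy = 0.5 * (c[iy - 1, ix] - c[iy + 1, ix]) / (c[iy - 1, ix] - 2 * c[iy, ix] + c[iy + 1, ix])
    dx = 0.5 * (c[iy, ix - 1] - c[iy, ix + 1]) / (c[iy, ix - 1] - 2 * c[iy, ix] + c[iy, ix + 1])
    return iy + dy, ix + dx
```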

  4. Fast and parallel spectral transform algorithms for global shallow water models. Doctoral thesis

    SciTech Connect

    Jakob, R.

    1993-01-01

The dissertation examines spectral transform algorithms for the solution of the shallow water equations on the sphere and studies their implementation and performance on shared-memory vector multiprocessors. Beginning with the standard spectral transform algorithm in vorticity-divergence form and its implementation in the Fortran-based parallel programming language Force, two modifications are researched. First, the transforms and matrices associated with the meridional derivatives of the associated Legendre functions are replaced by corresponding operations with the spherical harmonic coefficients. Second, based on the fast Fourier transform and the fast multipole method, a lower-complexity algorithm is derived that uses fast transformations between Legendre and interior Fourier nodes, fast surface spherical truncation, and a fast spherical Helmholtz solver. Because the global shallow water equations are similar to the horizontal dynamical component of general circulation models, the results can be applied to spectral transform numerical weather prediction and climate models. In general, the derived algorithms may speed up the solution of time-dependent partial differential equations in spherical geometry.

  5. Fast algorithms for numerical, conservative, and entropy approximations of the Fokker-Planck-Landau equation

    SciTech Connect

    Buet, C.; Cordier; Degond, P.; Lemou, M.

    1997-05-15

    We present fast numerical algorithms to solve the nonlinear Fokker-Planck-Landau equation in 3D velocity space. The discretization of the collision operator preserves the properties required by the physical nature of the Fokker-Planck-Landau equation, such as the conservation of mass, momentum, and energy, the decay of the entropy, and the fact that the steady states are Maxwellians. At the end of this paper, we give numerical results illustrating the efficiency of these fast algorithms in terms of accuracy and CPU time. 20 refs., 7 figs.

  6. Fast Parabola Detection Using Estimation of Distribution Algorithms

    PubMed Central

    Sierra-Hernandez, Juan Manuel; Avila-Garcia, Maria Susana; Rojas-Laguna, Roberto

    2017-01-01

This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, the experimental results show that the proposed method can be highly suitable for different medical applications. PMID:28321264

  7. Fast Parabola Detection Using Estimation of Distribution Algorithms.

    PubMed

    Guerrero-Turrubiates, Jose de Jesus; Cruz-Aceves, Ivan; Ledesma, Sergio; Sierra-Hernandez, Juan Manuel; Velasco, Jonas; Avina-Cervantes, Juan Gabriel; Avila-Garcia, Maria Susana; Rostro-Gonzalez, Horacio; Rojas-Laguna, Roberto

    2017-01-01

This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, the experimental results show that the proposed method can be highly suitable for different medical applications.
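
    The candidate-generation and fitness steps described above can be sketched directly; the EDA itself, which drives the sampling of the three boundary pixels, is omitted, and all names here are illustrative.

```python
import numpy as np

def parabola_from_points(p1, p2, p3):
    """Solve y = a*x^2 + b*x + c through three (x, y) boundary pixels
    (assumes the three x values are distinct)."""
    A = np.array([[x * x, x, 1.0] for x, _ in (p1, p2, p3)])
    y = np.array([y for _, y in (p1, p2, p3)])
    return np.linalg.solve(A, y)            # coefficients (a, b, c)

def fitness(edge_img, a, b, c):
    """Hadamard-product style fitness: count edge pixels hit by the parabola."""
    h, w = edge_img.shape
    xs = np.arange(w)
    ys = np.round(a * xs**2 + b * xs + c).astype(int)
    ok = (ys >= 0) & (ys < h)
    return int(edge_img[ys[ok], xs[ok]].sum())
```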

  8. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  9. SIML: a fast SIMD algorithm for calculating LINGO chemical similarities on GPUs and CPUs.

    PubMed

    Haque, Imran S; Pande, Vijay S; Walters, W Patrick

    2010-04-26

    LINGOs are a holographic measure of chemical similarity based on text comparison of SMILES strings. We present a new algorithm for calculating LINGO similarities amenable to parallelization on SIMD architectures (such as GPUs and vector units of modern CPUs). We show that it is nearly 3x as fast as existing algorithms on a CPU, and over 80x faster than existing methods when run on a GPU.
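
    A scalar (non-SIMD) baseline for LINGO comparison is only a few lines; note that the published LINGO similarity uses a slightly different per-LINGO formula, so the plain multiset Tanimoto below is an approximation, and ring-closure digits in the SMILES are normally normalised first.

```python
from collections import Counter

def lingo_similarity(smiles_a, smiles_b, q=4):
    """Approximate LINGO similarity: Tanimoto over the multisets of
    length-q substrings of two SMILES strings."""
    a = Counter(smiles_a[i:i + q] for i in range(len(smiles_a) - q + 1))
    b = Counter(smiles_b[i:i + q] for i in range(len(smiles_b) - q + 1))
    inter = sum((a & b).values())           # multiset intersection
    union = sum((a | b).values())           # multiset union
    return inter / union if union else 0.0
```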

  10. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.

  11. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA. PMID:24892059
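
    A minimal sketch of such a Gaussian EDA with elitism and a fast learning rate, written for minimization (the parameter names and default values are illustrative, not the paper's):

```python
import numpy as np

def gaussian_eda(f, dim, pop=100, elite=20, iters=200, lr=0.8, seed=0):
    """Fit a Gaussian to the best individuals each generation, blend it into
    the current model with learning rate lr, and keep the best-so-far (elitism)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    best = None
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))
        if best is not None:
            X[0] = best                              # elitism
        scores = np.apply_along_axis(f, 1, X)
        elites = X[np.argsort(scores)[:elite]]
        best = elites[0]
        mu = (1 - lr) * mu + lr * elites.mean(axis=0)
        sigma = (1 - lr) * sigma + lr * elites.std(axis=0)
    return best
```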

  12. Fast algorithm for calculation of the moving tsunami wave height

    NASA Astrophysics Data System (ADS)

    Krivorotko, Olga; Kabanikhin, Sergey

    2014-05-01

One of the most urgent problems of mathematical tsunami modeling is the estimation of the tsunami wave height as the wave approaches the coastal zone. There are two methods for solving this problem: the Airy-Green formula in the one-dimensional case,

        S(x) = S(0) (H(0)/H(x))^{1/4},

and the numerical solution of an initial-boundary value problem for the linear shallow water equations

        \eta_{tt} = div(g H(x,y) grad \eta),  (x,y,t) \in \Omega_T := \Omega \times (0,T);
        \eta|_{t=0} = q(x,y),  \eta_t|_{t=0} = 0,  (x,y) \in \Omega := (0,L_x) \times (0,L_y);     (1)
        \eta|_{\partial\Omega_T} = 0.

Here \eta(x,y,t) is the vertical displacement of the free water surface, H(x,y) is the depth at the point (x,y), q(x,y) is the initial amplitude of the tsunami wave, and S(x) is the moving tsunami wave height at the point x. The main difficulty of tsunami modeling is the very large size of the computational domain \Omega_T: computing the function \eta(x,y,t) of three variables over all of \Omega_T requires large computing resources. We construct a new algorithm for numerically determining the moving tsunami wave height which is based on a kinematic-type approach and on the analytical representation (2) of the solution structure. The wave is supposed to be generated by a seismic fault of the bottom, \eta(x,y,0) = g(y) \theta(x), where \theta(x) is the Heaviside step function. Let \tau(x,y) be the solution of the eikonal equation

        \tau_x^2 + \tau_y^2 = 1/(g H(x,y)),

satisfying the initial conditions \tau(0,y) = 0 and \tau_x(0,y) = (g H(0,y))^{-1/2}. Introducing the new variable and functions

        z = \tau(x,y),  u(z,y,t) = \eta_t(x,y,t),  b(z,y) = \sqrt{g H(x,y)},

we obtain from (1) an initial-boundary value problem in the new variables. After some mathematical transformations, the function u takes the form

        u(z,y,t) = S(z,y) \theta(t - z) + \tilde{u}(z,y,t),     (2)

where \tilde{u}(z,y,t) is a smooth function and the moving wave height S(z,y) satisfies a first-order equation in z and y derived from the transformed system, so that S can be computed without solving the full three-variable problem (1).

  13. Fast Grid Search Algorithm for Seismic Source Location

    SciTech Connect

    ALDRIDGE,DAVID F.

    2000-07-01

The spatial and temporal origin of a seismic energy source are estimated with a fast grid search technique. This approach has a greater likelihood of finding the global minimum of the arrival-time misfit function compared with conventional linearized iterative methods. The assumption of a homogeneous and isotropic seismic velocity model allows for extremely rapid computation of predicted arrival times, but probably limits application of the method to certain geologic environments and/or recording geometries. Contour plots of the arrival-time misfit function in the vicinity of the global minimum are extremely useful for (i) quantifying the uncertainty of an estimated hypocenter solution and (ii) analyzing the resolving power of a given recording configuration. In particular, simultaneous inversion of both P-wave and S-wave arrival times appears to yield a superior solution in the sense of being more precisely localized in space and time. Future research with this algorithm may involve (i) investigating the utility of nonuniform residual weighting schemes, (ii) incorporating linear and/or layered velocity models into the calculation of predicted arrival times, and (iii) applying it toward the rational design of microseismic monitoring networks.
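
    With a homogeneous, isotropic velocity model, predicted arrival times are just Euclidean distances divided by a constant velocity, which is what keeps the exhaustive search cheap. A minimal Python sketch (array shapes and names are ours):

```python
import numpy as np

def grid_search_locate(stations, t_obs, v, candidates, origin_times):
    """Exhaustive search over hypocenter candidates (x, y, z) and origin
    times t0, minimizing the RMS arrival-time misfit for constant velocity v."""
    best, best_rms = None, np.inf
    for p in candidates:
        t_travel = np.linalg.norm(stations - p, axis=1) / v
        for t0 in origin_times:
            rms = np.sqrt(np.mean((t_obs - (t0 + t_travel))**2))
            if rms < best_rms:
                best, best_rms = (p, t0), rms
    return best, best_rms
```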

  14. Fast registration algorithm using a variational principle for mutual information

    NASA Astrophysics Data System (ADS)

    Alexander, Murray E.; Summers, Randy

    2003-05-01

    A method is proposed for cross-modal image registration based on mutual information (MI) matching criteria. Both conventional and "normalized" MI are considered. MI may be expressed as a functional of a general image displacement field u. The variational principle for MI provides a field equation for u. The method employs a set of "registration points" consisting of a prescribed number of strongest edge points of the reference image, and minimizes an objective function D defined as the sum of the square residuals of the field equation for u at these points, where u is expressed as a sum over a set of basis functions (the affine model is presented here). D has a global minimum when the images are aligned, with a "basin of attraction" typically of width ~0.3 pixels. By pre-filtering with a low-pass filter, and using a multiresolution image pyramid, the basin may be significantly widened. The Levenberg-Marquardt algorithm is used to minimize D. Tests using randomly distributed misalignments of image pairs show that registration accuracy of 0.02 - 0.07 pixels is achieved, when using cubic B-splines for image representation, interpolation, and Parzen window estimation.
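
    The MI criterion itself is easy to compute from a joint intensity histogram; the paper's contribution, the variational field equation for the displacement and its solution at edge-based registration points, sits on top of this. A histogram-based sketch with Parzen windowing omitted:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two images from their joint histogram (no Parzen smoothing)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)       # marginal of image a
    py = p.sum(axis=0, keepdims=True)       # marginal of image b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```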

  15. Features that define the best ChIP-seq peak calling algorithms.

    PubMed

    Thomas, Reuben; Thomas, Sean; Holloway, Alisha K; Pollard, Katherine S

    2016-05-11

Chromatin immunoprecipitation followed by sequencing (ChIP-seq) is an important tool for studying gene regulatory proteins, such as transcription factors and histones. Peak calling is one of the first steps in the analysis of these data. Peak calling consists of two sub-problems: identifying candidate peaks and testing candidate peaks for statistical significance. We surveyed 30 methods and identified 12 features of the two sub-problems that distinguish methods from each other. We picked six methods [GEM, MACS2, MUSIC, BCP, a threshold-based method (TM) and ZINBA] that span this feature space and used a combination of 300 simulated ChIP-seq data sets, 3 real data sets and mathematical analyses to identify features of methods that allow some to perform better than others. We prove that methods that explicitly combine the signals from ChIP and input samples are less powerful than methods that do not. Methods that use windows of different sizes are more powerful than ones that do not. For statistical testing of candidate peaks, methods that use a Poisson test to rank their candidate peaks are more powerful than those that use a Binomial test. BCP and MACS2 have the best operating characteristics on simulated transcription factor binding data. GEM has the highest fraction of the top 500 peaks containing the binding motif of the immunoprecipitated factor, with 50% of its peaks within 10 base pairs of a motif. BCP and MUSIC perform best on histone data. These findings provide guidance and rationale for selecting the best peak caller for a given application.
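
    For the statistical-testing sub-problem, ranking a candidate peak with a Poisson tail probability is a one-liner; here mu stands for the expected background read count in the candidate window, an input this sketch assumes is estimated elsewhere.

```python
from scipy.stats import poisson

def peak_pvalue(k, mu):
    """P(X >= k) for observed read count k under background rate mu."""
    return poisson.sf(k - 1, mu)
```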

  16. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  17. Fast one-pass algorithm to label objects and compute their features

    NASA Astrophysics Data System (ADS)

    Thai, Tan Q.

    1991-12-01

In many image processing applications, labeling objects and computing their features for recognition are crucial steps for further analysis. In general these two steps are done separately. This paper proposes a new approach to label all objects and compute their features (such as moments, best-fit ellipse, and major and minor axes) in one pass. The basic idea of the algorithm is to detect interval overlaps among the line segments as the image is scanned from left to right, top to bottom. Ambiguity about an object's connectivity can also be resolved with the proposed algorithm. It is a fast algorithm and can be implemented on either serial or parallel processors.
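
    The interval-overlap idea can be sketched as a run-based scan with union-find to resolve connectivity ambiguities. In a full implementation the per-object features (moments, ellipse parameters) would be accumulated per run inside the same loop, which this sketch omits; the final sweep here only renames labels to their union-find roots.

```python
import numpy as np

def label_runs(binary):
    """Run-based connected-component labeling (4-connectivity) by detecting
    interval overlaps between consecutive rows; union-find merges labels."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    labels = np.zeros(binary.shape, dtype=int)
    nxt, prev = 1, []                      # prev: (start, end, label) runs above
    for r, row in enumerate(binary):
        cur, c = [], 0
        while c < len(row):
            if not row[c]:
                c += 1
                continue
            s = c
            while c < len(row) and row[c]:
                c += 1
            lab = None
            for ps, pe, pl in prev:        # interval-overlap test with row above
                if ps < c and pe > s:
                    if lab is None:
                        lab = pl
                    elif find(lab) != find(pl):
                        parent[find(lab)] = find(pl)   # merge ambiguous labels
            if lab is None:
                lab = nxt
                parent[lab] = lab
                nxt += 1
            labels[r, s:c] = lab
            cur.append((s, c, lab))
        prev = cur
    for lab in range(1, nxt):              # rename labels to union-find roots
        labels[labels == lab] = find(lab)
    return labels
```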

  18. Comparing precorrected-FFT and fast multipole algorithms for solving three-dimensional potential integral equations

    SciTech Connect

    White, J.; Phillips, J.R.; Korsmeyer, T.

    1994-12-31

Mixed first- and second-kind surface integral equations with 1/r and ∂/∂n(1/r) kernels are generated by a variety of three-dimensional engineering problems. For such problems, Nystroem-type algorithms cannot be used directly, but an expansion for the unknown, rather than for the entire integrand, can be assumed and the product of the singular kernel and the unknown integrated analytically. Combining such an approach with a Galerkin or collocation scheme for computing the expansion coefficients is a general approach, but generates dense matrix problems. Recently developed fast algorithms for solving these dense matrix problems have been based on multipole-accelerated iterative methods, in which the fast multipole algorithm is used to rapidly compute the matrix-vector products in a Krylov-subspace based iterative method. Another approach to rapidly computing the dense matrix-vector products associated with discretized integral equations follows more along the lines of a multigrid algorithm, and involves projecting the surface unknowns onto a regular grid, then computing using the grid, and finally interpolating the results from the regular grid back to the surfaces. Here, the authors describe a precorrected-FFT approach which can replace the fast multipole algorithm for accelerating the dense matrix-vector product associated with discretized potential integral equations. The precorrected-FFT method, described below, is an order n log(n) algorithm, and is asymptotically slower than the order n fast multipole algorithm. However, initial experimental results indicate the method may have a significant constant-factor advantage for a variety of engineering problems.

  19. Preliminary versions of the MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  20. A fast algorithm for depth migration by the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Gao, Zhenghui; Sun, Jianguo; Sun, Xu; Wang, Xueqiu; Sun, Zhangqing; Liu, Zhiqiang

    2017-02-01

    Depth migration by the Gaussian beam summation method has no limitation on the seismic acquisition configuration. In the past, this migration method applied the steepest descent approximation to reduce the dimension of the integrals over the ray parameters at the cost of a precision loss. However, the simplified formula was still in the frequency domain, thereby impairing the computational efficiency. We present a new fast algorithm which can increase the computational efficiency without losing precision. To develop the fast algorithm, we change the order of the integrals and treat the two innermost integrals as a couple of two-dimensional continuous functions with respect to the real and imaginary parts of the total traveltime. A couple of lookup tables corresponding to the values of the two innermost integrals are constructed at the sampling points. The results of the two innermost integrals at a certain imaging point can be obtained through interpolation in the two constructed lookup tables. Both the numerical analysis and examples validate the precision and efficiency of the fast algorithm. With the advantage of handling rugged topography, we apply the fast algorithm to the 2D Canadian Foothills velocity model.

  1. Fast, Conservative Algorithm for Solving the Transonic Full-Potential Equation

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1980-01-01

    A fast, fully implicit approximate factorization algorithm designed to solve the conservative, transonic, full-potential equation in either two or three dimensions is described. The algorithm uses an upwind bias of the density coefficient for stability in supersonic regions. This provides an effective upwind difference of the streamwise terms for any orientation of the velocity vector (i.e., rotated differencing), thereby greatly enhancing the reliability of the present algorithm. A numerical transformation is used to establish an arbitrary body-fitted, finite-difference mesh. Computed results for both airfoils and simplified wings demonstrate substantial improvement in convergence speed for the new algorithm relative to standard successive-line over-relaxation algorithms.

  2. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    PubMed

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, a FIT criterion of 92% is achieved for a CSTR process in which about 400 data points are used.
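
    For reference, the basic recursive least squares update underlying such ERLS schemes can be sketched as follows (a generic RLS step with forgetting factor; the polynomial Wiener-model specifics and the inner iteration are omitted):

        import numpy as np

        def rls_step(theta, P, x, y, lam=0.99):
            """One RLS update: parameters theta, gain matrix P, regressor x, output y."""
            Px = P @ x
            k = Px / (lam + x @ Px)            # gain vector
            theta = theta + k * (y - x @ theta)
            P = (P - np.outer(k, Px)) / lam    # gain-matrix update with forgetting
            return theta, P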

  3. A Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen

    2016-06-01

    Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads have elevation differences near their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane using the RANSAC algorithm, and then extract from it multiple refined planes that contain only pavement. The road edges are extracted based on these refined planes. In practice, the rough and refined planes are often extracted poorly due to rough road surfaces and varying point cloud density. To eliminate the influence of rough roads, we use a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model), and we also propose a method that adjusts the point clouds to a similar density to remove the influence of density variations. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.
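
    The rough-plane step can be illustrated with a minimal RANSAC plane fit (illustrative only; the tolerance, iteration count, and refinement stages of the actual method differ):

        import numpy as np

        def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
            """points: (N, 3) array; returns (n, d) of the best plane n.p + d = 0."""
            rng = np.random.default_rng(seed)
            best_count, best_model = 0, None
            for _ in range(n_iter):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-12:
                    continue                  # degenerate (collinear) sample
                n /= norm
                d = -n @ p0
                count = np.sum(np.abs(points @ n + d) < tol)  # inliers within tol
                if count > best_count:
                    best_count, best_model = count, (n, d)
            return best_model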

  4. A fast inter mode decision algorithm in H.264/AVC for IPTV broadcasting services

    NASA Astrophysics Data System (ADS)

    Kim, Geun-Yong; Yoon, Bin-Yeong; Ho, Yo-Sung

    2007-01-01

    The new video coding standard H.264/AVC employs the rate-distortion optimization (RDO) method for choosing the best coding mode. However, since it increases the encoder complexity tremendously, it is not suitable for real-time applications, such as IPTV broadcasting services. Therefore, we need a fast mode decision algorithm to reduce the encoding time. In this paper, we propose a fast mode decision algorithm that takes the quantization parameter (QP) into account, because we have observed that the frequency of best modes depends on the QP. To exploit this characteristic, we use the coded block pattern (CBP), which has a value of "0" when all quantized discrete cosine transform (DCT) coefficients are zero. We also use both early SKIP mode and early 16x16 mode decisions. Experimental results show that the proposed algorithm reduces the encoding time by 74.6% for the baseline profile and 72.8% for the main profile, compared to the H.264/AVC reference software.

  5. A fast algorithm for muon track reconstruction and its application to the ANTARES neutrino telescope

    NASA Astrophysics Data System (ADS)

    Aguilar, J. A.; Al Samarai, I.; Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Anvar, S.; Ardid, M.; Assis Jesus, A. C.; Astraatmadja, T.; Aubert, J.-J.; Auer, R.; Baret, B.; Basa, S.; Bazzotti, M.; Bertin, V.; Biagi, S.; Bigongiari, C.; Bogazzi, C.; Bou-Cabo, M.; Bouwhuis, M. C.; Brown, A. M.; Brunner, J.; Busto, J.; Camarena, F.; Capone, A.; Cârloganu, C.; Carminati, G.; Carr, J.; Cecchini, S.; Charvis, Ph.; Chiarusi, T.; Circella, M.; Coniglione, R.; Costantini, H.; Cottini, N.; Coyle, P.; Curtil, C.; Decowski, M. P.; Dekeyser, I.; Deschamps, A.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti, Q.; Drouhin, D.; Eberl, T.; Emanuele, U.; Ernenwein, J.-P.; Escoffier, S.; Fehr, F.; Flaminio, V.; Fritsch, U.; Fuda, J.-L.; Galatà, S.; Gay, P.; Giacomelli, G.; Gómez-González, J. P.; Graf, K.; Guillard, G.; Halladjian, G.; Hallewell, G.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Herold, B.; Hößl, J.; Hsu, C. C.; de Jong, M.; Kadler, M.; Kalantar-Nayestanaki, N.; Kalekin, O.; Kappes, A.; Katz, U.; Kooijman, P.; Kopper, C.; Kouchner, A.; Kulikovskiy, V.; Lahmann, R.; Lamare, P.; Larosa, G.; Lefèvre, D.; Lim, G.; Lo Presti, D.; Loehner, H.; Loucatos, S.; Lucarelli, F.; Mangano, S.; Marcelin, M.; Margiotta, A.; Martinez-Mora, J. A.; Mazure, A.; Meli, A.; Montaruli, T.; Morganti, M.; Moscoso, L.; Motz, H.; Naumann, C.; Neff, M.; Palioselitis, D.; Păvălaş, G. E.; Payre, P.; Petrovic, J.; Picot-Clemente, N.; Picq, C.; Popa, V.; Pradier, T.; Presani, E.; Racca, C.; Reed, C.; Riccobene, G.; Richardt, C.; Richter, R.; Rostovtsev, A.; Rujoiu, M.; Russo, G. V.; Salesa, F.; Sapienza, P.; Schöck, F.; Schuller, J.-P.; Shanidze, R.; Simeone, F.; Spiess, A.; Spurio, M.; Steijger, J. J. M.; Stolarczyk, Th.; Taiuti, M.; Tamburini, C.; Tasca, L.; Toscano, S.; Vallage, B.; Van Elewyck, V.; Vannoni, G.; Vecchi, M.; Vernin, P.; Wijnker, G.; de Wolf, E.; Yepes, H.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.

    2011-04-01

    An algorithm is presented that provides a fast and robust reconstruction of neutrino-induced upward-going muons and a discrimination of these events from the downward-going atmospheric muon background in data collected by the ANTARES neutrino telescope. The algorithm consists of a hit merging and hit selection procedure followed by fitting steps for a track hypothesis and a point-like light source. It is particularly well-suited for real-time applications such as online monitoring and fast triggering of optical follow-up observations for multi-messenger studies. The performance of the algorithm is evaluated with Monte Carlo simulations and various distributions are compared with those obtained from ANTARES data.

  6. A fast and accurate algorithm for high-frequency trans-ionospheric path length determination

    NASA Astrophysics Data System (ADS)

    Wijaya, Dudy D.

    2015-12-01

    This paper presents a fast and accurate algorithm for high-frequency trans-ionospheric path length determination. The algorithm is based on the solution of the Eikonal equation, which is solved using the conformal theory of refraction. The main advantages of the algorithm are summarized as follows. First, the algorithm can determine the optical path length without iteratively adjusting the elevation and azimuth angles, and hence the computational time can be reduced. Second, for the same elevation and azimuth angles, the algorithm can simultaneously determine the phase and group path lengths of both the ordinary and extraordinary rays for different frequencies. Results from numerical simulations show that the computational time required by the proposed algorithm to accurately determine 8 different optical path lengths is almost 17 times shorter than that required by a 3D ionospheric ray-tracing algorithm. It is found that the computational time to determine multiple optical path lengths is the same as that for determining a single optical path length. It is also found that the proposed algorithm is capable of determining the optical path lengths with millimeter-level accuracy if the magnitude of the squared ratio of the plasma frequency to the transmitted frequency is less than 1.33 × 10^{-3}, and hence the proposed algorithm is applicable for geodetic applications.

  7. Raw data based image processing algorithm for fast detection of surface breaking cracks

    NASA Astrophysics Data System (ADS)

    Sruthi Krishna K., P.; Puthiyaveetil, Nithin; Kidangan, Renil; Unnikrishnakurup, Sreedhar; Zeigler, Mathias; Myrach, Philipp; Balasubramaniam, Krishnan; Biju, P.

    2017-02-01

    The aim of this work is to illustrate the contribution of signal processing techniques to the field of Non-Destructive Evaluation. A component's life evaluation is inevitably related to the presence of flaws in it. The detection and characterization of cracks prior to damage is a technologically and economically significant task, and is of great importance for safety-relevant measures. Laser thermography is the most effective and advanced thermography method for Non-Destructive Evaluation; its high capability for detecting surface cracks and characterizing the geometry of artificial surface flaws in metallic samples is particularly encouraging. It is a non-contacting, fast, real-time detection method. The presence of a vertical surface-breaking crack disturbs the thermal footprint. The data processing method plays a vital role in the fast detection of surface and sub-surface cracks. Current laser thermographic inspection lacks a suitable data processing algorithm for fast crack detection, and the analysis of data is done as part of post-processing. In this work we introduce a raw-data-based image processing algorithm that yields precise and fast crack detection. The algorithm gives good results on both experimental and modeling data. Applying this algorithm, we carried out a detailed investigation of the variation of thermal contrast with crack parameters such as depth and width. The algorithm was applied to surface temperature data from a 2D scanning model, and its credibility was validated with experimental data.

  8. Dynamic Multiple-Threshold Call Admission Control Based on Optimized Genetic Algorithm in Wireless/Mobile Networks

    NASA Astrophysics Data System (ADS)

    Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin

    Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class services in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which provides different priorities between service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components, such as encoding, population initialization, fitness function and mutation, are all optimized in terms of the traits of the CAC problem. Simulations demonstrate that the proposed CAC scheme outperforms similar schemes, confirming that the optimization is effective. Finally, the simulations show the efficiency of OGA.

  9. Fast phase unwrapping algorithm based on region partition for structured light vision measurement

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Su, Hang

    2014-04-01

    Phase unwrapping is a key problem in phase-shifting profilometry vision measurement for complex object surface shapes. The simple path-following phase unwrapping algorithm is fast but suffers serious unwrapping errors for complex shapes. The Goldstein+flood phase unwrapping algorithm can handle some complex-shape object measurements; however, it is time consuming. We propose a fast phase unwrapping algorithm based on region partition according to a quality map of the wrapped phase. In this algorithm, the wrapped phase image is divided into several regions using partition thresholds, which are determined according to the histogram of quality values. Each region is unwrapped using a simple path-following algorithm, and several groups with different priorities are generated. These groups are merged in order of priority from high to low, and a final absolute phase is obtained. The proposed method is applied to wrapped phase images of three objects with and without noise. Experiments show that the proposed method is much faster, more accurate, and more robust to noise than the Goldstein+flood algorithm in unwrapping complex phase images.

  10. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving the detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
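
    For context, the raw image moments on which such features are built are defined by M_pq = Σ_x Σ_y x^p y^q I(x, y); a minimal sketch follows (the MFEA itself involves more than these raw moments):

        import numpy as np

        def raw_moment(image, p, q):
            y, x = np.indices(image.shape)      # row (y) and column (x) indices
            return np.sum((x ** p) * (y ** q) * image)

        img = np.random.rand(64, 64)
        m00 = raw_moment(img, 0, 0)             # total intensity
        cx = raw_moment(img, 1, 0) / m00        # centroid x
        cy = raw_moment(img, 0, 1) / m00        # centroid y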

  11. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique for coarse frequency estimation (locating the peak of the FFT amplitude) is more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
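
    A hedged sketch of the two-step idea: a coarse estimate from the FFT amplitude peak, refined by counting zero crossings of the signal (the paper's modified zero-crossing technique is more elaborate):

        import numpy as np

        def estimate_frequency(x, fs):
            x = np.asarray(x, dtype=float)
            # Coarse step: locate the FFT amplitude peak (skipping the DC bin).
            spec = np.abs(np.fft.rfft(x))
            f_coarse = (np.argmax(spec[1:]) + 1) * fs / len(x)
            # Fine step: zero crossings give the cycle count over the record
            # (assumes the record contains several clean cycles).
            c = np.nonzero(np.diff(np.signbit(x)))[0]
            f_fine = 0.5 * (len(c) - 1) * fs / (c[-1] - c[0])
            return f_coarse, f_fine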

  12. The Block V Receiver fast acquisition algorithm for the Galileo S-band mission

    NASA Technical Reports Server (NTRS)

    Aung, M.; Hurd, W. J.; Buu, C. M.; Berner, J. B.; Stephens, S. A.; Gevargiz, J. M.

    1994-01-01

    A fast acquisition algorithm for the Galileo suppressed carrier, subcarrier, and data symbol signals under low data rate, low signal-to-noise ratio (SNR), and high carrier phase-noise conditions has been developed. The algorithm employs a two-arm fast Fourier transform (FFT) method utilizing both the in-phase and quadrature-phase channels of the carrier. The use of both channels results in an improved SNR in the FFT acquisition, enabling the use of a shorter FFT period over which the carrier instability is expected to be less significant. The use of a two-arm FFT also enables subcarrier and symbol acquisition before carrier acquisition. With the subcarrier and symbol loops locked first, the carrier can be acquired from an even shorter FFT period. Two-arm tracking loops are employed to lock the subcarrier and symbol loops, with parameter modification to achieve the final (high) loop SNR in the shortest time possible. The fast acquisition algorithm is implemented in the Block V Receiver (BVR). This article describes the complete algorithm design, the extensive computer simulation work done for verification of the design and the analysis, implementation issues in the BVR, and the acquisition times of the algorithm. In the expected case of the Galileo spacecraft at Jupiter orbit insertion, PD/No equals 14.6 dB-Hz and R(sym) equals 16 symbols per second; the predicted acquisition time of the algorithm (to attain a 0.2-dB degradation from each loop to the output symbol SNR) is 38 sec.

  13. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
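
    To fix ideas, here is a minimal Gillespie simulation of SIS dynamics on a static network; the paper's contribution is the non-trivial extension of exactly this event-driven loop to temporal networks:

        import numpy as np

        def gillespie_sis(adj, beta, mu, infected, t_max, seed=1):
            """adj: dict node -> list of neighbors. Returns [(time, #infected), ...]."""
            rng = np.random.default_rng(seed)
            t, infected, history = 0.0, set(infected), []
            while t < t_max and infected:
                # Event rates: recovery mu per infected node, infection beta per S-I edge.
                si = [(i, j) for i in infected for j in adj[i] if j not in infected]
                total = mu * len(infected) + beta * len(si)
                t += rng.exponential(1.0 / total)       # waiting time to next event
                if rng.random() < mu * len(infected) / total:
                    infected.remove(list(infected)[rng.integers(len(infected))])
                else:
                    infected.add(si[rng.integers(len(si))][1])
                history.append((t, len(infected)))
            return history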

  14. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable in improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need further improvement. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment takes only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620

  15. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    PubMed

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.

  16. A Fast Color Image Encryption Algorithm Using 4-Pixel Feistel Structure

    PubMed Central

    Yao, Wang; Wu, Faguo; Zhang, Xiao; Zheng, Zhiming; Wang, Zhao; Wang, Wenhua; Qiu, Wangjie

    2016-01-01

    Algorithms using 4-pixel Feistel structure and chaotic systems have been shown to resolve security problems caused by large data capacity and high correlation among pixels for color image encryption. In this paper, a fast color image encryption algorithm based on the modified 4-pixel Feistel structure and multiple chaotic maps is proposed to improve the efficiency of this type of algorithm. Two methods are used. First, a simple round function based on a piecewise linear function and a tent map is used to reduce computational cost during each iteration. Second, the 4-pixel Feistel structure reduces round number by changing twist direction securely to help the algorithm proceed efficiently. While a large number of simulation experiments prove its security performance, additional special analysis and a corresponding speed simulation show that these two methods increase the speed of the proposed algorithm (0.15s for a 256*256 color image) to twice that of an algorithm with a similar structure (0.37s for the same size image). Additionally, the method is also faster than other recently proposed algorithms. PMID:27824894

  17. A Fast Color Image Encryption Algorithm Using 4-Pixel Feistel Structure.

    PubMed

    Yao, Wang; Wu, Faguo; Zhang, Xiao; Zheng, Zhiming; Wang, Zhao; Wang, Wenhua; Qiu, Wangjie

    2016-01-01

    Algorithms using 4-pixel Feistel structure and chaotic systems have been shown to resolve security problems caused by large data capacity and high correlation among pixels for color image encryption. In this paper, a fast color image encryption algorithm based on the modified 4-pixel Feistel structure and multiple chaotic maps is proposed to improve the efficiency of this type of algorithm. Two methods are used. First, a simple round function based on a piecewise linear function and a tent map is used to reduce computational cost during each iteration. Second, the 4-pixel Feistel structure reduces round number by changing twist direction securely to help the algorithm proceed efficiently. While a large number of simulation experiments prove its security performance, additional special analysis and a corresponding speed simulation show that these two methods increase the speed of the proposed algorithm (0.15s for a 256*256 color image) to twice that of an algorithm with a similar structure (0.37s for the same size image). Additionally, the method is also faster than other recently proposed algorithms.

  18. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  19. Fast maximum intensity projection algorithm using shear warp factorization and reduced resampling.

    PubMed

    Fang, Laifa; Wang, Yi; Qiu, Bensheng; Qian, Yuancheng

    2002-04-01

    Maximum intensity projection (MIP) is routinely used to view MRA and other volumetric angiographic data. The straightforward implementation of MIP is ray casting, which traces a volumetric data set in a computationally expensive manner. This article reports a fast MIP algorithm using shear warp factorization and reduced resampling that drastically reduces the redundancy in the projection computations, thereby speeding up MIP by more than 10 times.
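
    The core of MIP is simply a maximum taken along the viewing direction; for an axis-aligned view this is a one-liner, and shear-warp factorization generalizes it to arbitrary views by shearing the slices before the maximum and warping the result afterwards. A minimal sketch:

        import numpy as np

        volume = np.random.rand(128, 256, 256)   # e.g. an MRA volume, axes (z, y, x)
        mip = volume.max(axis=0)                 # axis-aligned maximum intensity projection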

  20. Fast algorithm of byte-to-byte wavelet transform for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.

    2002-11-01

    A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the image transformation with the Cohen-Daubechies-Feauveau wavelet of the second order, using the lifting scheme for the calculations. The proposed algorithm is based on the "checkerboard" computation scheme for a non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, only one detail image is produced at each level of decomposition, which simplifies further analysis for data compression. The calculations are simple, without any floating-point operations, allowing the implementation of the designed algorithm on fixed-point DSP processors for fast, near-real-time processing. The proposed algorithm does not achieve perfect restoration of the processed data because of the rounding introduced at each level of decomposition/restoration to perform operations on byte-represented data. The designed algorithm was tested on different images. The criterion used to estimate the quality of the restored images quantitatively was the well-known PSNR; for visual quality estimation, error maps between the original and restored images were calculated. The simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases, but remains sufficiently high even after 6 levels. The introduced distortions are concentrated in the vicinity of high-spatial-activity details and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
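
    The border treatment can be illustrated with an integer Haar lifting step, which uses integer operations only and is exactly invertible (the CDF(2,2) lifting steps and byte rounding of the actual algorithm are omitted):

        def haar_lift(a, b):
            d = a - b             # detail coefficient
            s = b + (d >> 1)      # approximation (integer average)
            return s, d

        def haar_unlift(s, d):
            b = s - (d >> 1)
            a = d + b
            return a, b

        assert haar_unlift(*haar_lift(77, 13)) == (77, 13)   # exactly invertible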

  1. A Universal Fast Algorithm for Sensitivity-Based Structural Damage Detection

    PubMed Central

    Yang, Q. W.; Liu, J. K.; Li, C. H.; Liang, C. F.

    2013-01-01

    Structural damage detection using measured response data has emerged as a new research area in civil, mechanical, and aerospace engineering communities in recent years. In this paper, a universal fast algorithm is presented for sensitivity-based structural damage detection, which can quickly improve the calculation accuracy of the existing sensitivity-based technique without any high-order sensitivity analysis or multi-iterations. The key formula of the universal fast algorithm is derived from the stiffness and flexibility matrix spectral decomposition theory. With the introduction of the key formula, the proposed method is able to quickly achieve more accurate results than that obtained by the original sensitivity-based methods, regardless of whether the damage is small or large. Three examples are used to demonstrate the feasibility and superiority of the proposed method. It has been shown that the universal fast algorithm is simple to implement and quickly gains higher accuracy over the existing sensitivity-based damage detection methods. PMID:24453815

  2. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-03-09

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of contour pixel, based on its local pattern. Then, it traces the next contour using the previous pixel's type. Therefore, it can classify the type of contour pixels as a straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of the compressing data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms.

  3. A fast, robust algorithm for power line interference cancellation in neural recording

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed
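
    A much-simplified version of the core idea, for illustration: adapt in-phase and quadrature weights for a fixed mains frequency with LMS and subtract the estimate (the published algorithm additionally tracks the fundamental frequency and all harmonics with a modified RLS):

        import numpy as np

        def cancel_line_noise(x, fs, f0=50.0, mu=0.01):
            x = np.asarray(x, dtype=float)
            n = np.arange(len(x))
            ref = np.stack([np.cos(2 * np.pi * f0 * n / fs),
                            np.sin(2 * np.pi * f0 * n / fs)])   # I/Q reference pair
            w = np.zeros(2)
            y = np.empty_like(x)
            for k in range(len(x)):
                est = w @ ref[:, k]               # current interference estimate
                y[k] = x[k] - est                 # cleaned output sample
                w += 2 * mu * y[k] * ref[:, k]    # LMS weight update
            return y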

  4. Fast Fourier transform based direct integration algorithm for the linear canonical transform

    NASA Astrophysics Data System (ADS)

    Wang, Dayong; Liu, Changgeng; Wang, Yunxin; Zhao, Jie

    2011-03-01

    The linear canonical transform (LCT) is a parameterized linear integral transform, which is the general case of many well-known transforms such as the Fourier transform (FT), the fractional Fourier transform (FRT) and the Fresnel transform (FST). These integral transforms are of great importance in wave propagation problems because they are the solutions of the wave equation under a variety of circumstances. In optics, the LCT can be used to model paraxial free space propagation and other quadratic phase systems such as lenses and graded-index media. A number of algorithms have been presented to compute the LCT quickly. When they are used to compute the LCT, the sampling period in the transform domain is dependent on that in the signal domain. This drawback limits their applicability in some cases such as color digital holography. In this paper, a Fast-Fourier-Transform-based Direct Integration algorithm (FFT-DI) for the LCT is presented. The FFT-DI is a fast computational method for the Direct Integration (DI) of the LCT. It removes the dependency of the sampling period in the transform domain on that in the signal domain. Simulations and experimental results are presented to validate this idea.
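
    In its simplest form, direct integration of a transform with a shift-invariant kernel, s(x) = Σ_u f(u) K(x − u) Δu, can be evaluated as a zero-padded FFT convolution; a hedged sketch, with the actual LCT kernel and sampling bookkeeping omitted:

        import numpy as np

        def direct_integration_fft(f, kernel_func, du):
            """Evaluate s[j] = du * sum_i f[i] * K((j - i) * du) for j = 0..n-1."""
            n = len(f)
            K = kernel_func(np.arange(-(n - 1), n) * du)   # kernel at all displacements
            m = 2 * n - 1
            s = np.fft.ifft(np.fft.fft(f, m) * np.fft.fft(K, m))
            return du * s[n - 1:2 * n - 1]                 # samples aligned with f's grid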

  5. Fast Fourier transform based direct integration algorithm for the linear canonical transform

    NASA Astrophysics Data System (ADS)

    Wang, Dayong; Liu, Changgeng; Wang, Yunxin; Zhao, Jie

    2010-07-01

    The linear canonical transform (LCT) is a parameterized linear integral transform, which is the general case of many well-known transforms such as the Fourier transform (FT), the fractional Fourier transform (FRT) and the Fresnel transform (FST). These integral transforms are of great importance in wave propagation problems because they are the solutions of the wave equation under a variety of circumstances. In optics, the LCT can be used to model paraxial free space propagation and other quadratic phase systems such as lenses and graded-index media. A number of algorithms have been presented to compute the LCT quickly. When they are used to compute the LCT, the sampling period in the transform domain is dependent on that in the signal domain. This drawback limits their applicability in some cases such as color digital holography. In this paper, a Fast-Fourier-Transform-based Direct Integration algorithm (FFT-DI) for the LCT is presented. The FFT-DI is a fast computational method for the Direct Integration (DI) of the LCT. It removes the dependency of the sampling period in the transform domain on that in the signal domain. Simulations and experimental results are presented to validate this idea.

  6. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, due to the multiple iterations needed for convergence, with each iteration involving forward/back-projections using a complex geometric system model. Optimization transfer (OT) is a general algorithm converting a high dimensional optimization into parallel 1-D updates. OT-based algorithms provide monotonic convergence and a parallel computing framework, but a slower convergence rate, especially around the global optimum. Based on an indirect estimate of the spectrum of the OT convergence rate matrix, we propose a successively increasing factor-scaled optimization transfer (OT) algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method such as the separable parabolic surrogate with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations. Each iteration retains a computational cost similar to that of PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves 40% of the total computing time. In general, the successively increasing factor-scaled OT shows tremendous potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.

  7. Fast automated yeast cell counting algorithm using bright-field and fluorescence microscopic images

    PubMed Central

    2013-01-01

    Background The faithful determination of the concentration and viability of yeast cells is important for biological research as well as industry. To this end, it is important to develop an automated cell counting algorithm that can provide not only fast but also accurate and precise measurement of yeast cells. Results With the proposed method, we measured the precision of yeast cell measurements by using 0%, 25%, 50%, 75% and 100% viability samples. As a result, the actual viability measured with the proposed yeast cell counting algorithm is significantly correlated to the theoretical viability (R2 = 0.9991). Furthermore, we evaluated the performance of our algorithm on various computing platforms. The results showed that the proposed algorithm is feasible to use with low-end computing platforms without loss of performance. Conclusions Our yeast cell counting algorithm can rapidly provide the total number and the viability of yeast cells with exceptional accuracy and precision. Therefore, we believe that our method can be beneficial for a wide variety of academic fields and industries, such as biotechnology, pharmaceuticals and alcohol production. PMID:24215650

  8. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capability, removing higher-order polynomial trends, than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce the computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
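
    The flavor of the speedup, in miniature: a length-w moving sum computed from cumulative sums costs O(N) rather than O(Nw), and the paper's recurrence formulas extend this idea to higher-order polynomial detrending:

        import numpy as np

        def moving_average(x, w):
            c = np.concatenate(([0.0], np.cumsum(x)))
            return (c[w:] - c[:-w]) / w        # mean of every length-w window

        x = np.random.randn(10_000)
        ma = moving_average(x, 101)            # 101-point moving average in O(N)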

  9. A fast rank-reduction algorithm for three-dimensional seismic data interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Yongna; Yu, Siwei; Liu, Lina; Ma, Jianwei

    2016-09-01

    Rank-reduction methods have been successfully used for seismic data interpolation and noise attenuation. However, highly intensive computation is required for the singular value decomposition (SVD) in most rank-reduction methods. In this paper, we propose a simple yet efficient interpolation algorithm, based on the Hankel matrix, for randomly missing traces. Following the multichannel singular spectrum analysis (MSSA) technique, we first transform the seismic data into a low-rank block Hankel matrix for each frequency slice. Then, a fast orthogonal rank-one matrix pursuit (OR1MP) algorithm is employed to minimize the low-rank constraint of the block Hankel matrix. In the new algorithm, only the top left and right singular vectors need to be computed, thereby avoiding the computational complexity of the SVD. Thus, we improve the calculation efficiency significantly. Finally, we anti-average the rank-reduced block Hankel matrix and obtain the reconstructed data in the frequency domain. Numerical experiments on 3D seismic data show that the proposed interpolation algorithm provides much better performance than the traditional MSSA algorithm in computational speed, especially for large-scale data processing.

  10. Peak detection in fiber Bragg grating using a fast phase correlation algorithm

    NASA Astrophysics Data System (ADS)

    Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.

    2014-05-01

    The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have already been proposed in the literature. Among these, conventional peak detection (CPD) methods, such as the maximum detection algorithm (MDA), do not achieve very high precision and accuracy, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm which performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
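
    Correlation-based peak tracking, which the FPC accelerates, can be sketched as follows: the lag maximizing the cross-correlation between a reference spectrum and a measured one gives the Bragg wavelength shift (names are illustrative; the FPC itself works in the Fourier/phase domain for speed):

        import numpy as np

        def spectral_shift(reference, measured, d_lambda):
            corr = np.correlate(measured - measured.mean(),
                                reference - reference.mean(), mode="full")
            lag = np.argmax(corr) - (len(reference) - 1)   # lag in samples
            return lag * d_lambda                          # shift in wavelength units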

  11. A Fast Sphere Decoding Algorithm for Space-Frequency Block Codes

    NASA Astrophysics Data System (ADS)

    Safar, Zoltan; Su, Weifeng; Liu, K. J. Ray

    2006-12-01

    The recently proposed space-frequency-coded MIMO-OFDM systems have promised considerable performance improvement over single-antenna systems. However, in order to make multiantenna OFDM systems an attractive choice for practical applications, implementation issues such as decoding complexity must be addressed successfully. In this paper, we propose a computationally efficient decoding algorithm for space-frequency block codes. The central part of the algorithm is a modulation-independent sphere decoding framework formulated in the complex domain. We develop three decoding approaches: a modulation-independent approach applicable to any memoryless modulation method, a QAM-specific and a PSK-specific fast decoding algorithm performing nearest-neighbor signal point search. The computational complexity of the algorithms is investigated via both analysis and simulation. The simulation results demonstrate that the proposed algorithm can significantly reduce the decoding complexity. We observe up to 75% reduction in the required FLOP count per code block compared to previously existing methods without noticeable performance degradation.

  12. Fast dose algorithm for generation of dose coverage probability for robustness analysis of fractionated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; Ahnesjö, Anders

    2015-07-01

    A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable.

  13. A Fast and Precise Indoor Localization Algorithm Based on an Online Sequential Extreme Learning Machine †

    PubMed Central

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427

  14. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-15

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.

  15. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
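
    For reference, the standard FISTA iteration the paper builds on, shown here for an l1-regularized least-squares problem (in the paper the plain gradient step is replaced by an OS-SART subproblem):

        import numpy as np

        def fista(A, b, lam, n_iter=100):
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                g = A.T @ (A @ z - b)            # gradient at the extrapolated point
                w = z - g / L
                x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
                x, t = x_new, t_new
            return x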

  16. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  17. A fast and frugal algorithm to strengthen diagnosis and treatment decisions for catheter-associated bacteriuria

    PubMed Central

    Naik, Aanand D.; Skelton, Felicia; Amspoker, Amber B.; Glasgow, Russell A.; Trautner, Barbara W.

    2017-01-01

    Objectives Guidelines for managing catheter-associated urinary tract infection (CAUTI) and asymptomatic bacteriuria (ASB) are poorly translated into routine care, due in part to cognitive diagnostic errors. This study determines whether the accuracy of CAUTI and ASB diagnosis and treatment improves after implementation of a fast and frugal algorithm compared with traditional education methods. Materials and methods A pre- and post-intervention study with a contemporaneous comparison site, involving inpatient and long-term care wards at two regional Veterans Affairs Systems in the United States. Participants included 216 internal medicine residents and 16 primary care clinicians. Intervention clinicians received training with a fast and frugal algorithm. Comparison site clinicians received guidelines education. Diagnosis and treatment accuracy compared with a criterion standard was assessed during similar three-month pre- and post-intervention periods. Sensitivity, specificity, and likelihood ratios were compared for both periods at each site. Results Bacteriuria management was evaluated against the criterion standard in 196 cases pre-implementation and 117 cases post-implementation. Accuracy of bacteriuria management among intervention participants was significantly higher post-implementation than at the comparison site (intervention: positive likelihood ratio (LR+) = 8.5, specificity (95% confidence interval (CI)) = 0.89 (0.78−1.00); comparison: LR+ = 4.62, specificity (95%CI) = 0.79 (0.63−0.95)). Further, improvements at the intervention site were statistically significant (pre-implementation: LR+ = 2.1, specificity (95%CI) = 0.60 (0.50−0.71); post-implementation: LR+ = 8.5, specificity (95%CI) = 0.89 (0.78−1.00)). At both sites, there were similar improvements in negative LR from pre- to post-implementation [intervention site: 0.28 to 0.08; comparison site: 0.13 to 0.04]. Inappropriate management of ASB declined markedly from 32 (40%) to 3 (11%) cases at the intervention

  18. Fast mapping algorithm of lighting spectrum and GPS coordinates for a large area

    NASA Astrophysics Data System (ADS)

    Lin, Chih-Wei; Hsu, Ke-Fang; Hwang, Jung-Min

    2016-09-01

    In this study, we propose a fast rebuild technique for evaluating light quality over large areas. Outdoor light quality, measured by illuminance uniformity and the color rendering index, is difficult to confirm after an improvement project. We develop an algorithm that maps lighting quality to GPS coordinates, using a micro-spectrometer and GPS tracker integrated with a quadcopter or other unmanned aerial vehicle. After the vehicle cruises at a constant altitude, lighting quality data are transmitted and immediately mapped to evaluate the light quality over the whole area.

  19. ADaM: augmenting existing approximate fast matching algorithms with efficient and exact range queries

    PubMed Central

    2014-01-01

    Background Drug discovery, disease detection, and personalized medicine are fast-growing areas of genomic research. With the advancement of next-generation sequencing techniques, researchers can obtain an abundance of data for many different biological assays in a short period of time. When this data is error-free, the result is a high-quality base-pair resolution picture of the genome. However, when the data is lossy, the heuristic algorithms currently used to align next-generation sequences cause the corresponding accuracy to drop. Results This paper describes a program, ADaM (APF DNA Mapper), which significantly increases final alignment accuracy. ADaM works by first using an existing program to align "easy" sequences, and then using an algorithm with accuracy guarantees (the APF) to align the remaining sequences. The final result is a technique that increases the mapping accuracy from only 60% to over 90% for harder-to-align sequences. PMID:25079667

  20. Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

    2014-08-01

    The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ~25 up to ~500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.

  1. Statistical properties of an algorithm used for illicit substance detection by fast-neutron transmission

    SciTech Connect

    Smith, D.L.; Sagalovsky, L.; Micklich, B.J.; Harper, M.K.; Novick, A.H.

    1994-06-01

    A least-squares algorithm developed for analysis of fast-neutron transmission data resulting from non-destructive interrogation of sealed luggage and containers is subjected to a probabilistic interpretation. The approach is to convert knowledge of uncertainties in the derived areal elemental densities, as provided by this algorithm, into probability information that can be used to judge whether an interrogated object is either benign or potentially contains an illicit substance that should be investigated further. Two approaches are considered in this paper. One involves integration of a normalized probability density function associated with the least-squares solution. The other tests this solution against a hypothesis that the interrogated object indeed contains illicit material. This is accomplished by an application of the F-distribution from statistics. These two methods of data interpretation are applied to specific sets of neutron transmission results produced by Monte Carlo simulation.

  2. A new cross-diamond search algorithm for fast block motion estimation

    NASA Astrophysics Data System (ADS)

    Zhu, Shiping; Shen, Xiaodong

    2008-10-01

    In block motion estimation, search patterns have a large impact on the search speed and quality of performance. Based on the motion vector distribution characteristics of real-world video sequences, we propose a new cross-diamond search algorithm (NCDS) that uses cross-shaped search patterns before the large/small diamond search patterns. NCDS employs a halfway-stop technique to achieve significant speedup on sequences with (quasi-)stationary blocks, and a Modified Partial Distortion Criterion (MPDC), which results in fewer search points with similar distortion. Experimental results show that NCDS achieves up to a 16% speedup over CDS while maintaining similar prediction accuracy, and provides faster search speed and smaller distortion than other popular fast block-matching algorithms.
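
    To make the search-pattern idea concrete, the following is a sketch of the small-diamond refinement stage shared by this family of algorithms, using plain sum-of-absolute-differences (SAD) as the distortion measure; the cross and large-diamond stages, the halfway-stop test, and the MPDC of the actual NCDS are omitted.

        import numpy as np

        def sad(block, ref, x, y):
            # Sum of absolute differences between the block and a candidate patch.
            h, w = block.shape
            return np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()

        def small_diamond_search(block, ref, x0, y0):
            # Repeatedly move to the best of the four small-diamond neighbours;
            # stop when the centre is already the best position.
            h, w = block.shape
            x, y = x0, y0
            best = sad(block, ref, x, y)
            while True:
                moved = False
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx <= ref.shape[1] - w and 0 <= ny <= ref.shape[0] - h:
                        cost = sad(block, ref, nx, ny)
                        if cost < best:
                            best, x, y, moved = cost, nx, ny, True
                if not moved:
                    return (x - x0, y - y0), best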

  3. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceanographic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.

  4. A Fast Algorithm for 2D DOA Estimation Using an Omnidirectional Sensor Array.

    PubMed

    Nie, Weike; Xu, Kaijie; Feng, Dazheng; Wu, Chase Qishi; Hou, Aiqin; Yin, Xiaoyan

    2017-03-04

    The traditional 2D MUSIC algorithm fixes the azimuth or the elevation, and searches for the other without considering the directions of sources. A spectrum peak diffusion effect phenomenon is observed and may be utilized to detect the approximate directions of sources. Accordingly, a fast 2D MUSIC algorithm, which performs azimuth and elevation simultaneous searches (henceforth referred to as AESS) based on only three rounds of search, is proposed. Firstly, AESS searches along a circle to detect the approximate source directions. Then, a subsequent search is launched along several straight lines based on these approximate directions. Finally, the 2D Direction of Arrival (DOA) of each source is derived by searching on several small concentric circles. Unlike the 2D MUSIC algorithm, AESS does not fix any azimuth and elevation parameters. Instead, the adjacent point of each search possesses different azimuth and elevation, i.e., azimuth and elevation are simultaneously searched to ensure that the search path is minimized, and hence the total spectral search over the angular field of view is avoided. Simulation results demonstrate the performance characteristics of the proposed AESS over some existing algorithms.
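
    For orientation, a sketch of the core computation that any such search strategy evaluates repeatedly: the 2D MUSIC pseudo-spectrum at a single (azimuth, elevation) point. The uniform-circular-array steering vector and the assumed number of sources are illustrative and not taken from the paper.

        import numpy as np

        def steering_uca(az, el, n_elem=8, radius=0.5):
            # Steering vector of a uniform circular array; radius in wavelengths.
            phi = 2.0 * np.pi * np.arange(n_elem) / n_elem
            tau = radius * np.cos(el) * np.cos(az - phi)  # path differences
            return np.exp(2j * np.pi * tau)

        def music_pseudospectrum(R, az, el, n_src=1):
            # Noise subspace from the sample covariance R (eigenvalues ascending).
            _, V = np.linalg.eigh(R)
            En = V[:, : R.shape[0] - n_src]
            a = steering_uca(az, el)
            return 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)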

  5. A fast 3D image simulation algorithm of moving target for scanning laser radar

    NASA Astrophysics Data System (ADS)

    Li, Jicheng; Shi, Zhiguang; Chen, Xiao; Chen, Dong

    2014-10-01

    Scanning laser radar has been widely used in many military and civil areas. Usually there is relative movement between the target and the radar, so modeling and simulating images of moving targets is an important research topic in the signal processing and system design of scan-imaging laser radar. In order to improve simulation speed while preserving accuracy, a novel fast simulation algorithm is proposed in this paper. Firstly, for a moving target or varying scene, an inequality that determines the intersection relations between pixels and target bins is obtained by deriving the projection of the target's motion trajectory onto the image plane. Then, by using time subdivision and approximate treatments, the potential intersection relations between pixels and target bins are determined. Finally, the number of intersection operations is reduced by testing only the potential relations and finding which of them are real intersections. To test the method's performance, we performed computer simulations of both the newly proposed algorithm and a literature algorithm for six targets. The simulation results show that the two algorithms yield the same imaging result, whereas the former requires only 1% of the intersection operations of the latter, a hundredfold increase in computational efficiency. This acceleration idea can be applied in other, more complex application environments with comparable effect, and is well suited to producing large numbers of laser radar images.

  6. Fast-kick-off monotonically convergent algorithm for searching optimal control fields

    SciTech Connect

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel; Chu, Shih-I

    2011-09-15

    This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in the state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.

  7. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2017-01-20

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition, in order to discard more unimportant samples. In addition, the impact of outliers is considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines trained on the selected approximate convex hull vertices and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.

  8. A Fast Algorithm for 2D DOA Estimation Using an Omnidirectional Sensor Array

    PubMed Central

    Nie, Weike; Xu, Kaijie; Feng, Dazheng; Wu, Chase Qishi; Hou, Aiqin; Yin, Xiaoyan

    2017-01-01

    The traditional 2D MUSIC algorithm fixes the azimuth or the elevation, and searches for the other without considering the directions of sources. A spectrum peak diffusion effect phenomenon is observed and may be utilized to detect the approximate directions of sources. Accordingly, a fast 2D MUSIC algorithm, which performs azimuth and elevation simultaneous searches (henceforth referred to as AESS) based on only three rounds of search, is proposed. Firstly, AESS searches along a circle to detect the approximate source directions. Then, a subsequent search is launched along several straight lines based on these approximate directions. Finally, the 2D Direction of Arrival (DOA) of each source is derived by searching on several small concentric circles. Unlike the 2D MUSIC algorithm, AESS does not fix any azimuth and elevation parameters. Instead, the adjacent point of each search possesses different azimuth and elevation, i.e., azimuth and elevation are simultaneously searched to ensure that the search path is minimized, and hence the total spectral search over the angular field of view is avoided. Simulation results demonstrate the performance characteristics of the proposed AESS over some existing algorithms. PMID:28273851

  9. Fast double-phase retrieval in Fresnel domain using modified Gerchberg-Saxton algorithm for lensless optical security systems.

    PubMed

    Hwang, Hone-Ene; Chang, Hsuan T; Lie, Wen-Nung

    2009-08-03

    A novel fast double-phase retrieval algorithm for lensless optical security systems based on the Fresnel domain is presented in this paper. Two phase-only masks are efficiently determined by using a modified Gerchberg-Saxton algorithm, in which two cascaded Fresnel transforms are replaced by one Fourier transform with compensations to reduce the consumed computations. Simulation results show that the proposed algorithm substantially speeds up the iterative process, while keeping the reconstructed image highly correlated with the original one.
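
    For context, a minimal sketch of the classical Gerchberg-Saxton projection loop between two amplitude constraints using FFTs; the Fresnel-domain, two-mask variant with compensations described above is more elaborate than this.

        import numpy as np

        def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
            # Find a phase such that |FFT(source_amp * exp(i*phase))| ~ target_amp.
            rng = np.random.default_rng(seed)
            field = source_amp * np.exp(2j * np.pi * rng.random(source_amp.shape))
            for _ in range(n_iter):
                far = np.fft.fft2(field)
                far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
                field = np.fft.ifft2(far)
                field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
            return np.angle(field)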

  10. FAST-PT II: an algorithm to calculate convolution integrals of general tensor quantities in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.; Hirata, Christopher M.

    2017-02-01

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  11. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  12. A proposed Fast algorithm to construct the system matrices for a reduced-order groundwater model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2017-04-01

    Past research has demonstrated that a reduced-order model (ROM) can be two-to-three orders of magnitude smaller than the original model and run considerably faster with acceptable error. A standard method to construct the system matrices for a ROM is Proper Orthogonal Decomposition (POD), which projects the system matrices from the full model space onto a subspace whose range spans the full model space but has a much smaller dimension than the full model space. This projection can be prohibitively expensive to compute if it must be done repeatedly, as with a Monte Carlo simulation. We propose a Fast Algorithm to reduce the computational burden of constructing the system matrices for a parameterized, reduced-order groundwater model (i.e. one whose parameters are represented by zones or interpolation functions). The proposed algorithm decomposes the expensive system matrix projection into a set of simple scalar-matrix multiplications. This allows the algorithm to efficiently construct the system matrices of a POD reduced-order model at a significantly reduced computational cost compared with the standard projection-based method. The developed algorithm is applied to three test cases for demonstration purposes. The first test case is a small, two-dimensional, zoned-parameter, finite-difference model; the second test case is a small, two-dimensional, interpolated-parameter, finite-difference model; and the third test case is a realistically-scaled, two-dimensional, zoned-parameter, finite-element model. In each case, the algorithm is able to accurately and efficiently construct the system matrices of the reduced-order model.
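
    The enabling observation is that when the system matrix depends affinely on the zone parameters, A(theta) = sum_k theta_k A_k, projection commutes with the sum: Phi' A(theta) Phi = sum_k theta_k (Phi' A_k Phi). The expensive projections are then done once, and each new parameter realization costs only scalar-matrix multiplications. A sketch under that affine assumption, with illustrative names:

        import numpy as np

        def precompute_blocks(A_list, Phi):
            # One expensive projection per parameter zone, performed once.
            return [Phi.T @ A_k @ Phi for A_k in A_list]

        def assemble_reduced(theta, blocks):
            # Cheap per-realization assembly: scalar-matrix multiplications only.
            Ar = np.zeros_like(blocks[0])
            for t_k, B_k in zip(theta, blocks):
                Ar += t_k * B_k
            return Ar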

  13. Design of a fast echo matching algorithm to reduce crosstalk with Doppler shifts in ultrasonic ranging

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Guo, Rui; Wu, Jun-an

    2017-02-01

    Crosstalk is a main cause of erroneous distance measurements with ultrasonic sensors, and the problem becomes harder to deal with under Doppler effects. This paper focuses on crosstalk reduction under Doppler shifts on small platforms and proposes a fast echo matching algorithm (FEMA) based on chaotic sequences and pulse coding technology, verified by matching practical echoes. Finally, we discuss how to select good mapping methods for the chaotic sequences and algorithm parameters that raise the achievable maximum of the cross-correlation peaks. The results indicate the following: logistic mapping is preferred for generating good chaotic sequences, with high autocorrelation even at very limited length; FEMA not only matches echoes and calculates distance accurately, with errors mostly below 5%, but also incurs nearly the same computational cost for static and kinematic ranging, much lower than that of direct Doppler compensation (DDC) with the same frequency compensation step; and the threshold sensitivity and overall performance of FEMA depend significantly on the achievable maximum of the cross-correlation peaks, with higher peaks preferred, which can serve as a criterion for algorithm parameter optimization under practical conditions.
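
    A sketch of the logistic-map coding sequence the study favors; the seed, the binarization threshold, and the fully chaotic parameter r = 4 are illustrative choices.

        import numpy as np

        def logistic_code(length, x0=0.37, r=4.0):
            # Binary (+/-1) coding sequence from the logistic map
            # x_{n+1} = r * x_n * (1 - x_n), fully chaotic at r = 4.
            x, bits = x0, np.empty(length)
            for i in range(length):
                x = r * x * (1.0 - x)
                bits[i] = 1.0 if x > 0.5 else -1.0
            return bits

        code = logistic_code(64)
        # The zero-lag autocorrelation peak should dominate the side lobes.
        acf = np.correlate(code, code, mode="full")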

  14. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data stream clustering: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance, so a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams; however, density-based clustering in limited time remains a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.

  15. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer, rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation; it then measures the contribution of diffused energy from previous layers based on the transmission of the current level, to produce a cumulative radiance that is reflected from a surface and measured at the aperture of the observer. Then a unique set of asymmetry and backscattering phase function parameter calculations is made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiple scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  16. A New Fast Algorithm to Completely Account for Non-Lambertian Surface Reflection of The Earth

    NASA Technical Reports Server (NTRS)

    Qin, Wen-Han; Herman, Jay R.; Ahmad, Ziauddin; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Surface bidirectional reflectance distribution function (BRDF) influences not only the radiance just above the surface, but also that emerging from the top of the atmosphere (TOA). In this study we propose a new, fast and accurate algorithm, CASBIR (correction for anisotropic surface bidirectional reflection), to account for such influences on radiance measured above the TOA. This new algorithm is based on a 4-stream theory that separates the radiation field into direct and diffuse components in both upwelling and downwelling directions. This is important because the direct component accounts for a substantial portion of incident radiation under a clear sky, and the BRDF effect is strongest in the reflection of the direct radiation reaching the surface. The model is validated by comparison with a full-scale vector radiative transfer model for the atmosphere-surface system. The result demonstrates that CASBIR performs very well (with an overall relative difference of less than one percent) for all solar and viewing zenith and azimuth angles considered, at wavelengths from ultraviolet to near-infrared, over three typical but very different surface types. Applications of this algorithm include both accounting for non-Lambertian surface scattering in the emergent radiation above the TOA and a potential approach for surface BRDF retrieval from satellite-measured radiance.

  17. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: 1- improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; 2- automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions (shadow + vegetation, bare soil + roads, and buildings, respectively); 3- masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters, followed by simple morphological operations to remove noise. Evaluation of the results shows that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  18. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing technologies (RC) such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP), using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing.

  19. Algorithms for searching Fast radio bursts and pulsars in tight binary systems.

    NASA Astrophysics Data System (ADS)

    Zackay, Barak

    2017-01-01

    Fast radio bursts (FRBs) are exciting, recently discovered astrophysical transients whose origins are unknown. Currently, these bursts are believed to come from cosmological distances, allowing us to probe the electron content on cosmological length scales. Even though their precise localization is crucial for determining their origin, radio interferometers have not been extensively employed in searching for them due to computational limitations. I will briefly present the Fast Dispersion Measure Transform (FDMT) algorithm, which reduces the operation count of blind incoherent dedispersion by 2-3 orders of magnitude. In addition, FDMT makes it possible to probe the unexplored domain of sub-microsecond astrophysical pulses. Pulsars in tight binary systems are among the most important astrophysical objects, as they provide our best tests of general relativity in the strong-field regime. I will preview a novel algorithm that enables the detection of pulsars in short binary systems using observation times longer than an orbital period. Current pulsar search programs limit their searches to integration times shorter than a few percent of the orbital period. Until now, searching for pulsars in binary systems using observation times longer than an orbital period was considered impossible, as one has to blindly enumerate all options for the Keplerian parameters, the pulsar rotation period, and the unknown DM. Using the current state-of-the-art pulsar search techniques and all computers on Earth, such an enumeration would take longer than a Hubble time. I will demonstrate that, using the new algorithm, it is possible to conduct such an enumeration on a laptop using real data of the double pulsar PSR J0737-3039. Among the other applications of this algorithm are: 1) searching for all pulsars at all sky positions in gamma-ray observations of the Fermi LAT satellite; 2) blind searching for continuous gravitational wave sources emitted by pulsars with

  20. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.

  1. Program for the analysis of time series. [by means of fast Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
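
    As a modern illustration of product (1), a brief NumPy equivalent of a windowed auto-power-spectrum estimate with a Hann window; the original FORTRAN program additionally handles multiple channels and the other five products.

        import numpy as np

        def auto_power_spectrum(x, fs):
            # Windowed periodogram estimate of the auto power spectral density.
            w = np.hanning(len(x))
            X = np.fft.rfft(x * w)
            psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))  # window-energy normalized
            psd[1:-1] *= 2.0  # fold in the negative frequencies
            freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
            return freqs, psd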

  2. Fast conical surface evaluation via randomized algorithm in the null-screen test

    NASA Astrophysics Data System (ADS)

    Aguirre-Aguirre, D.; Díaz-Uribe, R.; Villalobos-Mendoza, B.

    2017-01-01

    This work presents a method to recover the shape of a surface via randomized algorithms when the null-screen test is used, instead of the integration process that is commonly performed, because the majority of the errors are introduced during the reconstruction (integration) of the surface. Such large surfaces are widely used in the aerospace sector and in industry in general, and testing them is a significant problem. The null-screen method is a low-cost test, and a complete analysis of the surface can be done with it. In this paper, we show simulations for the analysis of fast conic surfaces, demonstrating that the quality and shape of a surface under study can be recovered with a percentage error < 2%.

  3. Fast String Search on Multicore Processors: Mapping fundamental algorithms onto parallel hardware

    SciTech Connect

    Scarpazza, Daniele P.; Villa, Oreste; Petrini, Fabrizio

    2008-04-01

    String searching is one of the fundamental algorithms in computing. It has a host of applications, including search engines, network intrusion detection, virus scanners, spam filters, and DNA analysis, among others. The Cell processor, with its multiple cores, promises substantial speed-ups for string searching. In this article, we show how we mapped string searching efficiently onto the Cell. We present two implementations: • The fast implementation supports a small dictionary size (approximately 100 patterns) and provides a throughput of 40 Gbps, which is 100 times faster than reference implementations on x86 architectures. • The heavy-duty implementation is slower (3.3-4.3 Gbps), but supports dictionaries with tens of thousands of strings.

  4. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-10-01

    The paper presents an improved version of our method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for this purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and that these contributions decrease more quickly than for any other basis. We therefore lossy-compress every KL basis function by Embedded Zerotree Wavelet (EZW) coding, with essentially different loss depending on the function's contribution to the images. The paper presents a new, fast, low-memory algorithm for KL basis construction for compression of correlated image ensembles, which enables our OICKL system to run on common hardware. We also present a procedure for determining the optimal compression loss of the KL basis functions; it uses a modified EZW coder that produces the whole PSNR (bitrate) curve during a single compression pass.
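
    A minimal sketch of KL basis construction for an image ensemble using the snapshot (Gram matrix) trick, which keeps the eigenproblem at the size of the ensemble rather than the pixel count; the paper's own low-memory construction and the EZW coding of the basis functions are not shown.

        import numpy as np

        def kl_basis(images):
            # images: sequence of equally-sized 2D arrays. Returns KL basis
            # vectors (columns), ordered by decreasing average contribution.
            X = np.stack([im.ravel().astype(float) for im in images])
            X -= X.mean(axis=0)
            G = X @ X.T                    # small (n_images x n_images) Gram matrix
            w, V = np.linalg.eigh(G)
            order = np.argsort(w)[::-1]
            scale = np.sqrt(np.maximum(w[order], 1e-12))
            return (X.T @ V[:, order]) / scale  # each column: one KL basis function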

  5. Simulation of Anderson localization in a random fiber using a fast Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Davis, Jeffrey A.; Cottrell, Don M.

    2016-06-01

    Anderson localization has been previously demonstrated both theoretically and experimentally for transmission of a Gaussian beam through long distances in an optical fiber consisting of a random array of smaller fibers, each having either a higher or lower refractive index. However, the computational times were extremely long. We show how to simulate these results using a fast Fresnel diffraction algorithm. In each iteration of this approach, the light passes through a phase mask, undergoes Fresnel diffraction over a small distance, and then passes through the same phase mask. We also show results where we use a binary amplitude mask at the input that selectively illuminates either the higher or the lower index fibers. Additionally, we examine imaging of various sized objects through these fibers. In all cases, our results are consistent with other computational methods and experimental results, but with a much reduced computational time.
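
    A sketch of one split step of the scheme described above: apply the random phase mask, propagate a small distance dz with the paraxial Fresnel transfer function in the Fourier domain, then apply the same mask again. The grid spacing, wavelength, and mask are placeholders.

        import numpy as np

        def fresnel_split_step(field, phase_mask, dz, wavelength, dx):
            # One iteration: mask -> Fresnel diffraction over dz -> same mask.
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
            field = np.exp(1j * phase_mask) * field
            field = np.fft.ifft2(np.fft.fft2(field) * H)
            return np.exp(1j * phase_mask) * field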

  6. [Prenatal risk calculation: comparison between Fast Screen pre I plus software and ViewPoint software. Evaluation of the risk calculation algorithms].

    PubMed

    Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk

    2013-01-01

    The Fetal Medicine Foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) for Down syndrome screening based on free hCGβ, PAPP-A and nuchal translucency. The peculiarity of this algorithm is that it uses the degree of extremeness (DoE) instead of the multiple of the median (MoM). Biologists measuring maternal serum markers on Kryptor™ machines (Thermo Fisher Scientific) use the Fast Screen pre I plus software for prenatal risk calculation; this software integrates the PRC algorithm. Our study evaluates the data of 2,092 patient files, of which 19 show a fetal abnormality. These files were first evaluated with the ViewPoint software, which is based on MoM. The link between DoE and MoM was analyzed and the different calculated risks were compared. The study shows that the Fast Screen pre I plus software gives the same risk results as the ViewPoint software, but yields significantly fewer false positive results.

  7. A multi-threaded mosaicking algorithm for fast image composition of fluorescence bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Bommes, Michael; Stehle, Thomas; Gross, Sebastian; Leonhardt, Steffen; Aach, Til

    2010-02-01

    The treatment of urinary bladder cancer is usually carried out using fluorescence endoscopy. A narrow-band bluish illumination activates a tumor marker, resulting in red fluorescence. Because of the low illumination power, the distance between the endoscope and the bladder wall is kept small during the bladder scan that is carried out before treatment. Thus, only a small field of view (FOV) of the operation field is provided, which impedes navigation and the relocating of multi-focal tumors. Although off-line calculated panorama images can assist surgery planning, the immediate display of successively growing overview images, composed from single video frames in real time during the bladder scan, is well suited to ease navigation and reduce the risk of missing tumors. We therefore developed an image mosaicking algorithm for fluorescence endoscopy. To meet the fast computation requirements, a flexible multi-threaded software architecture based on our RealTimeFrame platform was developed. Different algorithm tasks, such as image feature extraction, matching and stitching, are separated and executed by independent processing threads, so different implementations of individual tasks can be easily evaluated. In an optimization step, we evaluate the trade-off between feature repeatability and total processing time, consider thread synchronization, and achieve a constant workload for each thread. Thus, panoramic images are computed quickly on a standard hardware platform while preserving the full input image resolution (780x576). Displayed on a second clinical monitor, the extended FOV of the image composition promises high potential for surgery assistance.

  8. Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Park, Byung-Kwan; Kim, Sung-Su; Chung, Dae-Su; Lee, Seong-Deok; Kim, Chang-Yeong

    2008-02-01

    This paper describes a new method for fast auto focusing in image capturing devices, achieved using two defocused images. At two prefixed lens positions, two defocused images are taken, and the defocus blur level in each image is estimated using the Discrete Cosine Transform (DCT). These DCT values can be mapped to the distance from the image capturing device to the main object, so we can build a distance-versus-blur-level classifier. With this classifier, the relation between the two defocus blur levels gives the device the best-focus lens step. Ordinary auto focusing methods such as Depth from Focus (DFF) need several defocused images and compare the high-frequency components in each image; also known as hill climbing, this process generally requires about half the number of images of all focus lens steps. Since the new method requires only two defocused images, it saves considerable focusing time and reduces shutter lag. Compared to existing Depth from Defocus (DFD) methods, which also use two defocused images, the new algorithm is simple and as accurate as the DFF method. Because of this simplicity and accuracy, the method can also be applied to fast 3D depth map construction.
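
    A sketch of a DCT-based blur-level measure of the kind the method relies on: the share of spectral energy outside the low-frequency corner drops as defocus grows. The cut-off index is an arbitrary illustrative choice, not the paper's trained classifier.

        import numpy as np
        from scipy.fftpack import dct

        def dct2(a):
            return dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

        def blur_level(image, cutoff=8):
            # Ratio of high-frequency DCT energy; smaller means more defocus.
            c = dct2(image.astype(float))
            energy = (c ** 2).sum()
            low = (c[:cutoff, :cutoff] ** 2).sum()
            return (energy - low) / max(energy, 1e-12)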

  9. Hybrid-dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual-Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  10. A fast image encryption algorithm based on only blocks in cipher text

    NASA Astrophysics Data System (ADS)

    Wang, Xing-Yuan; Wang, Qian

    2014-03-01

    In this paper, a fast image encryption algorithm is proposed in which shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new position of each plain-image pixel in the cipher image, namely the row and column of the destination block and the position within that block. After each pixel is encrypted, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after each row of the plain image is encrypted, the initial condition is also changed by the skew tent map. Finally, it is shown that this algorithm is fast, has a large key space, and withstands differential attacks, statistical analysis, and known-plaintext and chosen-plaintext attacks.

  11. Fast Detection Anti-Collision Algorithm for RFID System Implemented On-Chip

    NASA Astrophysics Data System (ADS)

    Sampe, Jahariah; Othman, Masuri

    This study presents a proposed Fast Detection Anti-Collision Algorithm (FDACA) for Radio Frequency Identification (RFID) systems. The proposed FDACA is implemented on-chip using Application Specific Integrated Circuit (ASIC) technology, and the algorithm is based on the deterministic anti-collision technique. The FDACA is novel in achieving faster identification by reducing the number of iterations during the identification process. It also reads the identification (ID) bits at once, regardless of their length, and does not require the tags to remember instructions from the reader during the communication process; the tags are treated as address-carrying devices only. As a result, simple, small, low-cost and memoryless tags can be produced. The proposed system is designed using Verilog HDL, simulated using ModelSim XE II, and synthesized using Xilinx Synthesis Technology (XST). The system is implemented in hardware on a Field Programmable Gate Array (FPGA) board for real-time verification. The verification results show that the FDACA system identifies tags without error up to an operating frequency of 180 MHz. Finally, the FDACA system is implemented on-chip using a 0.18 μm library and Synopsys compiler tools. The resynthesis results show that the identification rate of the proposed FDACA system is 333 mega-tags per second, with a power requirement of 3.451 mW.

  12. A fast pyramid matching algorithm for infrared object detection based on region covariance descriptor

    NASA Astrophysics Data System (ADS)

    Yin, Li-hua; Wang, Xiao; Xie, Jiang-rong

    2016-11-01

    Infrared object detection involves two essential phases: feature selection and the matching strategy. Good features should be discriminative, robust and easy to compute; the matching strategy determines the accuracy and efficiency of matching. In the first stage, instead of joint distributions of image statistics, we use the region covariance descriptor, computing region covariances with integral images. The idea presented here is more general than image sums or histograms, which have already been published. In the second, feature matching stage, we describe a new, fast pyramid matching algorithm under the distance metric, which performs far more rapidly than a brute-force search. We represent an object with five covariance matrices of image features computed inside the object region. Instead of brute-force matching, we construct an image pyramid, decomposing the source and object images into several levels of different resolutions. After the coarse match is complete, a fine match is performed. As shown, the region covariance descriptor is superior to other methods, the pyramid matching algorithm is extremely fast and accurate, and large rotations and illumination changes are absorbed by the covariance matrix.
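
    A sketch of the region covariance descriptor itself, without the integral-image acceleration or the pyramid matching: stack a per-pixel feature vector over the region and take its covariance. The particular five features used here are an illustrative choice.

        import numpy as np

        def region_covariance(gray, y0, y1, x0, x1):
            # Covariance of per-pixel features [x, y, I, |Ix|, |Iy|] over a
            # rectangular region; the 5x5 result is independent of region size.
            Iy, Ix = np.gradient(gray.astype(float))
            ys, xs = np.mgrid[y0:y1, x0:x1]
            F = np.stack([xs.ravel().astype(float),
                          ys.ravel().astype(float),
                          gray[y0:y1, x0:x1].ravel().astype(float),
                          np.abs(Ix[y0:y1, x0:x1]).ravel(),
                          np.abs(Iy[y0:y1, x0:x1]).ravel()])
            return np.cov(F)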

  13. Load Balancing and Data Locality in the Parallelization of the Fast Multipole Algorithm

    NASA Astrophysics Data System (ADS)

    Banicescu, Ioana

    Scientific problems are often irregular, large and computationally intensive. Efficient parallel implementations of algorithms that are employed in finding solutions to these problems play an important role in the development of science. This thesis studies the parallelization of a certain class of irregular scientific problems, the N -body problem, using a classical hierarchical algorithm: the Fast Multipole Algorithm (FMA). Hierarchical N-body algorithms in general, and the FMA in particular, are amenable to parallel execution. However, performance gains are difficult to obtain, due to load imbalances that are primarily caused by the irregular distribution of bodies and of computation domains. Understanding application characteristics is essential for obtaining high performance implementations on parallel machines. After surveying the available parallelism in the FMA, we address the problem of exploiting this parallelism with partitioning and scheduling techniques that optimally map it onto a parallel machine, the KSR1. The KSR1 is a parallel shared address-space machine with a hierarchical cache-only architecture. The tension between maintaining data locality and balancing processor loads requires a scheduling scheme that combines static techniques (that exploit data locality) with dynamic techniques (that improve load balancing). An effective combined scheduling scheme that balances processor loads and maintains locality, by exploiting self-similarity properties of fractals, is Fractiling. Fractiling is based on a probabilistic analysis. It thus accommodates load imbalances caused by predictable events (such as irregular data) as well as unpredictable events (such as data access latency). Fractiling adapts to algorithmic and system induced load imbalances while maximizing data locality. We used Fractiling to schedule a parallel FMA on the KSR1. Our parallel 2-d and 3-d FMA implementations were run using uniform and nonuniform data set distributions under a

  14. A fast algorithm based on the domain decomposition method for scattering analysis of electrically large objects

    NASA Astrophysics Data System (ADS)

    Yin, Lei; Hong, Wei

    2002-01-01

    By combining the finite difference (FD) method with the domain decomposition method (DDM), a fast and rigorous algorithm is presented in this paper for the scattering analysis of extremely large objects. Unlike conventional methods, such as the method of moments (MOM) and the FD method, the new algorithm decomposes the original large domain into small subdomains and chooses the most efficient method to solve the electromagnetic (EM) equations on each subdomain individually, substantially reducing the computational complexity and scale. The iterative procedure of the algorithm and the implementation of virtual boundary conditions are discussed in detail. During scattering analysis of an electrically large cylinder, the conformal band computational domain along the circumference of the cylinder is decomposed into sections, which results in a series of band matrices with very narrow bands. Compared with the traditional FD method, it decreases the consumption of computer memory and CPU time from O(N^2) to O(N/m) and O(N), respectively, where m is the number of subdomains and N is the number of nodes or unknowns. Furthermore, this method can easily be applied to the analysis of arbitrarily shaped cylinders, because the subdomains can be divided in any possible form. On the other hand, increasing the number of subdomains hardly increases the computing time, which makes it possible to analyze the EM scattering problems of extremely large cylinders on a PC alone. The EM scattering of two-dimensional cylinders with a maximum perimeter of 100,000 wavelengths is analyzed. Moreover, this method is very suitable for parallel computation, which can further improve its computational efficiency.

  15. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    Deterministic methods based on the ray tracing technique are known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for tomographic acquisition or, even more, for X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs; simulated radiographs can typically be obtained in split seconds on a simple personal computer.
    Program summary
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on the ray tracing technique.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video

  16. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  17. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    Fast Fourier transforms (FFT) is a basic approach to remote sensing image processing. With the improvement of remote sensing image capture, featuring hyperspectral, high spatial resolution and high temporal resolution data, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hot spot of current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. CUFFT is the FFT function library based on the GPU, while FFTW is an FFT algorithm library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT function library. However, both share a common problem: once the available memory is less than the size of the image, an out-of-memory error or memory overflow occurs when using either method to compute the image FFT. To address this problem, a GPU and partitioning technology based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory and memory overflow problems are solved. Moreover, this method is validated by experiments on CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the image processing results and speeds up the computation, saving time and achieving sound results.
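
    The partitioning rests on the separability of the 2D DFT: transform all rows, then all columns, so the image can be processed in chunks with only one chunk transformed at a time. A NumPy sketch of the row-column decomposition follows; an HRFFT-style implementation would stream such chunks between disk, host, and GPU memory via CUFFT, which is not shown here.

        import numpy as np

        def fft2_chunked(image, chunk=1024):
            # 2D FFT via row FFTs then column FFTs, one chunk at a time.
            out = image.astype(np.complex128)
            for r in range(0, out.shape[0], chunk):   # row pass
                out[r:r + chunk, :] = np.fft.fft(out[r:r + chunk, :], axis=1)
            for c in range(0, out.shape[1], chunk):   # column pass
                out[:, c:c + chunk] = np.fft.fft(out[:, c:c + chunk], axis=0)
            return out  # equals np.fft.fft2(image)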

  18. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast, non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and afterwards using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.

  19. A Fast parallel tridiagonal algorithm for a class of CFD applications

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

    The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents a variation of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.
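
    As a point of reference, the conventional sequential solver that the PDD family is measured against is the Thomas algorithm; a minimal sketch of that baseline (not of the PDD or reduced PDD algorithm itself) follows.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal, d = RHS.
            a[0] and c[-1] are unused. O(n) work, inherently sequential."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                         # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):                # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x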

  20. A fast video clip retrieval algorithm based on VA-file

    NASA Astrophysics Data System (ADS)

    Liu, Fangjie; Dong, DaoGuo; Miao, Xiaoping; Xue, XiangYang

    2003-12-01

    Video clip retrieval is a significant research topic in content-based multimedia retrieval. Generally, video clip retrieval is carried out as follows: (1) segment a video clip into shots; (2) extract a key frame from each shot as its representative; (3) denote every key frame as a feature vector, so that a video clip can be denoted as a sequence of feature vectors; (4) retrieve matching clips by computing the similarity between the feature vector sequence of a query clip and that of any clip in the database. For fast video clip retrieval an index structure is indispensable. According to our literature survey, the S2-tree [17] is the only index structure that has been applied to support video clip retrieval; it combines the characteristics of the X-tree and the Suffix-tree and converts vector-sequence retrieval into string matching. However, the S2-tree is not applicable when the feature vectors have more than about 20 dimensions, because the X-tree itself cannot sustain similarity queries effectively beyond that dimensionality. Furthermore, it cannot support flexible similarity definitions between two vector sequences. The VA-file represents vectors approximately by compressing the original data, and it maintains the original order of the vectors in a sequence, a very valuable merit for vector-sequence matching. In this paper, a new video clip similarity model as well as a video clip retrieval algorithm based on the VA-file are proposed. The experiments show that our algorithm dramatically shortens the retrieval time compared to sequential scanning without an index structure.
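
    The VA-file idea the paper builds on is simple to sketch: each feature vector is approximated by the cell of a coarse uniform grid it falls in, a few bits per dimension, while the row order of the sequence is preserved. The snippet below is an illustrative uniform-grid variant, not the paper's exact encoding.

        import numpy as np

        def va_signature(vectors, bits=4):
            """Vector approximation: quantize each dimension to 2**bits grid cells.
            Row order is preserved, so a clip's signature is still a sequence."""
            levels = 2 ** bits
            mins = vectors.min(axis=0)
            spans = vectors.max(axis=0) - mins + 1e-12
            cells = np.floor((vectors - mins) / spans * levels).astype(int)
            return np.clip(cells, 0, levels - 1).astype(np.uint8)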

  1. An algorithm for computing the 2D structure of fast rotating stars

    SciTech Connect

    Rieutord, Michel; Espinosa Lara, Francisco; Putigny, Bertrand

    2016-08-01

    Stars may be understood as self-gravitating masses of a compressible fluid whose radiative cooling is compensated by nuclear reactions or gravitational contraction. The understanding of their time evolution requires the use of detailed models that account for complex microphysics, including opacities, the equation of state and nuclear reactions. Present stellar models are essentially one-dimensional, namely spherically symmetric. However, the interpretation of recent data, such as the surface abundances of elements or the distribution of internal rotation, has reached the limits of validity of one-dimensional models because of their very simplified representation of large-scale fluid flows. In this article, we describe the ESTER code, the first code able to compute in a consistent way a two-dimensional model of a fast rotating star including its large-scale flows. Compared to classical 1D stellar evolution codes, many numerical innovations have been introduced to deal with this complex problem. First, a spectral discretization based on spherical harmonics and Chebyshev polynomials is used to represent the 2D axisymmetric fields. A nonlinear mapping adapts the coordinates to the spheroidal star and allows a smooth spectral representation of the fields. The properties of Picard and Newton iterations for solving the nonlinear partial differential equations of the problem are discussed. It turns out that the Picard scheme is efficient for computing simple polytropic stars, but the Newton algorithm is unsurpassed when stellar models include complex microphysics. Finally, we discuss the numerical efficiency of our solver of Newton iterations. This linear solver combines the iterative Conjugate Gradient Squared algorithm with an LU factorization serving as a preconditioner of the Jacobian matrix.
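
    To make the Picard/Newton contrast concrete, the toy sketch below applies both schemes to a generic scalar equation; it illustrates the iteration types only and has nothing to do with the ESTER equations themselves.

        import numpy as np

        def picard(g, x0, tol=1e-12, maxit=200):
            """Fixed-point (Picard) iteration x <- g(x); converges linearly when |g'| < 1."""
            x = x0
            for _ in range(maxit):
                x_new = g(x)
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        def newton(f, df, x0, tol=1e-12, maxit=50):
            """Newton iteration x <- x - f(x)/f'(x); converges quadratically near the root."""
            x = x0
            for _ in range(maxit):
                step = f(x) / df(x)
                x -= step
                if abs(step) < tol:
                    return x
            return x

        # solve x = cos(x), i.e. f(x) = x - cos(x) = 0, with both schemes
        print(picard(np.cos, 1.0))
        print(newton(lambda x: x - np.cos(x), lambda x: 1.0 + np.sin(x), 1.0))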

  2. Robust digital image-in-image watermarking algorithm using the fast Hadamard transform

    NASA Astrophysics Data System (ADS)

    Ho, Anthony T. S.; Shen, Jun; Tan, Soon H.

    2003-01-01

    In this paper, we propose a robust image-in-image watermarking algorithm based on the fast Hadamard transform (FHT) for the copyright protection of digital images. Most current research uses a normally distributed random vector as the watermark, which can only be detected by cross-correlating the received coefficients with the watermark generated from a secret key and then comparing the result against an experimentally determined threshold. In contrast, the FHT image-in-image method involves a "blind" watermarking process that retrieves the watermark without the need for the original image. In the proposed approach, a number of pseudorandomly selected 8×8 sub-blocks of the original image and a watermark image are decomposed into Hadamard coefficients. To increase the invisibility of the watermark, a visual model based on original-image characteristics such as edges and textures is incorporated to determine the watermarking strength factor. All the AC Hadamard coefficients of the watermark image are scaled by the watermarking strength factor and inserted into several middle- and high-frequency AC components of the Hadamard coefficients from the sub-blocks of the original image. To further increase the reliability of the watermarking against common geometric distortions, such as rotation and scaling, a post-processing technique is proposed. Understanding the type of distortion provides a means to apply a reversal of the attack on the watermarked image, enabling restoration of the synchronization of the embedding positions. The performance of the proposed algorithm is evaluated using Stirmark. The experiment uses a container image of size 512×512×8 bits and a watermark image of size 64×64×8 bits. It survives about 60% of all Stirmark attacks. The simplicity of the Hadamard transform offers a significant advantage in processing time and ease of hardware implementation over the commonly used DCT and DWT techniques.
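
    The fast Hadamard transform that gives the scheme its speed needs only additions and subtractions; a minimal in-place sketch of the standard unnormalized butterfly (not the authors' implementation) is:

        import numpy as np

        def fwht(x):
            """Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of two."""
            x = np.asarray(x, dtype=float).copy()
            h = 1
            while h < len(x):
                for i in range(0, len(x), 2 * h):
                    for j in range(i, i + h):             # butterfly: sums and differences
                        a, b = x[j], x[j + h]
                        x[j], x[j + h] = a + b, a - b
                h *= 2
            return x

        # applying it twice recovers the input up to the factor len(x)
        v = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
        assert np.allclose(fwht(fwht(v)) / len(v), v)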

  3. A Fast, Locally Adaptive, Interactive Retrieval Algorithm for the Analysis of DIAL Measurements

    NASA Astrophysics Data System (ADS)

    Samarov, D. V.; Rogers, R.; Hair, J. W.; Douglass, K. O.; Plusquellic, D.

    2010-12-01

    Differential absorption light detection and ranging (DIAL) is a laser-based tool used for remote, range-resolved measurement of particular gases in the atmosphere, such as carbon dioxide and methane. In many instances it is of interest to study how these gases are distributed over a region such as a landfill, factory, or farm. While a single DIAL measurement only tells us about the distribution of a gas along a single path, a sequence of consecutive measurements provides information on how that gas is distributed over a region, making DIAL a natural choice for such studies. DIAL measurements present a number of interesting challenges: first, in order to convert the raw data to concentration it is necessary to estimate the derivative along the path of the measurement. Second, as the distribution of gases across a region can be highly heterogeneous, it is important that the spatial nature of the measurements be taken into account. Finally, since the set of collected measurements is commonly quite large, the method must be computationally efficient. Existing work based on Local Polynomial Regression (LPR) addresses the first two issues, but the issue of computational speed remains an open problem, and allowing user input into the algorithm is another desirable property. In this talk we present a novel method based on LPR which utilizes a variant of the RODEO algorithm to provide a fast, locally adaptive and interactive approach to the analysis of DIAL measurements. The methodology is motivated by and applied to several simulated examples and a study out of NASA Langley Research Center (LaRC) on the estimation of aerosol extinction in the atmosphere. A comparison study of our method against several other algorithms is also presented.
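
    The derivative-along-the-path step is where LPR enters: a local linear fit returns both the smoothed value and its slope at each point. The sketch below is a generic local-linear estimator, not the RODEO-based method of the talk.

        import numpy as np

        def local_linear(x, y, x0, h):
            """Local linear regression at x0 with Gaussian kernel bandwidth h.
            Returns (estimate of f(x0), estimate of f'(x0))."""
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)
            sw = np.sqrt(w)                               # weighted least squares
            X = np.stack([np.ones_like(x), x - x0], axis=1)
            beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            return beta[0], beta[1]                       # intercept = value, slope = derivative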

  4. HaploGrep: a fast and reliable algorithm for automatic classification of mitochondrial DNA haplogroups.

    PubMed

    Kloss-Brandstätter, Anita; Pacher, Dominic; Schönherr, Sebastian; Weissensteiner, Hansi; Binna, Robert; Specht, Günther; Kronenberg, Florian

    2011-01-01

    An ongoing source of controversy in mitochondrial DNA (mtDNA) research is the detection of numerous errors in mtDNA profiles that have led to erroneous conclusions and false disease associations. Most of these controversies could be avoided if the samples' haplogroup status were taken into consideration. Knowing the mtDNA haplogroup affiliation is a critical prerequisite for studying mechanisms of human evolution and discovering genes involved in complex diseases, and validating phylogenetic consistency using haplogroup classification is an important step in quality control. However, despite the availability of Phylotree, a regularly updated classification tree of global mtDNA variation, the process of haplogroup classification is still time-consuming and error-prone, as researchers have to manually compare the polymorphisms found in a population sample to those summarized in Phylotree, polymorphism by polymorphism, sample by sample. We present HaploGrep, a fast, reliable and straightforward algorithm implemented in a Web application to determine the haplogroup affiliation of thousands of mtDNA profiles genotyped for the entire mtDNA or any part of it. HaploGrep uses the latest version of Phylotree and offers an all-in-one solution for quality assessment of mtDNA profiles in clinical genetics, population genetics and forensics. HaploGrep can be accessed freely at http://haplogrep.uibk.ac.at.

  5. EEG-based classification of fast and slow hand movements using Wavelet-CSP algorithm.

    PubMed

    Robinson, Neethu; Vinod, A P; Ang, Kai Keng; Tee, Keng Peng; Guan, Cuntai T

    2013-08-01

    A brain-computer interface (BCI) acquires brain signals, extracts informative features, and translates these features into commands to control an external device. This paper investigates the application of a noninvasive electroencephalography (EEG)-based BCI to identify brain signal features in regard to actual hand movement speed, providing a more refined control for a BCI system in terms of movement parameters. An experiment was performed to collect EEG data from subjects while they performed right-hand movement at two different speeds, namely fast and slow, in four different directions. The informative features were obtained using the Wavelet-Common Spatial Pattern (W-CSP) algorithm, which provides high temporal-spatial-spectral resolution. The applicability of these features to classify the two speeds and to reconstruct the speed profile was studied. Classifying speed across seven subjects yielded a mean accuracy of 83.71% using a Fisher Linear Discriminant (FLD) classifier. The speed components were reconstructed using multiple linear regression, and a significant correlation of 0.52 (Pearson's linear correlation coefficient) was obtained between recorded and reconstructed velocities on average. The spatial patterns of the W-CSP features showed activations in parietal and motor areas of the brain. The results achieved promise to provide a more refined control in BCI by including control of movement speed.
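
    The core CSP step inside W-CSP is standard: jointly diagonalize the two classes' average covariance matrices and keep the filters at the two ends of the eigenvalue spectrum. A minimal sketch, assuming band-filtered trials of shape (channels, samples); the wavelet stage of W-CSP is not shown.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, m=3):
            """Return 2*m CSP spatial filters (rows) separating two trial classes."""
            def mean_cov(trials):
                return sum((x @ x.T) / np.trace(x @ x.T) for x in trials) / len(trials)
            Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(Ca, Ca + Cb)                # generalized eigenproblem
            picks = np.r_[np.arange(m), np.arange(len(vals) - m, len(vals))]
            return vecs[:, picks].T                       # extreme filters are most discriminative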

  6. A fast forward algorithm for real-time geosteering of azimuthal gamma-ray logging.

    PubMed

    Qin, Zhen; Pan, Heping; Wang, Zhonghao; Wang, Bintao; Huang, Ke; Liu, Shaohua; Li, Gang; Amara Konaté, Ahmed; Fang, Sinan

    2017-05-01

    Geosteering is an effective method to increase the reservoir drilling rate in horizontal wells. Based on the features of an azimuthal gamma-ray logging tool and the spatial position of the strata, a fast forward calculation method for azimuthal gamma-ray logging is derived using the natural gamma-ray distribution equation in the formation. The responses of azimuthal gamma-ray logging while drilling in layered formation models of different thickness and position are simulated and summarized using this method. The results indicate that the method is fast to compute and that, when the tool nears a boundary, it can identify the boundary and determine the distance from the logging tool to the boundary in time. Additionally, a simple method based on offset-well information is proposed to determine the formation parameters of the algorithm in the field. Therefore, the forward method can be used for geosteering in the field. A field example validates that the forward method can determine the distance from the azimuthal gamma-ray logging tool to the boundary for geosteering in real time.

  7. Fast algorithm for the reconciliation of gene trees and LGT networks.

    PubMed

    Scornavacca, Celine; Mayol, Joan Carles Pons; Cardona, Gabriel

    2017-04-07

    In phylogenomics, reconciliations aim at explaining the discrepancies between the evolutionary histories of genes and species. Several reconciliation models are available when the evolution of the species of interest is modelled via phylogenetic trees; the most commonly used are the DL model, accounting for duplications and losses in gene evolution and yielding polynomially-solvable problems, and the DTL model, which also accounts for gene transfers and implies NP-hard problems. However, when dealing with non-tree-like evolutionary events such as hybridisations, phylogenetic networks, and not phylogenetic trees, should be used to model species evolution. Reconciliation models involving phylogenetic networks are still in their early days. In this paper, we propose a new reconciliation model in which the evolution of species is modelled by a special kind of phylogenetic network, the LGT network. Our model considers duplications, losses and transfers of genes, but restricts transfers to happen through some specific arcs of the network, called secondary arcs. Moreover, we provide a polynomial algorithm to compute the most parsimonious reconciliation between a gene tree and an LGT network under this model. Our method, when combined with quartet decomposition methods to detect putative "highways" of transfers, makes it possible to refine their analyses by examining the two possible directions of a highway and even considering combinations of highways.

  8. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
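
    The fast DHT each parallel channel relies on can be obtained directly from an FFT through the identity H(k) = Re F(k) - Im F(k); a one-function sketch:

        import numpy as np

        def dht(x):
            """Discrete Hartley transform via the FFT identity H = Re(F) - Im(F)."""
            F = np.fft.fft(x)
            return F.real - F.imag

        # the DHT is an involution up to scaling: applying it twice gives len(x) * x
        v = np.random.rand(8)
        assert np.allclose(dht(dht(v)) / len(v), v)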

  9. A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie

    2017-02-01

    One of the key problems in social network analysis is influence maximization: given a complex network and a positive integer k, find the k nodes that trigger the largest expected number of the remaining nodes. Existing mature algorithms are mainly divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence-spread process directly, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. In contrast, topology-based algorithms rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence-spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds them, while its running time is millions of times shorter. Our experimental results show that our algorithm has good and stable performance under the IC and LT models.
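
    The abstract does not define the LIR index itself, so the snippet below is a hypothetical illustration of a topology-based top-k selection in the same spirit, picking nodes whose degree is not exceeded by any neighbor; it should not be read as the authors' LIR algorithm.

        def topk_degree_local_maxima(adj, k):
            """Hypothetical topology-based seed selection.
            adj: dict mapping node -> set of neighbor nodes."""
            deg = {v: len(ns) for v, ns in adj.items()}
            # keep nodes that are degree local maxima in their neighborhood
            seeds = [v for v, ns in adj.items() if all(deg[u] <= deg[v] for u in ns)]
            return sorted(seeds, key=lambda v: -deg[v])[:k]

        graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
        print(topk_degree_local_maxima(graph, 2))   # -> [3]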

  10. A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks.

    PubMed

    Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie

    2017-02-27

    One of the key problems in social network analysis is influence maximization: given a complex network and a positive integer k, find the k nodes that trigger the largest expected number of the remaining nodes. Existing mature algorithms are mainly divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence-spread process directly, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. In contrast, topology-based algorithms rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence-spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds them, while its running time is millions of times shorter. Our experimental results show that our algorithm has good and stable performance under the IC and LT models.

  11. A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks

    PubMed Central

    Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie

    2017-01-01

    One of the key problems in social network analysis is influence maximization: given a complex network and a positive integer k, find the k nodes that trigger the largest expected number of the remaining nodes. Existing mature algorithms are mainly divided into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence-spread process directly, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. In contrast, topology-based algorithms rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence-spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds them, while its running time is millions of times shorter. Our experimental results show that our algorithm has good and stable performance under the IC and LT models. PMID:28240238

  12. Multilevel fast multipole algorithm for elastic wave scattering by large three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Tong, Mei Song; Chew, Weng Cho

    2009-02-01

    The multilevel fast multipole algorithm (MLFMA) is developed for solving elastic wave scattering by large three-dimensional (3D) objects. Since the governing set of boundary integral equations (BIE) for the problem includes both compressional and shear waves with different wave numbers in one medium, a double-tree structure for each medium is used in the MLFMA implementation. When both the object and the surrounding medium are elastic, four wave numbers in total and thus four FMA trees are involved. We employ the Nyström method to discretize the BIE and generate the corresponding matrix equation. The MLFMA is used to accelerate the solution process by reducing the complexity of the matrix-vector product from O(N^2) to O(N log N) in iterative solvers. The multiple-tree structure differs from the single-tree frame in electromagnetics (EM) and acoustics, and it greatly complicates the MLFMA implementation due to the different definitions of well-separated groups in the different FMA trees. Our Nyström method makes use of the cancellation of leading terms in the series expansion of the integral kernels to handle hypersingularities in the near terms. This feature is kept in the MLFMA by seeking the common near patches in different FMA trees and treating the involved near terms synergistically. Due to the high cost of the multiple-tree structure, our numerical examples show that we can only solve elastic wave scattering problems with 0.3-0.4 million unknowns on our Dell Precision 690 workstation using one core.

  13. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory, which imitates human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that produces a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is implemented, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial- and transform-domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons on images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method with better color restoration and preservation of image details.
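
    A minimal sketch of the illumination step as the abstract describes it: take the per-pixel maximum over the color channels, apply a gray-scale closing, then smooth. For brevity the cross-bilateral filter is replaced here by a Gaussian filter, which is our simplification, not the authors' choice.

        import numpy as np
        from scipy.ndimage import grey_closing, gaussian_filter

        def estimate_illumination(rgb, size=15, sigma=5.0):
            """Estimate scene illumination from the Max Intensity Channel (MIC)."""
            mic = rgb.max(axis=2)                         # max over R, G, B at each pixel
            closed = grey_closing(mic, size=(size, size)) # suppress small dark details
            return gaussian_filter(closed, sigma)         # stand-in for cross-bilateral filtering

        # the reflection component then follows from the imaging model I = R * L
        def reflectance(rgb, illum, eps=1e-6):
            return rgb / (illum[..., None] + eps)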

  14. On the applicability of genetic algorithms to fast solar spectropolarimetric inversions for vector magnetography

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.

    The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly energetic solar flares and coronal mass ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy; this excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop such a method capable of rapidly producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare-productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GA) that has the potential for achieving such high-speed analysis.
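
    The abstract leaves the GA details to the dissertation body, so the snippet below is only a toy real-coded genetic algorithm showing the generic ingredients (selection, crossover, mutation) such an inversion would build on; the operators and parameters are our own illustrative choices.

        import numpy as np

        def ga_minimize(loss, dim, pop=60, gens=200, sigma=0.1, seed=0):
            """Toy real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
            rng = np.random.default_rng(seed)
            P = rng.uniform(-1.0, 1.0, (pop, dim))
            for _ in range(gens):
                f = np.apply_along_axis(loss, 1, P)
                pairs = rng.integers(0, pop, (pop, 2))            # binary tournaments
                parents = P[np.where(f[pairs[:, 0]] < f[pairs[:, 1]], pairs[:, 0], pairs[:, 1])]
                mates = parents[rng.permutation(pop)]
                alpha = rng.random((pop, 1))                      # blend crossover
                P = alpha * parents + (1 - alpha) * mates + rng.normal(0.0, sigma, (pop, dim))
            f = np.apply_along_axis(loss, 1, P)
            return P[np.argmin(f)]

        best = ga_minimize(lambda x: np.sum((x - 0.5) ** 2), dim=4)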

  15. Automatic mapping of visual cortex receptive fields: a fast and precise algorithm.

    PubMed

    Fiorani, Mario; Azzi, João C B; Soares, Juliana G M; Gattass, Ricardo

    2014-01-15

    An important issue for neurophysiological studies of the visual system is the definition of the region of the visual field that can modify a neuron's activity (i.e., the neuron's receptive field, or RF). Usually a trade-off exists between precision and the time required to map an RF. Manual (qualitative) methods are fast but impose a variable degree of imprecision, while quantitative methods are more precise but usually require more time. We describe a rapid quantitative method for mapping visual RFs that is derived from computerized tomography and named back-projection. This method finds the intersection of responsive regions of the visual field based on spike density functions that are generated over time in response to long bars moving in different directions. An algorithm corrects the response profiles for latencies and allows for the conversion of the time domain into a 2D-space domain. The final product is an RF map that shows the distribution of the neuronal activity in visual-spatial coordinates. In addition to mapping the RF, this method also provides functional properties, such as latency, orientation and direction preference indexes. This method exhibits the following beneficial properties: (a) speed; (b) ease of implementation; (c) precise RF localization; (d) sensitivity (this method can map RFs based on few responses); (e) reliability (this method provides consistent information about RF shapes and sizes, which will allow for comparative studies); (f) comprehensiveness (this method can scan for RFs over an extensive area of the visual field); (g) informativeness (it provides functional quantitative data about the RF); and (h) usefulness (this method can map RFs in regions without direct retinal inputs, such as the cortical representations of the optic disc and of retinal lesions, which should allow for studies of functional connectivity, reorganization and neural plasticity). Furthermore, our method allows for precise mapping of RFs in a 30° by 30

  16. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    PubMed

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the k-th step, compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single-determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
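
    The baseline being improved on is the Sherman-Morrison rank-1 update of the inverse Slater matrix; a minimal sketch of that classical baseline (not of the paper's low-rank scheme) follows.

        import numpy as np

        def sherman_morrison(Ainv, u, v):
            """Inverse of (A + u v^T) from Ainv in O(N^2), by the Sherman-Morrison formula."""
            Au = Ainv @ u
            vA = v @ Ainv
            return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

        # replacing row r of A by w is the rank-1 update u = e_r, v = w - A[r]
        rng = np.random.default_rng(1)
        A = rng.random((5, 5)); Ainv = np.linalg.inv(A)
        r, w = 2, rng.random(5)
        Ainv_new = sherman_morrison(Ainv, np.eye(5)[r], w - A[r])
        A[r] = w
        assert np.allclose(Ainv_new, np.linalg.inv(A))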

  17. A Fast Superpixel Segmentation Algorithm for PolSAR Images Based on Edge Refinement and Revised Wishart Distance

    PubMed Central

    Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and of many small or slim regions. To solve these problems, we utilize a fast revised Wishart distance instead of the Euclidean distance in the local relabeling of unstable pixels, and in the initialization step we initialize the unstable pixels as all the pixels instead of only the initial grid edge pixels. Then, postprocessing with a dissimilarity measure is employed to remove the small isolated regions that are generated as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from the Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, with about nine times higher computational efficiency, as well as fine boundary adherence and strong point-target preservation, compared with three state-of-the-art methods. PMID:27754385
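
    For PolSAR pixels, distances are computed between Hermitian coherency (or covariance) matrices rather than feature vectors; a common Wishart distance-like measure is d(Z, Sigma) = ln|Sigma| + Tr(Sigma^{-1} Z). The sketch below implements that standard measure; the paper's "fast revised" variant may differ in its exact form.

        import numpy as np

        def wishart_distance(Z, Sigma):
            """Wishart distance-like measure between a pixel matrix Z and a class mean Sigma
            (both Hermitian positive definite): ln|Sigma| + Tr(Sigma^{-1} Z)."""
            _, logdet = np.linalg.slogdet(Sigma)
            return logdet + np.trace(np.linalg.solve(Sigma, Z)).real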

  18. Generation of SNP datasets for orangutan population genomics using improved reduced-representation sequencing and direct comparisons of SNP calling algorithms

    PubMed Central

    2014-01-01

    Background: High-throughput sequencing has opened up exciting possibilities in population and conservation genetics by enabling the assessment of genetic variation at genome-wide scales. One approach to reduce genome complexity, i.e. investigating only parts of the genome, is reduced-representation library (RRL) sequencing. Like similar approaches, RRL sequencing reduces ascertainment bias due to simultaneous discovery and genotyping of single-nucleotide polymorphisms (SNPs) and does not require reference genomes. Yet, generating such datasets remains challenging due to laboratory and bioinformatic issues. In the laboratory, current protocols require improvements with regard to sequencing homologous fragments to reduce the number of missing genotypes. From the bioinformatic perspective, the reliance of most studies on a single SNP caller disregards the possibility that different algorithms may produce disparate SNP datasets. Results: We present an improved RRL (iRRL) protocol that maximizes the generation of homologous DNA sequences, thus achieving improved genotyping-by-sequencing efficiency. Our modifications facilitate generation of single-sample libraries, enabling individual genotype assignments instead of pooled-sample analysis. We sequenced ~1% of the orangutan genome with 41-fold median coverage in 31 wild-born individuals from two populations. SNPs and genotypes were called using three different algorithms. We obtained substantially different SNP datasets depending on the SNP caller. Genotype validations revealed that the Unified Genotyper of the Genome Analysis Toolkit and SAMtools performed significantly better than a caller from CLC Genomics Workbench (CLC). Of all conflicting genotype calls, CLC was correct in only 17% of cases. Furthermore, conflicting genotypes between two algorithms showed a systematic bias in that one caller almost exclusively assigned heterozygotes, while the other almost exclusively assigned homozygotes. Conclusions

  19. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz-basis property of the Coiflets generate a well-conditioned impedance matrix with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting the multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.

  20. A novel algorithm for calling mRNA m6A peaks by modeling biological variances in MeRIP-seq data

    PubMed Central

    Cui, Xiaodong; Meng, Jia; Zhang, Shaowu; Chen, Yidong; Huang, Yufei

    2016-01-01

    Motivation: N6-methyl-adenosine (m6A) is the most prevalent mRNA methylation, and precise prediction of its mRNA location is important for understanding its function. A recent sequencing technology, known as Methylated RNA Immunoprecipitation Sequencing (MeRIP-seq), has been developed for transcriptome-wide profiling of m6A. We previously developed a peak calling algorithm called exomePeak. However, exomePeak over-simplifies data characteristics and ignores the variance of reads among replicates and the dependency of reads across a site region. To further improve the performance, a new model is needed to address these important issues of MeRIP-seq data. Results: We propose a novel, graphical model-based peak calling method, MeTPeak, for transcriptome-wide detection of m6A sites from MeRIP-seq data. MeTPeak explicitly models the read count of an m6A site and introduces a hierarchical layer of Beta variables to capture the variances and a Hidden Markov model to characterize the read dependency across a site. In addition, we developed a constrained Newton's method and designed a log-barrier function to compute analytically intractable, positively constrained Beta parameters. We applied our algorithm to simulated and real biological datasets and demonstrated significant improvement in detection performance and robustness over exomePeak. Prediction results on publicly available MeRIP-seq datasets are also validated and shown to be able to recapitulate the known patterns of m6A, further validating the improved performance of MeTPeak. Availability and implementation: The package 'MeTPeak' is implemented in R and C++, and additional details are available at https://github.com/compgenomics/MeTPeak Contact: yufei.huang@utsa.edu or xdchoi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307641

  1. A Fast Multi-Object Extraction Algorithm Based on Cell-Based Connected Components Labeling

    NASA Astrophysics Data System (ADS)

    Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku

    We describe a cell-based connected component labeling algorithm that calculates the 0th- and 1st-order moment features as attributes of the labeled regions; these indicate the regions' sizes and positions for multi-object extraction. Based on the additivity of moment features, the cell-based labeling algorithm labels divided cells of a certain size in an image by scanning the image only once, obtaining the moment features of the labeled regions with remarkably reduced computational complexity and memory consumption. Our algorithm is a simple one-time-scan cell-based labeling algorithm, which is suitable for hardware and parallel implementation. We also compared it with conventional labeling algorithms. The experimental results showed that our algorithm is faster than conventional raster-scan labeling algorithms.
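
    The additivity the algorithm exploits is simply that the raw moments of a union of disjoint regions are the sums of the regions' moments, so per-cell results can be merged without revisiting pixels. A tiny sketch, assuming each region is summarized by (m00, m10, m01):

        def merge_moments(a, b):
            """Combine raw moments (m00, m10, m01) of two disjoint regions."""
            return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

        def size_and_centroid(m):
            m00, m10, m01 = m
            return m00, (m10 / m00, m01 / m00)    # area and centroid of the merged region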

  2. A fast partitioning algorithm using adaptive Mahalanobis clustering with application to seismic zoning

    NASA Astrophysics Data System (ADS)

    Morales-Esteban, Antonio; Martínez-Álvarez, Francisco; Scitovski, Sanja; Scitovski, Rudolf

    2014-12-01

    In this paper we construct an efficient adaptive Mahalanobis k-means algorithm. In addition, we propose a new efficient algorithm to search for a globally optimal partition obtained by using the adaptive Mahalanobis distance-like function. The algorithm is a generalization of the previously proposed incremental algorithm (Scitovski and Scitovski, 2013). It successively finds optimal partitions with k = 2, 3, … clusters, so it can also be used to estimate the most appropriate number of clusters in a partition by means of various validity indexes. The algorithm has been applied to the seismic catalogues of Croatia and the Iberian Peninsula, two regions characterized by moderate seismic activity. One of the main advantages of the algorithm is its ability to discover not only circular but also elliptical shapes, whose geometry fits the faults better. Three seismogenic zonings are proposed for Croatia and two for the Iberian Peninsula and adjacent areas, according to the clusters discovered by the algorithm.
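
    A Mahalanobis distance-like function lets each cluster carry its own covariance, which is what makes elliptical clusters discoverable. The sketch below uses the common determinant normalization so clusters with different covariances remain comparable; the authors' exact normalization is an assumption on our part.

        import numpy as np

        def mahalanobis_like(x, mu, S):
            """Distance-like value of point x to a cluster with mean mu and covariance S.
            S is rescaled to unit determinant so differently shaped clusters compete fairly."""
            Sn = S / np.linalg.det(S) ** (1.0 / len(mu))
            d = x - mu
            return float(d @ np.linalg.solve(Sn, d))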

  3. Global convergence analysis of fast multiobjective gradient-based dose optimization algorithms for high-dose-rate brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S

    2003-03-07

    We consider the problem of the global convergence of gradient-based optimization algorithms for interstitial high-dose-rate (HDR) brachytherapy dose optimization using variance-based objectives. Possible local minima could lead to only sub-optimal solutions. We perform a configuration space analysis using a representative set of the entire non-dominated solution space. A set of three prostate implants is used in this study. We compare the results obtained by conjugate gradient algorithms, two variable metric algorithms and fast simulated annealing. For the variable metric algorithm BFGS from Numerical Recipes, large fluctuations are observed. The limited-memory L-BFGS algorithm and the conjugate gradient algorithm FRPR are globally convergent. Local minima or degenerate states are not observed. We study the possibility of obtaining a representative set of non-dominated solutions using optimal solution rearrangement and a warm start mechanism. For the surface and volume dose variance and their derivatives, a method is proposed which significantly reduces the number of required operations. The optimization time, ignoring a preprocessing step, is independent of the number of sampling points in the planning target volume. Multiobjective dose optimization in HDR brachytherapy using L-BFGS and the new modified computation method for the objectives and derivatives has been accelerated, depending on the number of sampling points, by a factor in the range 10-100.

  4. Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm.

    PubMed

    Sanz, Martín; Picazo-Bueno, José Angel; García, Javier; Micó, Vicente

    2015-08-10

    We report on a novel algorithm for high-resolution quantitative phase imaging in a new concept of lensless holographic microscope based on single-shot multi-wavelength illumination. This new microscope layout, reported by Noom et al. over the past year and named by us MISHELF (the initials coming from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy, arises from the simultaneous illumination and recording of multiple diffraction patterns in the Fresnel domain. In combination with a novel and fast iterative phase-retrieval algorithm, MISHELF microscopy is capable of high-resolution (micron range), phase-retrieved (twin-image eliminated) biological imaging of dynamic events. In this contribution, MISHELF microscopy is demonstrated through a qualitative description of the concept, the algorithm implementation, and experimental validation using both a synthetic object (a resolution test target) and a biological sample (a swine sperm sample) for the case of three (RGB) illumination wavelengths. The proposed method thus becomes an alternative instrument that improves the capabilities of existing lensless microscopes.

  5. The Wavenumber Algorithm: Fast Fourier-Domain Imaging Using Full Matrix Capture

    NASA Astrophysics Data System (ADS)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2009-03-01

    We develop a Fourier-domain approach to full matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension to the wavenumber algorithm for full matrix capture is described and the performance of the new algorithm is compared to the total focusing method (TFM), which we use as a representative benchmark for the time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility. The wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data.
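
    The TFM benchmark referred to above is plain delay-and-sum: for every image pixel, each transmit-receive waveform is sampled at the total time of flight and the samples are summed. A minimal sketch, assuming full matrix capture data fmc[tx, rx, t] sampled at rate fs from a linear array with element x-coordinates elem_x at z = 0 (all names are ours):

        import numpy as np

        def tfm(fmc, elem_x, xs, zs, c, fs):
            """Total focusing method image over the pixel grid (xs, zs); c = wave speed."""
            n_el, _, n_t = fmc.shape
            img = np.zeros((len(zs), len(xs)))
            rx = np.arange(n_el)
            for iz, z in enumerate(zs):
                for ix, x in enumerate(xs):
                    d = np.hypot(elem_x - x, z)           # element-to-pixel distances
                    for tx in range(n_el):
                        t = (d[tx] + d) / c               # flight time tx -> pixel -> every rx
                        idx = np.clip((t * fs).astype(int), 0, n_t - 1)
                        img[iz, ix] += fmc[tx, rx, idx].sum()
            return np.abs(img)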

  6. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning.

    PubMed

    Fox, Christopher; Romeijn, H Edwin; Dempsey, James F

    2006-05-01

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms are an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity-modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. A 100- to 360-fold CPU-time improvement over Siddon's method was observed for the polygon ray-tracing algorithm when classifying voxels for target and structure membership. A 2.6- to 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet-extent approach and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross-product technique.
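
    The classic crossing-number point-in-polygon test underlying such voxel classification is compact enough to sketch; the paper's improved variant adds incremental bookkeeping that is omitted here.

        def point_in_polygon(px, py, poly):
            """Crossing-number test; poly is a list of (x, y) vertices in order."""
            inside = False
            n = len(poly)
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > py) != (y2 > py):                        # edge straddles the horizontal ray
                    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < x_cross:
                        inside = not inside
            return inside

        assert point_in_polygon(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)])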

  7. Fast Algorithm and Application of Wavelet Multiple-scale Edge Detection Filter

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Yang, Min; Tong, Qiang; Zhang, Yue

    This paper focuses on the algorithmic theory of the two-dimensional wavelet transform used for image edge detection. To simplify the algorithm, we propose to factor the two-dimensional dyadic wavelet into a product of one-dimensional dyadic wavelets, so that the wavelet multiple-scale edge detection can be achieved quickly with filters. The process by which the wavelet transform is used for multiple-scale edge detection is discussed in detail. Finally, the algorithm is applied to vehicle license plate image detection. Compared with the results of Sobel, Canny and other operators, the algorithm shows great feasibility and effectiveness.

  8. A fast, uncoupled, compressible, two-dimensional, unsteady boundary layer algorithm with separation for engine inlets

    NASA Technical Reports Server (NTRS)

    Roach, Robert L.; Nelson, Chris; Sakowski, Barbara; Darling, Douglas; Vandewall, Allan G.

    1992-01-01

    A finite difference boundary layer algorithm was developed to model viscous effects when an inviscid core flow solution is given. This algorithm solved each boundary layer equation separately, then iterated to find a solution. Solving the boundary layer equations sequentially was 2.4 to 4.0 times faster than solving the boundary layer equations simultaneously. This algorithm used a modified Baldwin-Lomax turbulence model, a weighted average of forward and backward differencing of the pressure gradient, and a backward sweep of the pressure. With these modifications, the boundary layer algorithm was able to model flows with and without separation. The number of grid points used in the boundary layer algorithm affected the stability of the algorithm as well as the accuracy of the predictions of friction coefficients and momentum thicknesses. Results of this boundary layer algorithm compared well with experimental observations of friction coefficients and momentum thicknesses. In addition, when used interactively with an inviscid flow algorithm, this boundary layer algorithm corrected for viscous effects to give a good match with experimental observations for pressures in a supersonic inlet.

  9. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements in segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments confirming the practical value of the algorithm are documented.

  10. A fast algorithm for solving a linear feasibility problem with application to Intensity-Modulated Radiation Therapy.

    PubMed

    Herman, Gabor T; Chen, Wei

    2008-03-01

    The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
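
    The sequential-projection idea behind the ART family is easy to sketch for the linear feasibility problem Ax <= b: cycle through the constraints and project onto each violated half-space. ART3 adds reflections and ART3+ the constraint-skipping logic described above; both refinements are omitted from this minimal illustration.

        import numpy as np

        def cyclic_projection(A, b, x0, sweeps=100):
            """Find x with A @ x <= b by cyclically projecting onto violated half-spaces."""
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(sweeps):
                for a_i, b_i in zip(A, b):
                    r = a_i @ x - b_i
                    if r > 0:                             # violated: project onto a_i . x = b_i
                        x -= r * a_i / (a_i @ a_i)
            return x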

  11. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Li, Weizhong [San Diego Supercomputer Center

    2016-07-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  12. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, in contrast to traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
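
    A common lightweight alternative to upsampled cross-correlation is to refine the integer correlation peak with a three-point parabola fit; the sketch below shows that generic trick, not the paper's Taylor-approximation or localization refinements.

        import numpy as np

        def subpixel_peak(corr):
            """Refine the 1-D cross-correlation peak location with a parabola fit."""
            i = int(np.argmax(corr))
            if 0 < i < len(corr) - 1:
                y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
                i += 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex of the fitted parabola
            return i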

  13. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, in contrast to traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  14. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality

    PubMed Central

    Wang, Xueyi

    2011-01-01

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces. PMID:22247818
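
    The pruning rule at the heart of this approach is the triangle inequality: if the query's distance to a cluster center, minus the cluster's radius, already exceeds the current k-th best distance, no point in that cluster can improve the answer. The sketch below uses this coarse cluster-level bound; the paper's kMkNN prunes more finely with stored per-point distances to the center.

        import heapq
        import numpy as np

        def kmknn_query(q, centers, radius, members, points, k):
            """k-NN search over k-means clusters with triangle-inequality pruning.
            radius[c] = max distance from center c to its members (precomputed)."""
            d_qc = np.linalg.norm(centers - q, axis=1)
            best = []                                     # max-heap via negated distances
            for c in np.argsort(d_qc):                    # nearest clusters first
                if len(best) == k and d_qc[c] - radius[c] >= -best[0][0]:
                    continue                              # no member of c can beat the k-th best
                for i in members[c]:
                    d = np.linalg.norm(points[i] - q)
                    if len(best) < k:
                        heapq.heappush(best, (-d, i))
                    elif d < -best[0][0]:
                        heapq.heapreplace(best, (-d, i))
            return sorted((-nd, i) for nd, i in best)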

  15. A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints

    PubMed Central

    Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei

    2015-01-01

    Purpose: To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods: We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions and joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems, each of which is then solved using an efficient alternating minimization scheme. Results: The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speedup over the original Quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, a comparison of fiber tracking results around the hippocampus region before and after denoising is also shown to demonstrate the denoising effects of the new algorithm. Conclusion: The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066

  16. Enhanced codebook algorithm for fast moving object detection from dynamic background using scene visual perception

    NASA Astrophysics Data System (ADS)

    Mousse, Mikaël A.; Motamed, Cina; Ezin, Eugène C.

    2016-11-01

    The detection of moving objects in a video sequence is the first step in an automatic video surveillance system. This work proposes an enhancement of a codebook-based algorithm for moving object extraction. The proposed algorithm uses a perception-based approach to optimize the complexity of foreground information extraction via a modified codebook algorithm. The purpose of the adaptive strategy is to reduce the computational complexity of the foreground detection algorithm while maintaining its global accuracy. In this algorithm, we use a superpixel segmentation approach to model the spatial dependencies between pixels. Superpixel processing is focused on those superpixels near the possible locations of foreground objects. The performance of the proposed algorithm is evaluated and compared with state-of-the-art algorithms using a public dataset of sequences with dynamic backgrounds. Experimental results show that the proposed algorithm achieves the best frame processing rate during foreground detection.

  17. A fast and accurate algorithm for ℓ1 minimization problems in compressive sampling

    NASA Astrophysics Data System (ADS)

    Chen, Feishe; Shen, Lixin; Suter, Bruce W.; Xu, Yuesheng

    2015-12-01

    An accurate and efficient algorithm for solving the constrained ℓ1-norm minimization problem is much needed and crucial for the success of sparse signal recovery in compressive sampling. We tackle the constrained ℓ1-norm minimization problem by reformulating it via an indicator function which describes the constraints. The resulting model is solved efficiently and accurately by using an elegant proximity operator-based algorithm. Numerical experiments show that the proposed algorithm performs well for sparse signals with magnitudes over a high dynamic range. Furthermore, it performs significantly better than the well-known algorithms NESTA (a shorthand for Nesterov's algorithm) and DADM (dual alternating direction method) in terms of the quality of restored signals and the computational complexity measured in CPU time consumed.
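
    The paper's specific proximity-operator algorithm is not reproduced in this record; as a hedged illustration, the sketch below shows the central ingredient, the proximity operator of the ℓ1 norm (soft thresholding), inside a plain proximal-gradient (ISTA) loop for an unconstrained LASSO surrogate. The paper's constrained, indicator-function formulation would replace the quadratic data term.

    import numpy as np

    def prox_l1(x, t):
        # Proximity operator of t * ||.||_1: soft thresholding.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam, n_iter=500):
        # Proximal gradient for min 0.5 * ||Ax - b||^2 + lam * ||x||_1.
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = prox_l1(x - grad / L, lam / L)
        return x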

  18. Recursive least-squares algorithms for fast discrete frequency domain equalization

    NASA Astrophysics Data System (ADS)

    Picchi, G.; Prati, G.

    A simple least-squares initialization algorithm (IA) is defined for use with a self-orthogonalizing equalization algorithm in the discrete frequency domain (DFD). A parallel recursive relation is formulated for updating the Kalman vector in the Kalman/Godard algorithm. The DFD algorithm is shown to be a modified LS algorithm, thus permitting an exact solution of the LS problem during the equalizer fill-up stage, when the data correlation matrix is singular. The solution to the LS problem provides a basis for initialization of the DFD equalizer coefficients. The results of a simulation of on-line initialization of a DFD equalizer with a recursive initialization algorithm demonstrate a weighting capability that minimizes the effects of mean square errors of poorly estimated small-value taps.

  19. A very fast algorithm for simultaneously performing connected-component labeling and Euler number computing.

    PubMed

    He, Lifeng; Chao, Yuyan

    2015-09-01

    Labeling connected components and calculating the Euler number in a binary image are two fundamental processes for computer vision and pattern recognition. This paper presents an ingenious method for identifying a hole in a binary image in the first scan of connected-component labeling. Our algorithm can perform connected-component labeling and Euler number computing simultaneously, and it can also calculate the connected-component (object) number and the hole number efficiently. The additional cost for calculating the hole number is only O(H), where H is the hole number in the image. Our algorithm can be implemented almost in the same way as a conventional equivalent-label-set-based connected-component labeling algorithm. We prove the correctness of our algorithm and use experimental results for various kinds of images to demonstrate the power of our algorithm.
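
    The one-scan algorithm itself is not given in this record; a reference computation of the same three quantities, using two labeling passes and the standard duality between an 8-connected foreground and a 4-connected background, might look as follows (scipy.ndimage is an assumption of this sketch):

    import numpy as np
    from scipy import ndimage

    def objects_holes_euler(img):
        # Object count, hole count, and Euler number (= objects - holes)
        # of a binary image. Not the paper's one-scan method: a reference
        # computation using two labeling passes.
        img = np.pad(img.astype(bool), 1)     # guarantee one outer background
        eight = np.ones((3, 3), dtype=int)    # 8-connectivity for foreground
        _, n_objects = ndimage.label(img, structure=eight)
        # 4-connected background components: the outer one plus one per hole.
        _, n_bg = ndimage.label(~img)
        n_holes = n_bg - 1
        return n_objects, n_holes, n_objects - n_holes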

  20. Application of two oriented partial differential equation filtering models on speckle fringes with poor quality and their numerically fast algorithms.

    PubMed

    Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng

    2013-03-20

    In this paper, we are concerned with denoising experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns of poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single oriented PDE filter and the double oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns of very poor quality, and compare them with other efficient filtering methods, including the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have been shown to be efficient denoising methods in previously published comparative analyses. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. To address the main shortcoming of the two oriented PDE models, we then develop numerically fast algorithms for both, based on a Gauss-Seidel strategy. The proposed numerical algorithms accelerate convergence greatly and perform significantly better in terms of computational efficiency. These numerically fast algorithms extend automatically to some other PDE filtering models.

  1. Fast Sweeping Algorithms for a Class of Hamilton-Jacobi Equations

    DTIC Science & Technology

    2003-05-06

    the above type with variable coefficients. 1.1. Solving eikonal equations. In geometrical optics [10], the eikonal equation √(φ_x² + φ_y²) = r(x, y) ... of iterations for isotropic, homogeneous eikonal equations. This points out a future research direction of bounding the number of sweeping iterations ... difficult cases. Key words: Hamilton-Jacobi equations, fast marching, fast sweeping, upwind finite differencing, eikonal equations. AMS subject
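
    Only a fragment of this record survives, but the fast sweeping technique it names is standard; a minimal sketch for the 2-D eikonal equation |∇φ| = r(x, y), with Zhao-style Gauss-Seidel updates in four alternating sweep orderings on a uniform grid of spacing h, follows.

    import numpy as np

    def fast_sweep(r, src, h=1.0, n_sweeps=8):
        # src is a boolean mask of source points where phi = 0.
        ny, nx = r.shape
        phi = np.where(src, 0.0, 1e10)
        orders = [(range(ny), range(nx)),
                  (range(ny), range(nx - 1, -1, -1)),
                  (range(ny - 1, -1, -1), range(nx)),
                  (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
        for s in range(n_sweeps):
            ys, xs = orders[s % 4]
            for i in ys:
                for j in xs:
                    if src[i, j]:
                        continue
                    # Upwind Godunov discretization: smallest neighbor
                    # in each direction (clamped at the boundary).
                    a = min(phi[max(i - 1, 0), j], phi[min(i + 1, ny - 1), j])
                    b = min(phi[i, max(j - 1, 0)], phi[i, min(j + 1, nx - 1)])
                    f = r[i, j] * h
                    if abs(a - b) >= f:
                        cand = min(a, b) + f
                    else:
                        cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    phi[i, j] = min(phi[i, j], cand)   # monotone update
        return phi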

  2. House calls.

    PubMed

    Unwin, Brian K; Tatum, Paul E

    2011-04-15

    House calls provide a unique perspective on patients' environment and health problems. The demand for house calls is expected to increase considerably in future decades as the U.S. population ages. Although study results have been inconsistent, house calls involving multidisciplinary teams may reduce hospital readmissions and long-term care facility stays. Common indications for house calls are management of acute or chronic illnesses, and palliative care. Medicare beneficiaries must meet specific criteria to be eligible for home health services. The INHOMESSS mnemonic provides a checklist for components of a comprehensive house call. In addition to performing a clinical assessment, house calls may involve observing the patient performing daily activities, reconciling medication discrepancies, and evaluating home safety. House calls can be integrated into practice with careful planning, including clustering house calls by geographic location and coordinating visits with other health care professionals and agencies.

  3. Fast computing global structural balance in signed networks based on memetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng

    2014-12-01

    Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted attention in recent years, measures how unbalanced a signed network is; it is a nondeterministic polynomial-time hard (NP-hard) problem. Many approaches have been developed to compute global balance, but the results they obtain are partial and unsatisfactory. In this study, the computation of global structural balance is cast as an optimization problem and solved with a memetic algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, the energy function, which measures a distance to exact balance. Our proposed algorithm combines a genetic algorithm with a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
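
    The record names but does not define the energy function; a common choice, sketched below under that assumption, counts frustrated edges under a faction assignment (positive edges across factions, negative edges within one), together with the kind of greedy single-node local search a memetic algorithm would embed.

    def balance_energy(signed_edges, faction):
        # signed_edges: iterable of (i, j, s) with s = +1 or -1.
        # faction: list of faction labels, one per node.
        # An edge is frustrated if positive across factions or negative
        # within one; the frustrated-edge count is a distance to balance.
        return sum(1 for i, j, s in signed_edges
                   if (s > 0) != (faction[i] == faction[j]))

    def greedy_pass(signed_edges, faction, labels=(0, 1)):
        # Accept single-node moves whenever they lower the energy.
        for v in range(len(faction)):
            best = balance_energy(signed_edges, faction)
            for lab in labels:
                old = faction[v]
                faction[v] = lab
                e = balance_energy(signed_edges, faction)
                if e < best:
                    best = e             # keep the move
                else:
                    faction[v] = old     # revert
        return faction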

  4. A fast variable step size integration algorithm suitable for computer simulations of physiological systems

    NASA Technical Reports Server (NTRS)

    Neal, L.

    1981-01-01

    A simple numerical algorithm was developed for use in computer simulations of systems which are both stiff and stable. The method is implemented in subroutine form and applied to the simulation of physiological systems.

  5. Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations

    NASA Astrophysics Data System (ADS)

    Cichocki, Andrzej; Phan, Anh-Huy

    Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representation, that have many potential applications in computational neuroscience, multi-sensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the alpha and beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based BSS, not only in the over-determined case but also in the under-determined (over-complete) case (i.e., for a system which has fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially with use of the multi-layer hierarchical NMF approach [3].

  6. Deblending of Simultaneous-source Seismic Data using Fast Iterative Shrinkage-thresholding Algorithm with Firm-thresholding

    NASA Astrophysics Data System (ADS)

    Qu, Shan; Zhou, Hui; Liu, Renwu; Chen, Yangkang; Zu, Shaohuan; Yu, Sa; Yuan, Jiang; Yang, Yahui

    2016-08-01

    In this paper, an improved algorithm is proposed to separate blended seismic data. We formulate the deblending problem as a regularization problem in both the common receiver domain and the frequency domain. It is suitable for different kinds of coding methods, such as the random time delays discussed in this paper. Two basic approximation frameworks, the iterative shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding algorithm (FISTA), are compared. We also derive the Lipschitz constant used in both frameworks. In order to achieve faster convergence and higher accuracy, we propose to use the firm-thresholding function as the thresholding function in ISTA and FISTA. Two synthetic blended examples demonstrate that all four algorithm variants (ISTA with soft- and firm-thresholding, FISTA with soft- and firm-thresholding) are effective, and that FISTA with a firm-thresholding operator exhibits the most robust behavior. Finally, we show one numerically blended field data example processed by FISTA with the firm-thresholding function.
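
    A sketch of the firm-thresholding operator (assuming thresholds lam1 < lam2) and of a FISTA skeleton using it follows; the blending operator `op` and its adjoint `op_t` are placeholders, not the paper's operators.

    import numpy as np

    def firm_threshold(x, lam1, lam2):
        # Firm thresholding: zero below lam1, identity beyond lam2,
        # linear in between; reduces the bias of soft thresholding.
        out = np.where(np.abs(x) <= lam1, 0.0,
                       np.sign(x) * lam2 * (np.abs(x) - lam1) / (lam2 - lam1))
        return np.where(np.abs(x) > lam2, x, out)

    def fista_firm(op, op_t, d, lam1, lam2, L, n_iter=100):
        # FISTA skeleton with soft thresholding replaced by firm
        # thresholding; L is the Lipschitz constant of the gradient.
        x = z = op_t(d) * 0.0          # zero initialization, correct shape
        t = 1.0
        for _ in range(n_iter):
            x_new = firm_threshold(z - op_t(op(z) - d) / L, lam1, lam2)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x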

  7. A modified VPPM algorithm of VLC systems suitable for fast dimming environment

    NASA Astrophysics Data System (ADS)

    Lee, Seungwoo; Ahn, Byung-Gu; Ju, MinChul; Park, Youngil

    2016-04-01

    As LED applications with fast dimming appear, the variable pulse position modulation (VPPM)-based visible light communication (VLC) scheme is required to work in this environment as well. With the previous VPPM scheme, however, transmission was possible only at steady dimming levels, not during transition periods. In this work, we propose a novel VPPM scheme that operates even under rapid brightness fluctuations. For this purpose, we adopt a stepwise brightness change at the LED and moving-average correlation masks to cope with the changing brightness. The implemented VLC testbed demonstrates that the proposed scheme is appropriate for fast dimming environments.

  8. Research on fast algorithm of small UAV navigation in non-linear matrix reductionism method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Fang, Jiancheng; Sheng, Wei; Cao, Juanjuan

    2008-10-01

    The low Reynolds numbers of small UAVs result in unfavorable aerodynamic conditions for controlled flight, and when operated near the ground, a small UAV is seriously affected by low-frequency interference caused by atmospheric disturbance. Therefore, the GNC system needs high-frequency attitude estimation and control to stabilize the UAV. As the dimensions of small UAVs shrink, their GNC systems increasingly adopt embedded design technology to achieve compactness, light weight, and low power consumption. At the same time, the computational capability of the GNC system is limited to a certain extent. Therefore, a high-speed navigation algorithm has become an urgent demand for the GNC system. Aiming at this requirement, a non-linear matrix reduction approach is adopted in this paper to create a new high-speed navigation algorithm, which holds the radii of the meridian circle and the prime vertical circle constant and linearizes the position matrix calculation formulae of the navigation equation. Compared with the normal navigation algorithm, this high-speed navigation algorithm reduces the number of operations by 17.3%. Within the small UAV's mission radius (20 km), the position error is less than 0.13 m. The results of semi-physical experiments and small-UAV autopilot testing prove that this algorithm can realize high-frequency attitude estimation and control, and properly avoid low-frequency interference caused by atmospheric disturbance.

  9. Optimization of ultra-fast interactions using laser pulse temporal shaping controlled by a deterministic algorithm

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.

    2014-02-01

    Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.

  10. Optimization of ultra-fast interactions using laser pulse temporal shaping controlled by a deterministic algorithm

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.

    2013-04-01

    Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.

  11. A fast and Robust Algorithm for general inequality/equality constrained minimum time problems

    SciTech Connect

    Briessen, B.; Sadegh, N.

    1995-12-01

    This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and the number of parameters used to discretize the control input history. The method is being applied to a three-link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.

  12. A variable splitting based algorithm for fast multi-coil blind compressed sensing MRI reconstruction.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Jacob, Mathews

    2014-01-01

    Recent work on blind compressed sensing (BCS) has shown that exploiting sparsity in dictionaries learnt directly from the data at hand can outperform compressed sensing (CS) with fixed dictionaries. A challenge with BCS, however, is the large computational complexity of its optimization, which limits its practical use in several MRI applications. In this paper, we propose a novel optimization algorithm that utilizes variable splitting strategies to significantly improve the convergence speed of the BCS optimization. The splitting allows us to efficiently decouple the sparse coefficient and dictionary update steps from the data fidelity term, resulting in subproblems that admit closed-form analytical solutions, which would otherwise require slower iterative conjugate gradient algorithms. Through experiments on multi-coil parametric MRI data, we demonstrate the superior performance of BCS over conventional CS schemes, while achieving convergence speedup factors of over 10-fold compared with the previously proposed implementation of the BCS algorithm.

  13. A Fast Map Merging Algorithm in the Field of Multirobot SLAM

    PubMed Central

    Fan, Xiaoping; Zhang, Heng

    2013-01-01

    In recent years, research on single-robot simultaneous localization and mapping (SLAM) has achieved great success. However, multirobot SLAM faces many challenging problems, including unknown robot poses, unshared maps, and unstable communication. In this paper, a map merging algorithm based on virtual robot motion is proposed for multirobot SLAM. A thinning algorithm is used to construct the skeleton of the grid map's empty area, and a mobile robot is simulated in one map. The simulated data are used as information sources in the other map to perform partial-map Monte Carlo localization; if localization succeeds, the relative pose hypotheses between the two maps can be computed easily. We verify these hypotheses using the rendezvous technique and use them as initial values to optimize the estimation by a heuristic random search algorithm. PMID:24302855

  14. A fast image super-resolution algorithm using an adaptive Wiener filter.

    PubMed

    Hardie, Russell

    2007-12-01

    A computationally simple super-resolution algorithm using a type of adaptive Wiener filter is proposed. The algorithm produces an improved resolution image from a sequence of low-resolution (LR) video frames with overlapping field of view. The algorithm uses subpixel registration to position each LR pixel value on a common spatial grid that is referenced to the average position of the input frames. The positions of the LR pixels are not quantized to a finite grid as with some previous techniques. The output high-resolution (HR) pixels are obtained using a weighted sum of LR pixels in a local moving window. Using a statistical model, the weights for each HR pixel are designed to minimize the mean squared error and they depend on the relative positions of the surrounding LR pixels. Thus, these weights adapt spatially and temporally to changing distributions of LR pixels due to varying motion. Both a global and spatially varying statistical model are considered here. Since the weights adapt with distribution of LR pixels, it is quite robust and will not become unstable when an unfavorable distribution of LR pixels is observed. For translational motion, the algorithm has a low computational complexity and may be readily suitable for real-time and/or near real-time processing applications. With other motion models, the computational complexity goes up significantly. However, regardless of the motion model, the algorithm lends itself to parallel implementation. The efficacy of the proposed algorithm is demonstrated here in a number of experimental results using simulated and real video sequences. A computational analysis is also presented.
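
    The weight design can be sketched as follows, assuming an isotropic parametric autocorrelation model r(d) = sigma2 * rho**d (one common choice; the paper's exact statistical model may differ): for each HR pixel, the minimum-MSE weights solve a small linear system built from the geometry of the LR samples in the moving window.

    import numpy as np

    def wiener_weights(lr_pos, hr_pos, rho=0.75, sigma2=1.0, noise_var=0.01):
        # MMSE weights for one HR pixel from nearby LR samples.
        # lr_pos: (N, 2) LR sample positions; hr_pos: (2,) HR pixel position.
        d_ll = np.linalg.norm(lr_pos[:, None, :] - lr_pos[None, :, :], axis=2)
        R = sigma2 * rho ** d_ll + noise_var * np.eye(len(lr_pos))
        p = sigma2 * rho ** np.linalg.norm(lr_pos - hr_pos, axis=1)
        return np.linalg.solve(R, p)   # w minimizing E[(x - w^T y)^2]

    # The HR estimate is then weights @ lr_values for the window's samples.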

  15. A fast hidden line algorithm with contour option. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Thue, R. E.

    1984-01-01

    The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden line and contour subsystem is implemented in the MOVIE.BYU Display package and is compared to the subsystems already existing in the MOVIE.BYU package. The comparison reveals that the modified JonesD hidden line and contour subsystem yields substantial processing-time savings when processing moderate-sized models composed of 1,000 elements or fewer. There are, however, some limitations to the modified JonesD subsystem.

  16. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogeneous areas), the algorithm reacts satisfactorily to random noise and to breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are on the order of half a second on a desktop computer; the precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature.

  17. Fast conjugate gradient algorithm extension for analyzer-based imaging reconstruction

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Brankov, Jovan G.

    2016-04-01

    This paper presents an extension of the classic Conjugate Gradient Algorithm. Motivated by the Analyzer-Based Imaging (ABI) inverse problem, the novel method maximizes the Poisson regularized log-likelihood with a non-linear transformation of the parameter faster than other solutions. The new approach takes advantage of the special properties of the Poisson log-likelihood to conjugate each ascent direction with respect to all the previous directions taken by the algorithm. Our solution is compared with the general solution for non-quadratic unconstrained problems, the Polak-Ribière formula. Both methods are applied to the ABI reconstruction problem.

  18. A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)

    DTIC Science & Technology

    2013-01-22

    performance of algorithms in terms of various error metrics, speed, and robustness to noise. All the experiments are performed in Matlab 7.11 on ... online version available, (2011). [17] J.-J. Moreau, Fonctions convexes duales et points proximaux dans un espace hilbertien, C.R. Acad. Sci. Paris Sér.

  19. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    PubMed Central

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-01-01

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14× speedup over the conventional algorithm and a 3.3× speedup over RI-K. PMID:26178096

  20. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    SciTech Connect

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-07-14

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14× speedup over the conventional algorithm and a 3.3× speedup over RI-K.

  1. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules feasible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
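
    The surface-based estimation is, at its core, a multi-source shortest-path computation over the SES mesh graph seeded at convex-hull vertices; a minimal Dijkstra sketch follows, with the adjacency format an assumption of this illustration.

    import heapq

    def travel_depth(adjacency, hull_vertices):
        # adjacency: {v: [(neighbor, edge_length), ...]} for the SES mesh.
        # hull_vertices: vertices lying on the convex hull (depth 0).
        depth = {v: float('inf') for v in adjacency}
        heap = []
        for v in hull_vertices:
            depth[v] = 0.0
            heapq.heappush(heap, (0.0, v))
        while heap:
            d, v = heapq.heappop(heap)
            if d > depth[v]:
                continue                  # stale heap entry
            for w, length in adjacency[v]:
                nd = d + length
                if nd < depth[w]:
                    depth[w] = nd
                    heapq.heappush(heap, (nd, w))
        return depth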

  2. A fast multigrid algorithm for energy minimization under planar density constraints.

    SciTech Connect

    Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science

    2010-09-07

    The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving a correction to this problem. The method is demonstrated in various graph drawing (visualization) instances.

  3. A fast algorithm for direction of arrival estimation in multipath environments

    NASA Astrophysics Data System (ADS)

    Tayem, Nizar; Naraghi-Pour, Mort

    2007-04-01

    A new spectral direction of arrival (DOA) estimation algorithm is proposed that can rapidly estimate the DOA of non-coherent as well as coherent incident signals. As such, the algorithm is effective for DOA estimation in multipath environments. The proposed method constructs a data model based on a Hermitian Toeplitz matrix whose rank is related to the DOA of the incoming signals and is not affected if the incoming sources are highly correlated. The data are rearranged in such a way that the dimensionality of the noise space is extended. Consequently, the signal and noise spaces can be estimated more accurately. The proposed method has several advantages over the well-known classical subspace algorithms such as MUSIC and ESPRIT, as well as the Matrix Pencil (MP) method. In particular, the proposed method is suitable for real-time applications since it does not require multiple snapshots to estimate the DOAs. Moreover, no forward/backward spatial smoothing of the covariance matrix is needed, resulting in reduced computational complexity. Finally, the proposed method can estimate the DOA of coherent sources. The simulation results verify that the proposed method outperforms the MUSIC, ESPRIT and Matrix Pencil algorithms.

  4. EMILiO: a fast algorithm for genome-scale strain design.

    PubMed

    Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan

    2011-05-01

    Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design.

  5. Fast Algorithms for Structured Least Squares and Total Least Squares Problems

    PubMed Central

    Kalsi, Anoop; O’Leary, Dianne P.

    2006-01-01

    We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices. PMID:27274922

  6. Fast Algorithms for Structured Least Squares and Total Least Squares Problems.

    PubMed

    Kalsi, Anoop; O'Leary, Dianne P

    2006-01-01

    We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices.

  7. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.

  8. Fast divide-and-conquer algorithm for evaluating polarization in classical force fields

    NASA Astrophysics Data System (ADS)

    Nocito, Dominique; Beran, Gregory J. O.

    2017-03-01

    Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
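
    A minimal sketch of the DC-JI core, block Jacobi iteration with direct per-block solves on a generic linear system A x = b, is shown below; the k-means clustering, fuzzy overlapping blocks, and DIIS extrapolation of the paper are omitted.

    import numpy as np

    def block_jacobi(A, b, blocks, n_iter=50, tol=1e-8):
        # blocks: list of index arrays (e.g., from k-means clustering).
        # Each diagonal block is solved directly; inter-block couplings
        # are iterated, as in the divide-and-conquer Jacobi idea.
        x = np.zeros_like(b)
        inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]
        for _ in range(n_iter):
            x_new = x.copy()
            for idx, Ainv in zip(blocks, inv_blocks):
                # Residual from couplings outside the block:
                # b_I - (A x)_I + A_II x_I, then a direct block solve.
                r = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
                x_new[idx] = Ainv @ r
            if np.linalg.norm(x_new - x) < tol * max(np.linalg.norm(x), 1.0):
                return x_new
            x = x_new
        return x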

  9. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    PubMed

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M^2 ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruences and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift, and comparison operations, namely bitwise and full addition, subtraction, left shift, and comparison, can be performed using strands of DNA.
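
    For reference, the classical (non-DNA) definition can be checked by brute force; the cost is exponential in the bit length of n, which is precisely the hardness the DNA-based algorithm targets.

    def is_quadratic_congruence(C, n):
        # Does M*M = C (mod n) have a solution? Brute force over residues:
        # fine for small n, exponential in the bit length of n.
        return any(pow(M, 2, n) == C % n for M in range(n))

    # Example: 2 is a quadratic congruence mod 7 (3*3 = 9 = 2 mod 7),
    # while 3 is not.
    assert is_quadratic_congruence(2, 7) and not is_quadratic_congruence(3, 7)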

  10. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations using a molecular computer. To achieve this, we propose three DNA-based algorithms, for a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that public-key cryptosystems are perhaps insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  11. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-01

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori framework (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp-Davis-Kress-based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artifacts.

  12. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm.

    PubMed

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-07

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori framework (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp-Davis-Kress-based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artifacts.

  13. A Fourier analysis for a fast simulation algorithm. [for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1988-01-01

    This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.

  14. Fast and robust ray casting algorithms for virtual X-ray imaging

    NASA Astrophysics Data System (ADS)

    Freud, N.; Duvauchelle, P.; Létang, J. M.; Babot, D.

    2006-07-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.

  15. Fast Apriori-based Graph Mining Algorithm and application to 3-dimensional Structure Analysis

    NASA Astrophysics Data System (ADS)

    Nishimura, Yoshio; Washio, Takashi; Yoshida, Tetsuya; Motoda, Hiroshi; Inokuchi, Akihiro; Okada, Takashi

    The Apriori-based Graph Mining (AGM) algorithm efficiently extracts all the subgraph patterns which frequently appear in graph-structured data. The algorithm can deal with general graph-structured data with multiple labels of vertices and edges, and is capable of analyzing the topological structure of graphs. In this paper, we propose a new method to analyze graph-structured data with 3-dimensional coordinates by AGM. In this method, the distance between each pair of vertices of a graph is calculated and added to the edge label so that AGM can handle 3-dimensional graph-structured data. One problem with our approach is that the number of edge labels increases, which in turn increases the computational time to extract subgraph patterns. To alleviate this problem, we also propose a faster AGM algorithm that adds an extra constraint to reduce the number of generated candidates when seeking frequent subgraphs. Chemical compounds with dopamine antagonist activity in the MDDR database were analyzed by AGM to characterize their 3-dimensional chemical structure and its correlation with physiological activity.

  16. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n^2) steps. So, Dunn's algorithm is in this sense optimal. For small n, this is acceptable. However, for big n (e.g., for big databases), it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, the transitive closure can be computed in time O((log2(n))^2).
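
    For concreteness, a sequential max-min transitive closure by repeated squaring is sketched below (O(n^3 log n) work, between Zadeh's O(n^4) and Dunn's O(n^2) bounds); the parallel O((log2 n)^2) result assumes one processor per matrix entry.

    import numpy as np

    def max_min_compose(R, S):
        # (R o S)[i, j] = max_k min(R[i, k], S[k, j]).
        # The broadcast builds an (n, n, n) array: O(n^3) time and memory.
        return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

    def transitive_closure(R):
        # Repeated squaring until the relation stops growing.
        T = R.copy()
        while True:
            T2 = np.maximum(T, max_min_compose(T, T))
            if np.array_equal(T2, T):
                return T
            T = T2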

  17. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial-axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning vector data, is carefully studied; the selected policy, which equalizes the algorithm complexity across partitions, is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we achieve excellent efficiency using MAS and the complexity-equalization partitioning policy.
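
    The exact Voronoi and medial-axis machinery is the paper's contribution; as a self-contained illustration of the underlying definition only, the brute-force grid search below approximates the LIC center as the interior point with maximum distance to the polygon boundary.

    import numpy as np

    def point_in_polygon(p, poly):
        # Ray casting (even-odd rule); poly is an (N, 2) vertex array.
        x, y = p
        inside = False
        for (x1, y1), (x2, y2) in zip(poly, np.roll(poly, -1, axis=0)):
            if (y1 > y) != (y2 > y):
                if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                    inside = not inside
        return inside

    def dist_to_boundary(p, poly):
        # Minimum distance from p to the polygon's edge segments.
        a = poly
        b = np.roll(poly, -1, axis=0)
        ab = b - a
        t = np.clip(np.einsum('ij,ij->i', p - a, ab)
                    / np.einsum('ij,ij->i', ab, ab), 0.0, 1.0)
        proj = a + t[:, None] * ab
        return np.min(np.linalg.norm(p - proj, axis=1))

    def deepest_point(poly, n_grid=64):
        # Interior grid point with maximum distance to the boundary:
        # a brute-force approximation of the LIC center.
        lo, hi = poly.min(axis=0), poly.max(axis=0)
        best, best_d = None, -1.0
        for x in np.linspace(lo[0], hi[0], n_grid):
            for y in np.linspace(lo[1], hi[1], n_grid):
                p = np.array([x, y])
                if point_in_polygon(p, poly):
                    d = dist_to_boundary(p, poly)
                    if d > best_d:
                        best, best_d = p, d
        return best, best_d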

  18. Validation of Supervised Automated Algorithm for Fast Quantitative Evaluation of Organ Motion on Magnetic Resonance Imaging

    SciTech Connect

    Prakash, Varuna; Stainsby, Jeffrey A.; Satkunasingham, Janakan; Craig, Tim; Catton, Charles; Chan, Philip; Dawson, Laura; Hensel, Jennifer; Jaffray, David; Milosevic, Michael; Nichol, Alan; Sussman, Marshall S.; Lockwood, Gina; Menard, Cynthia

    2008-07-15

    Purpose: To validate a correlation coefficient template-matching algorithm applied to the supervised automated quantification of abdominal-pelvic organ motion captured on time-resolved magnetic resonance imaging. Methods and Materials: Magnetic resonance images of 21 patients across four anatomic sites were analyzed. Representative anatomic points of interest were chosen as surrogates for organ motion. The point of interest displacements across each image frame relative to baseline were quantified manually and through the use of a template-matching software tool, termed 'Motiontrack.' Automated and manually acquired displacement measures, as well as the standard deviation of intrafraction motion, were compared for each image frame and for each patient. Results: Discrepancies between the automated and manual displacements of ≥2 mm were uncommon, ranging in frequency from 0% (liver) to 9.7% (prostate). The standard deviations of intrafraction motion measured with each method correlated highly (r = 0.99). Considerable interpatient variability in organ motion was demonstrated by a wide range of standard deviations in the liver (1.4-7.5 mm), uterus (1.1-8.4 mm), and prostate gland (0.8-2.7 mm). The automated algorithm performed successfully in all patients but one and substantially improved efficiency compared with manual quantification techniques (5 min vs. 60-90 min). Conclusion: Supervised automated quantification of organ motion captured on magnetic resonance imaging using a correlation coefficient template-matching algorithm was efficient, accurate, and may play an important role in off-line adaptive approaches to intrafraction motion management.

  19. Very fast algorithms for evaluating the stability of ML and Bayesian phylogenetic trees from sequence data.

    PubMed

    Waddell, Peter J; Kishino, Hirohisa; Ota, Rissa

    2002-01-01

    Evolutionary trees sit at the core of all realistic models describing a set of related sequences, including alignment, homology search, ancestral protein reconstruction and 2D/3D structural change. It is important to assess the stochastic error when estimating a tree, including with models using the most realistic likelihood-based optimizations, yet computation times may be many days or weeks. If so, the bootstrap is computationally prohibitive. Here we show that the extremely fast "resampling of estimated log likelihoods" or RELL method behaves well under more general circumstances than previously examined. RELL approximates the bootstrap proportions (BP) of trees better than some bootstrap methods that rely on fast heuristics to search the tree space. The BIC approximation of the Bayesian posterior probability (BPP) of trees is made more accurate by including an additional term related to the determinant of the information matrix (which may also be obtained as a product of gradient or score vectors). Such estimates are shown to be very close to MCMC chain values. Our analysis of mammalian mitochondrial amino acid sequences suggests that when model breakdown occurs, as it typically does for sequences separated by more than a few million years, the BPP values are far too peaked and the real fluctuations in the likelihood of the data are many times larger than expected. Accordingly, several ways to incorporate the bootstrap and other types of direct resampling with MCMC procedures are outlined. Genes evolve by a process in which some sites follow a tree close to, but not identical with, the species tree. Under such a likelihood model, BP and BPP estimates may still be reasonable estimates of the species tree. Since many of the methods studied are very fast computationally, there is no reason to ignore stochastic error, even with the slowest ML or likelihood-based methods.
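
    The RELL idea itself is compact enough to sketch: given per-site log-likelihoods for each candidate tree, each bootstrap replicate resamples sites and re-sums, never re-optimizing; the matrix name and shapes are assumptions of this sketch.

    import numpy as np

    def rell_proportions(site_loglik, n_boot=1000, seed=None):
        # site_loglik: (n_trees, n_sites) per-site log-likelihoods.
        # Each replicate resamples sites with replacement and awards a
        # win to the tree with the highest resampled log-likelihood.
        rng = np.random.default_rng(seed)
        n_trees, n_sites = site_loglik.shape
        wins = np.zeros(n_trees)
        for _ in range(n_boot):
            idx = rng.integers(0, n_sites, n_sites)
            wins[np.argmax(site_loglik[:, idx].sum(axis=1))] += 1
        return wins / n_boot   # RELL bootstrap proportions per tree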

  20. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, 0.204 mm average differences and 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise-ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase

  1. A fast algorithm to find optimal controls of multiantenna applicators in regional hyperthermia

    NASA Astrophysics Data System (ADS)

    Köhler, Torsten; Maass, Peter; Wust, Peter; Seebass, Martin

    2001-09-01

    The goal of regional hyperthermia is to heat deeply located tumours to temperatures above 42 °C while keeping the temperatures in normal tissues below tissue-dependent critical values. The aim of this paper is to describe and analyse objective functions which can be used for computing hyperthermia treatment plans in line with these criteria. All the functionals considered here can be optimized by efficient numerical methods. We started with the working hypothesis that maximizing the quotient of the integral absorbed power inside the tumour and a weighted energy norm outside the tumour leads to clinically useful power distributions which also yield favourable temperature distributions. The presented methods have been implemented and tested with real patient data from the Charité Berlin, Campus Virchow-Klinikum. The results obtained by these fast routines are comparable with those obtained by relatively expensive global optimization techniques. Thus the described methods are very promising for online optimization in a hybrid system for regional hyperthermia, where a fast response to MR-based information is important.
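
    For quadratic forms in the complex antenna amplitudes, maximizing the quotient of absorbed power inside the tumour over a weighted energy norm outside it is a generalized Rayleigh quotient, whose maximizer is the leading generalized eigenvector. A minimal sketch under that assumption, with hypothetical matrices A (power deposited in the tumour) and B (weighted energy outside):

```python
import numpy as np
from scipy.linalg import eigh

def optimal_amplitudes(A, B):
    """Maximize x^H A x / x^H B x (a generalized Rayleigh quotient).

    A : Hermitian PSD matrix, absorbed power inside the tumour.
    B : Hermitian PD matrix, weighted energy norm outside.
    Returns the complex antenna excitation vector achieving the maximum.
    """
    # eigh solves the generalized problem A x = lambda B x; the eigenvector
    # of the largest eigenvalue maximizes the quotient.
    w, V = eigh(A, B)
    return V[:, -1]

# Hypothetical 4-antenna example with random Hermitian matrices.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M @ M.conj().T                      # PSD "tumour power" matrix
N = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = N @ N.conj().T + 4 * np.eye(4)      # PD "outside energy" matrix
x = optimal_amplitudes(A, B)
print((x.conj() @ A @ x / (x.conj() @ B @ x)).real)  # the maximal quotient
```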

  2. PSimScan: Algorithm and Utility for Fast Protein Similarity Search

    PubMed Central

    Kaznadzey, Anna; Alexandrova, Natalia; Novichkov, Vladimir; Kaznadzey, Denis

    2013-01-01

    In the era of metagenomics and diagnostic sequencing, the importance of protein comparison methods of boosted performance cannot be overstated. Here we present PSimScan (Protein Similarity Scanner), a flexible open source protein similarity search tool which provides a significant gain in speed compared to BLASTP at the price of controlled sensitivity loss. The PSimScan algorithm introduces a number of novel performance optimization methods that can be further used by the community to improve the speed and lower the hardware requirements of bioinformatics software. The optimization starts at the lookup table construction; the initial lookup table–based hits are then passed through a pipeline of filtering and aggregation routines of increasing computational complexity. The first step in this pipeline is a novel algorithm that builds and selects 'similarity zones' aggregated from neighboring matches on small arrays of adjacent diagonals. PSimScan performs 5 to 100 times faster than the standard NCBI BLASTP, depending on the chosen parameters, and runs on commodity hardware. Its sensitivity and selectivity at the slowest settings are comparable to NCBI BLASTP's and decrease with the increase of speed, yet stay at levels reasonable for many tasks. PSimScan is most advantageous when used on large collections of query sequences. Comparing the entire proteome of Streptococcus pneumoniae (2,042 proteins) to the NCBI non-redundant protein database of 16,971,855 records takes 6.5 hours on a moderately powerful PC, while the same task with NCBI BLASTP takes over 66 hours. We describe the innovations in the PSimScan algorithm in considerable detail to encourage bioinformaticians to improve on the tool and to use the innovations in their own software development. PMID:23505522
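
    The first optimization stage, a lookup table of short words, is easy to illustrate. A minimal sketch (not PSimScan's actual data layout) that indexes all k-mers of a database sequence and streams query k-mers through it to produce diagonal hits, the raw material that 'similarity zone' aggregation would then consume:

```python
from collections import defaultdict

def build_lookup(seq, k=3):
    """Index every k-mer of `seq` by its positions."""
    table = defaultdict(list)
    for i in range(len(seq) - k + 1):
        table[seq[i:i + k]].append(i)
    return table

def diagonal_hits(query, table, k=3):
    """Yield (diagonal, query_pos) hits; nearby hits on adjacent
    diagonals would then be aggregated into similarity zones."""
    for qpos in range(len(query) - k + 1):
        for dpos in table.get(query[qpos:qpos + k], ()):
            yield dpos - qpos, qpos

# Toy protein fragments (hypothetical sequences, for illustration only).
db = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
query = "AKQRQISFVK"
table = build_lookup(db)
print(sorted(set(d for d, _ in diagonal_hits(query, table))))
```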

  3. Short communication: imputing genotypes using PedImpute fast algorithm combining pedigree and population information.

    PubMed

    Nicolazzi, E L; Biffani, S; Jansen, G

    2013-04-01

    Routine genomic evaluations frequently include a preliminary imputation step, requiring high accuracy and reduced computing time. A new algorithm, PedImpute (http://dekoppel.eu/pedimpute/), was developed and compared with findhap (http://aipl.arsusda.gov/software/findhap/) and BEAGLE (http://faculty.washington.edu/browning/beagle/beagle.html), using 19,904 Holstein genotypes from a 4-country international collaboration (United States, Canada, UK, and Italy). Different scenarios were evaluated on a sample subset that included only single nucleotide polymorphisms from the Bovine low-density (LD) Illumina BeadChip (Illumina Inc., San Diego, CA). Comparative criteria were computing time, percentage of missing alleles, percentage of wrongly imputed alleles, and the allelic squared correlation. Imputation accuracy on ungenotyped animals was also analyzed. The algorithm PedImpute was slightly more accurate and faster than findhap and BEAGLE when sire, dam, and maternal grandsire were genotyped at high density. On the other hand, BEAGLE performed better than both PedImpute and findhap for animals with at least one close relative not genotyped or genotyped at low density. However, the computing time and resources required by BEAGLE were incompatible with routine genomic evaluations in Italy. The error rate and allelic squared correlation attained by PedImpute ranged from 0.2 to 1.1% and from 96.6 to 99.3%, respectively. When complete genomic information on sire, dam, and maternal grandsire is available, as expected to be the case in the near future in (at least) dairy cattle, and considering the accuracies obtained and computation time required, PedImpute represents a valuable choice for routine evaluations among the algorithms tested.

  4. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  5. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    SciTech Connect

    Chartrand, Rick

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
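
    One common way to realize nonconvex (p < 1) sparsity in an iterative-thresholding loop is to replace soft thresholding by a p-shrinkage operator; the toy 1D partial-Fourier sketch below is a generic illustration of that idea, not the specific algorithm of the paper:

```python
import numpy as np

def p_shrink(v, lam, p):
    """p-shrinkage operator; reduces to soft thresholding at p = 1."""
    mag = np.abs(v)
    with np.errstate(divide="ignore", invalid="ignore"):
        factor = np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0.0) / mag
    return np.where(mag > 0, v * factor, 0.0)

# Toy problem: 8-sparse signal observed through 48 random Fourier samples.
rng = np.random.default_rng(0)
n = 256
supp = rng.choice(n, 8, replace=False)
x_true = np.zeros(n)
x_true[supp] = rng.uniform(1.0, 2.0, 8) * rng.choice([-1, 1], 8)
mask = np.zeros(n, bool); mask[rng.choice(n, 48, replace=False)] = True
y = np.fft.fft(x_true, norm="ortho")[mask]          # few "k-space" samples

x = np.zeros(n, complex)
for _ in range(300):                                 # thresholded Landweber loop
    r = np.zeros(n, complex)
    r[mask] = y - np.fft.fft(x, norm="ortho")[mask]  # residual in k-space
    x = p_shrink(x + np.fft.ifft(r, norm="ortho"), lam=0.05, p=0.5)
print(np.round(x.real[supp], 2), np.round(x_true[supp], 2))  # should agree closely
```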

  6. A fast smoothing algorithm for post-processing of surface reflectance spectra retrieved from airborne imaging spectrometer data.

    PubMed

    Gao, Bo-Cai; Liu, Ming

    2013-10-14

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from the analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented.

  7. A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

    PubMed Central

    Gao, Bo-Cai; Liu, Ming

    2013-01-01

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from the analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022

  8. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    PubMed

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it, but also to the ones after it, and all of the sampling points related to one point appear in series, or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand for accurate online inspection.

  9. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n / log m) time on a 2-D PARBS of size mn × n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 × n.

  10. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    PubMed Central

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it, but also to the ones after it, and all of the sampling points related to one point appear in series, or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand for accurate online inspection. PMID:25192314

  11. Analysis of the propagation dynamics and Gouy phase of Airy beams using the fast Fresnel transform algorithm.

    PubMed

    Cottrell, Don M; Davis, Jeffrey A; Berg, Cassidy A; Freeman, Christopher Li

    2014-04-01

    There is great interest in Airy beams because they appear to propagate in a curved path. These beams are usually generated by inserting a cubic phase mask onto the input plane of a Fourier transform system. Here, we utilize a fast Fresnel diffraction algorithm to easily derive both the propagation dynamics and the Gouy phase shift for these beams. The trajectories of these beams can be modified by adding additional linear and quadratic phase terms onto the cubic phase mask. Finally, we have rewritten the equations regarding the propagating Airy beams completely in laboratory coordinates for use by experimentalists. Experimental results are included. We expect that these results will be of great importance in applications of Airy beams.

  12. A Fast Inspection of Tool Electrode and Drilling Depth in EDM Drilling by Detection Line Algorithm

    PubMed Central

    Huang, Kuo-Yi

    2008-01-01

    The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between the base and normal areas, an ND-line between the normal and drilling areas (carbon-accumulating area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in the estimation of the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial application in EDM drilling measurement. PMID:27873790

  13. QuickProbs--a fast multiple sequence alignment algorithm designed for graphics processors.

    PubMed

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In this paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrix calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally, we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors.

  14. A Fast Inspection of Tool Electrode and Drilling Depth in EDM Drilling by Detection Line Algorithm.

    PubMed

    Huang, Kuo-Yi

    2008-08-21

    The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between the base and normal areas, an ND-line between the normal and drilling areas (carbon-accumulating area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in the estimation of the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial application in EDM drilling measurement.

  15. Lamb waves based fast subwavelength imaging using a DORT-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    He, Jiaze; Yuan, Fuh-Gwo

    2016-02-01

    A Lamb wave-based, subwavelength imaging algorithm is developed for damage imaging in large-scale, plate-like structures based on a decomposition of the time-reversal operator (DORT) method combined with the multiple signal classification (MUSIC) algorithm in the space-frequency domain. In this study, a rapid, hybrid non-contact scanning system was proposed to image an aluminum plate using a piezoelectric linear array for actuation and a laser Doppler vibrometer (LDV) line-scan for sensing. The physics of wave propagation, reflection, and scattering that underlies the response matrix in the DORT method is mathematically formulated in the context of guided waves. The singular value decomposition (SVD) and MUSIC-based imaging condition enable quantifying the damage severity by a `reflectivity' parameter and super-resolution imaging. With the flexibility of this scanning system, a considerably large area can be imaged using lower frequency Lamb waves with limited line-scans. The experimental results showed that the hardware system with a signal processing tool such as the DORT-MUSIC (TR-MUSIC) imaging technique can provide rapid, highly accurate imaging results as well as damage quantification with unknown material properties.
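
    The MUSIC imaging condition at the heart of such methods is compact: from the SVD of the array response matrix, the noise subspace is orthogonal to the steering vectors of true scatterers, so the reciprocal of the projection onto it peaks at scatterer locations. A minimal narrowband far-field sketch on a hypothetical uniform linear array (not the guided-wave response matrix formulation of the paper):

```python
import numpy as np

def music_spectrum(K, steering, n_sources):
    """K: (n_sensors, n_sensors) response/covariance matrix.
    steering: function mapping a scan parameter to an (n_sensors,) vector.
    Returns a callable evaluating the MUSIC pseudospectrum."""
    _, _, Vh = np.linalg.svd(K)
    En = Vh[n_sources:].conj().T            # noise-subspace basis
    def pseudo(p):
        a = steering(p)
        proj = En.conj().T @ a              # projection onto noise subspace
        return 1.0 / np.real(proj.conj() @ proj)
    return pseudo

# 8-element half-wavelength array, one scatterer at 20 degrees.
m, d = 8, 0.5
a = lambda th: np.exp(2j * np.pi * d * np.arange(m) * np.sin(np.deg2rad(th)))
K = np.outer(a(20.0), a(20.0).conj()) + 0.01 * np.eye(m)
pseudo = music_spectrum(K, a, n_sources=1)
grid = np.arange(-90, 91, 1.0)
print(grid[np.argmax([pseudo(th) for th in grid])])   # peaks at ~20 degrees
```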

  16. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root mean square error and computational overhead than those of the existing methods. Computer simulations for this real-time control application indicate that a parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot, even at high sampling rates. Furthermore, in the presence of measurement noise in the system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in the plant response.

  17. Volcanic Particle Aggregation: A Fast Algorithm for the Smoluchowski Coagulation Equation

    NASA Astrophysics Data System (ADS)

    Rossi, E.; Bagheri, G.; Bonadonna, C.

    2014-12-01

    Particle aggregation is a key process that significantly affects the dispersal and sedimentation of volcanic ash, with obvious implications for the associated hazards. Most theoretical studies of particle aggregation have been based on the Smoluchowski Coagulation Equation (SCE), which describes the expected time evolution of the total grain-size distribution under the hypothesis that particles can collide and stick together following specific mathematical relations (kernels). Nonetheless, the practical application of the SCE to real eruptive scenarios is made extremely difficult, if not impossible, by the large number of Ordinary Differential Equations (ODEs) which have to be solved to resolve the typical sizes of volcanic ash (1 micron to 1 mm). We propose an algorithm to approximate the discrete solutions of the SCE, which can describe the time evolution of the total grain-size distribution of the erupted material with increased computational efficiency. This algorithm has been applied to observed volcanic eruptions (i.e., Eyjafjallajokull 2010, Sakurajima 2013 and Mt. Saint Helens 1980) to see if the commonly used kernels can explain field data and to study how aggregation processes can modify tephra dispersal on the ground. Different scenarios of sticking efficiencies and aggregate porosity have been used to test the sensitivity of the SCE to these parameters. Constraints on these parameters come from field observations and laboratory experiments.
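
    For reference, the discrete form of the SCE that such algorithms approximate is the following, where n_k is the number density of particles in size class k and K_ij is the collision kernel:

```latex
\frac{\mathrm{d}n_k}{\mathrm{d}t}
  = \frac{1}{2}\sum_{i+j=k} K_{ij}\, n_i n_j
  \;-\; n_k \sum_{j \ge 1} K_{kj}\, n_j
```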

  18. HDS: a fast and hierarchical diamond search algorithm in video motion estimation

    NASA Astrophysics Data System (ADS)

    Gong, Sheng-rong; Zhou, Xiang

    2005-10-01

    With the development of the Internet and communication technology, video coding has become increasingly important. When the video transmission rate is high, the correlation between adjacent video frames is high as well. The cost of coding the difference between frames is lower than that of coding the frames directly. Therefore, when video streams are coded, motion estimation is usually used to reduce the temporal correlation between frames, and motion estimation plays an important role in video coding. Diamond Search is currently accepted as one of the most efficient fast search algorithms. In this paper, a new motion estimation method based on an analysis of Diamond Search is proposed, in which video frames fall into two categories: violent-motion frames and moderate-motion frames. Based on the new motion estimation method, a quick hierarchical diamond search algorithm is proposed for the majority of moderate-motion frames. Experimental results show that the proposed algorithm is much faster than Diamond Search and obtains the same image quality.
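
    The baseline Diamond Search that HDS builds on can be sketched compactly: a large diamond pattern is recentered on the best SAD candidate until the center wins, then one small-diamond step refines the motion vector. A minimal sketch on a smooth synthetic frame (illustrative, not the proposed hierarchical variant):

```python
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(ref, cur, y, x, dy, dx, b):
    """Sum of absolute differences; inf if the candidate leaves the frame."""
    h, w = ref.shape
    if not (0 <= y + dy <= h - b and 0 <= x + dx <= w - b):
        return np.inf
    return np.abs(ref[y+dy:y+dy+b, x+dx:x+dx+b] - cur[y:y+b, x:x+b]).sum()

def diamond_search(ref, cur, y, x, b=16):
    """Motion vector of the b x b block at (y, x) in `cur`, matched in `ref`."""
    vy = vx = 0
    while True:                          # large diamond until the center wins
        costs = [sad(ref, cur, y, x, vy + dy, vx + dx, b) for dy, dx in LDSP]
        k = int(np.argmin(costs))
        if k == 0:
            break
        vy += LDSP[k][0]; vx += LDSP[k][1]
    costs = [sad(ref, cur, y, x, vy + dy, vx + dx, b) for dy, dx in SDSP]
    k = int(np.argmin(costs))            # one small-diamond refinement
    return vy + SDSP[k][0], vx + SDSP[k][1]

# Toy check on a smooth synthetic frame whose content moved by (2, -3).
yy, xx = np.mgrid[0:64, 0:64]
ref = np.sin(yy / 5.0) * np.cos(xx / 7.0)
cur = np.roll(ref, (-2, 3), axis=(0, 1))
print(diamond_search(ref, cur, 24, 24))   # expected: (2, -3)
```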

  19. Feasibility of a fast inverse dose optimization algorithm for IMRT via matrix inversion without negative beamlet intensities

    SciTech Connect

    Goldman, S.P.; Chen, J.Z.; Battista, J.J.

    2005-09-15

    A fast optimization algorithm is very important for the inverse planning of intensity modulated radiation therapy (IMRT), and for the adaptive radiotherapy of the future. Conventional numerical search algorithms, such as the conjugate gradient search with positive beam weight constraints, generally require numerous iterations and may produce suboptimal dose results due to trapping in local minima. A direct solution of the inverse problem using conventional quadratic objective functions without positivity constraints is more efficient but will result in unrealistic negative beam weights. We present here a direct solution of the inverse problem that does not yield unphysical negative beam weights. The objective function for the optimization of a large number of beamlets is reformulated such that the optimization problem is reduced to a linear set of equations. The optimal set of intensities is found through a matrix inversion, and negative beamlet intensities are avoided without the need for externally imposed ad hoc constraints. The method has been demonstrated with a test phantom and a few clinical radiotherapy cases, using primary dose calculations. We achieve highly conformal primary dose distributions with very rapid optimization times. Typical optimization times for a single two-dimensional anatomical slice (head and neck), using a LAPACK matrix inversion routine on a single-processor desktop computer, are: 0.03 s for 500 beamlets; 0.28 s for 1000 beamlets; 3.1 s for 2000 beamlets; and 12 s for 3000 beamlets. Clinical implementation will require the additional time of a one-time precomputation of scattered radiation for all beamlets, but this will not impact the optimization speed. In conclusion, the new method provides a fast and robust technique to find a global minimum that yields excellent results for the inverse planning of IMRT.
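
    The computational pattern is a single dense linear solve rather than an iterative search. A generic sketch of a quadratic-objective beamlet optimization reduced to normal equations and solved directly; the paper's specific reformulation that avoids negative intensities without constraints is not reproduced here, and the dose matrix D and prescription d below are hypothetical:

```python
import numpy as np

def solve_beamlets(D, d, reg=1e-3):
    """Minimize ||D w - d||^2 + reg * ||w||^2 over beamlet weights w.

    The optimum satisfies the linear system (D^T D + reg I) w = D^T d,
    solved directly (LAPACK under the hood). Note: unlike the paper's
    reformulation, this plain quadratic solve can yield negative weights.
    """
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + reg * np.eye(n), D.T @ d)

# Hypothetical example: 2000 voxels, 500 beamlets.
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(2000, 500)))   # nonnegative dose-deposition matrix
d = rng.uniform(1.0, 2.0, size=2000)       # prescribed voxel doses
w = solve_beamlets(D, d)
print(w.shape, float(np.linalg.norm(D @ w - d) / np.linalg.norm(d)))
```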

  20. Feasibility of a fast inverse dose optimization algorithm for IMRT via matrix inversion without negative beamlet intensities.

    PubMed

    Goldman, S P; Chen, J Z; Battista, J J

    2005-09-01

    A fast optimization algorithm is very important for the inverse planning of intensity modulated radiation therapy (IMRT), and for the adaptive radiotherapy of the future. Conventional numerical search algorithms, such as the conjugate gradient search with positive beam weight constraints, generally require numerous iterations and may produce suboptimal dose results due to trapping in local minima. A direct solution of the inverse problem using conventional quadratic objective functions without positivity constraints is more efficient but will result in unrealistic negative beam weights. We present here a direct solution of the inverse problem that does not yield unphysical negative beam weights. The objective function for the optimization of a large number of beamlets is reformulated such that the optimization problem is reduced to a linear set of equations. The optimal set of intensities is found through a matrix inversion, and negative beamlet intensities are avoided without the need for externally imposed ad hoc constraints. The method has been demonstrated with a test phantom and a few clinical radiotherapy cases, using primary dose calculations. We achieve highly conformal primary dose distributions with very rapid optimization times. Typical optimization times for a single two-dimensional anatomical slice (head and neck), using a LAPACK matrix inversion routine on a single-processor desktop computer, are: 0.03 s for 500 beamlets; 0.28 s for 1000 beamlets; 3.1 s for 2000 beamlets; and 12 s for 3000 beamlets. Clinical implementation will require the additional time of a one-time precomputation of scattered radiation for all beamlets, but this will not impact the optimization speed. In conclusion, the new method provides a fast and robust technique to find a global minimum that yields excellent results for the inverse planning of IMRT.

  1. Energy spectra unfolding of fast neutron sources using the group method of data handling and decision tree algorithms

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen

    2017-04-01

    Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. The obtained information is useful in many areas like nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed on the basis of the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed computational codes based on the GMDH and DT algorithms use data for the training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the neutron pulse height distributions simulated by MCNPX-ESUT for each energy spectrum are used as the output and input data, respectively. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has high accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for the fast neutron sources used are in excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than that obtained from the DT. The results obtained in the present study compare well with those of a previously published approach based on the logsig and tansig transfer functions.

  2. A fast algorithm for treating dielectric discontinuities in charged spherical colloids.

    PubMed

    Xu, Zhenli

    2012-03-01

    Electrostatic interactions between multiple colloids in ionic fluids are attracting much attention in studies of biological and soft matter systems. The evaluation of the polarization surface charges due to spherical dielectric discontinuities poses a challenging problem for highly efficient computer simulations. In this paper, we propose a new method for the fast calculation of the electric field of spaced spheres using the multiple reflection expansion. The method uses a technique of recursive reflections among the spherical interfaces based on a formula for the multiple image representation, resulting in a simple, accurate, closed-form expression for the surface polarization charges. Numerical calculations of the electric potential energies of charged spheres demonstrate that the method is highly accurate with a small number of reflections, and thus attractive for use in practical simulations of related problems such as colloid suspensions and macromolecular interactions.

  3. Efficient fast heuristic algorithms for minimum error correction haplotyping from SNP fragments.

    PubMed

    Anaraki, Maryam Pourkamali; Sadeghi, Mehdi

    2014-01-01

    The availability of the complete human genome is a crucial factor for genetic studies exploring possible associations between the genome and complex diseases. The haplotype, as a set of single nucleotide polymorphisms (SNPs) on a single chromosome, is believed to contain promising data for disease association studies, detecting natural positive selection and recombination hotspots. Various computational methods for haplotype reconstruction from aligned fragments of SNPs have already been proposed. This study presents a novel approach to obtain paternal and maternal haplotypes from the SNP fragments under the minimum error correction (MEC) model. Reconstructing haplotypes under the MEC model is an NP-hard problem. Therefore, our proposed methods employ two fast and accurate clustering techniques as the core of their procedure to efficiently solve this hard problem. The assessment of our approaches against conventional methods on two real benchmark datasets, ACE and DALY, demonstrates their efficiency and accuracy.
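
    A minimal version of the clustering idea: alternate assigning each SNP fragment to the nearer of two haplotype estimates (Hamming distance over called alleles) and recomputing each haplotype by majority vote, which greedily reduces the MEC score. A sketch under the assumption of 0/1 allele coding with -1 marking missing calls (not the paper's specific clustering techniques):

```python
import numpy as np

def mec_two_cluster(frags, n_iter=20, rng=None):
    """frags: (n_fragments, n_snps) array with entries 0, 1, or -1 (missing).
    Returns the two estimated haplotypes (0/1 arrays)."""
    rng = np.random.default_rng(rng)
    n, m = frags.shape
    h = rng.integers(0, 2, size=(2, m))          # random initial haplotypes
    obs = frags >= 0
    for _ in range(n_iter):
        # distance of every fragment to each haplotype, ignoring missing sites
        dist = np.array([((frags != hk) & obs).sum(axis=1) for hk in h])
        assign = dist.argmin(axis=0)
        for k in (0, 1):                          # majority vote per SNP column
            sub, o = frags[assign == k], obs[assign == k]
            ones = ((sub == 1) & o).sum(axis=0)
            zeros = ((sub == 0) & o).sum(axis=0)
            h[k] = np.where(ones >= zeros, 1, 0)
    return h

# Toy data: two complementary haplotypes observed through gappy fragments.
rng = np.random.default_rng(3)
true = np.array([[0, 1, 0, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 0, 1, 0]])
frags = true[rng.integers(0, 2, 40)].copy()
frags[rng.random(frags.shape) < 0.3] = -1        # 30% missing calls
print(mec_two_cluster(frags))                    # recovers `true` up to swap
```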

  4. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Rockway, J D; Champagne, N J; Sharpe, R M; Fasenfest, B

    2004-01-14

    Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Wave Expansion (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.

  5. A Fast All-sky Radiation Model for Solar applications (FARMS): Algorithm and performance evaluation

    SciTech Connect

    Xie, Yu; Sengupta, Manajit; Dudhia, Jimy

    2016-10-01

    Radiative transfer (RT) models simulating broadband solar radiation have been widely used by atmospheric scientists to model solar resources for various energy applications such as operational forecasting. Due to the complexity of solving the RT equation, the computation under cloudy conditions can be extremely time-consuming, even though many approximations (e.g., the two-stream approach and the delta-M truncation scheme) have been utilized. Thus, a more efficient RT model is crucial for model developers as a new option for approximating solar radiation at the land surface with minimal loss of accuracy. In this study, we developed a Fast All-sky Radiation Model for Solar applications (FARMS) using the simplified clear-sky RT model, REST2, and cloud transmittances and reflectances simulated with the Rapid Radiative Transfer Model (RRTM) using 16-stream Discrete Ordinates Radiative Transfer (DISORT). Lookup tables (LUTs) of the simulated cloud transmittances and reflectances are created by varying cloud optical thicknesses, cloud particle sizes, and solar zenith angles. Equations with optimized parameters are fitted to the cloud transmittances and reflectances to develop the model. The all-sky solar irradiance at the land surface can then be computed rapidly by combining REST2 with the cloud transmittances and reflectances. This new RT model is more than 1,000 times faster than those currently utilized in solar resource assessment and forecasting because it does not explicitly solve the RT equation for each individual cloud condition. Our results indicate that the accuracy of the fast radiative transfer model is comparable to or better than the two-stream approximation in terms of computing cloud transmittance and solar radiation.

  6. Real-time MRI-guided hyperthermia treatment using a fast adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Stakhursky, Vadim L.; Arabe, Omar; Cheng, Kung-Shan; MacFall, James; Maccarini, Paolo; Craciunescu, Oana; Dewhirst, Mark; Stauffer, Paul; Das, Shiva K.

    2009-04-01

    Magnetic resonance (MR) imaging is promising for monitoring and guiding hyperthermia treatments. The goal of this work is to investigate the stability of an algorithm for online MR thermal image guided steering and focusing of heat into the target volume. The control platform comprised a four-antenna mini-annular phased array (MAPA) applicator operating at 140 MHz (used for extremity sarcoma heating) and a GE Signa Excite 1.5 T MR system, both of which were driven by a control workstation. MR proton resonance frequency shift images acquired during heating were used to iteratively update a model of the heated object, starting with an initial finite element computed model estimate. At each iterative step, the current model was used to compute a focusing vector, which was then used to drive the next iteration, until convergence. Perturbation of the driving vector was used to prevent the process from stalling away from the desired focus. Experimental validation of the performance of the automatic treatment platform was conducted with two cylindrical phantom studies, one homogeneous and one muscle equivalent with tumor tissue (conductivity 50% higher) inserted, with initial focal spots being intentionally rotated 90° and 50° away from the desired focus, mimicking initial setup errors in applicator rotation. The integrated MR-HT treatment platform steered the focus of heating into the desired target volume in two quite different phantom tissue loads which model expected patient treatment configurations. For the homogeneous phantom test where the target was intentionally offset by 90° rotation of the applicator, convergence to the proper phase focus in the target occurred after 16 iterations of the algorithm. For the more realistic test with a muscle equivalent phantom with tumor inserted with 50° applicator displacement, only two iterations were necessary to steer the focus into the tumor target. Convergence improved the heating efficacy (the ratio of integral

  7. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS), flying on the NOAA-15 to NOAA-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) dichotomous statistics are used to evaluate the capability of the method to identify rain and no-rain clouds; 2) accuracy statistics are applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL

  8. Fast algorithm for a three-dimensional synthetic model of intermittent turbulence

    NASA Astrophysics Data System (ADS)

    Malara, Francesco; Di Mare, Francesca; Nigro, Giuseppina; Sorriso-Valvo, Luca

    2016-11-01

    Synthetic turbulence models are useful tools that provide realistic representations of turbulence, necessary to test theoretical results, to serve as background fields in some numerical simulations, and to test analysis tools. Models of one-dimensional (1D) and 3D synthetic turbulence developed previously still required large computational resources. A "wavelet-based" model of synthetic turbulence, able to produce a field with tunable spectral law, intermittency, and anisotropy, is presented here. The rapid algorithm introduced, based on the classic p-model of intermittent turbulence, allows us to reach a broad spectral range using a modest computational effort. The model has been tested against the standard diagnostics for intermittent turbulence, i.e., spectral analysis, the scale-dependent statistics of the field increments, and multifractal analysis, all showing an excellent response.
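
    The p-model cascade underlying the algorithm is itself only a few lines: starting from a uniform measure, each dyadic interval repeatedly hands fractions p and 1 - p of its energy to its two halves in random order, producing an intermittent (multifractal) field. A 1D sketch of that classic cascade (not the paper's full wavelet-based 3D construction):

```python
import numpy as np

def p_model(n_levels, p=0.7, rng=None):
    """Generate one realization of the 1D p-model on 2**n_levels cells."""
    rng = np.random.default_rng(rng)
    field = np.ones(1)
    for _ in range(n_levels):
        # each parent splits its energy into fractions p and 1 - p,
        # assigned to the left/right child in random order
        left_gets_p = rng.random(field.size) < 0.5
        left = np.where(left_gets_p, p, 1 - p) * field
        right = field - left
        field = np.empty(2 * field.size)
        field[0::2], field[1::2] = left, right
    return field * field.size            # normalize to unit mean

eps = p_model(14, p=0.7, rng=0)
print(eps.mean(), eps.max())             # mean ~1, strongly peaked maxima
```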

  9. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays

    PubMed Central

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  10. Theory and implementation of a fast algorithm linear equalizer. [for multiplication-free data detection

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1981-01-01

    The theory and implementation of a multiplication-free linear mean-square error criterion equalizer for data transmission are considered. For many real-time signal processing situations, a large number of multiplications is objectionable. The linear estimation problem on a binary computer is considered, where the estimation parameters are constrained to be powers of two so that all multiplications are replaced by shifts. The optimal solution is obtained from an integer-programming-like problem, except that the allowable discrete points are non-integers. The branch-and-bound algorithm is used to obtain the coefficients of the equalizer's tapped delay line (TDL). Specific experimental performance results are given for an equalizer implemented with a 12-bit A/D device and an 8080 microprocessor.
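
    The payoff of constraining each tap to a power of two is that the filter's multiply-accumulate reduces to shift-and-add. A sketch in integer arithmetic, with tap values of the form ±2^-k represented by their signs and shift counts; this illustrates the multiplication-free filtering only, not the branch-and-bound design step:

```python
def po2_fir(samples, taps):
    """FIR filter whose coefficients are +/- 2**-k.

    taps: list of (sign, k) pairs; coefficient i equals sign * 2**-k.
    samples: integer samples (e.g., 12-bit A/D codes).
    Each multiply becomes an arithmetic right shift.
    """
    out = []
    for n in range(len(samples)):
        acc = 0
        for i, (sign, k) in enumerate(taps):
            if n - i >= 0:
                acc += sign * (samples[n - i] >> k)   # shift replaces multiply
        out.append(acc)
    return out

# Hypothetical equalizer taps 1/2, -1/8, 1/16 (shift counts 1, 3, 4).
taps = [(+1, 1), (-1, 3), (+1, 4)]
x = [1024, -512, 2048, 0, 256]
print(po2_fir(x, taps))
```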

  11. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which were introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.

  12. Fast algorithms for visualizing fluid motion in steady flow on unstructured grids

    NASA Technical Reports Server (NTRS)

    Ueng, S. K.; Sikorski, K.; Ma, Kwan-Liu

    1995-01-01

    The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flowfield, such as local rotation and expansion, can be shown by drawing streamlines in the form of ribbons or tubes. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating the angular rotation rate and radius to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations perform two to four times better than previous numerical methods. As a large number of traces are calculated, the improved performance can be significant.
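
    The basic integration step is standard: advance a particle through the velocity field with a Runge-Kutta scheme. A minimal classic fourth-order tracer over an analytically defined 2D field (the paper's specialized scheme additionally exploits the unstructured-grid cell geometry, which is not shown here):

```python
import numpy as np

def trace_streamline(v, p0, h=0.05, n_steps=200):
    """Classic RK4 streamline integration of dp/dt = v(p)."""
    p = np.asarray(p0, float)
    path = [p.copy()]
    for _ in range(n_steps):
        k1 = v(p)
        k2 = v(p + 0.5 * h * k1)
        k3 = v(p + 0.5 * h * k2)
        k4 = v(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

# Circular test field: streamlines should stay on circles around the origin.
v = lambda p: np.array([-p[1], p[0]])
path = trace_streamline(v, (1.0, 0.0))
print(np.ptp(np.hypot(path[:, 0], path[:, 1])))   # radius drift, ~1e-7
```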

  13. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.

  14. A Fast and Scalable Kymograph Alignment Algorithm for Nanochannel-Based Optical DNA Mappings

    PubMed Central

    Noble, Charleston; Nilsson, Adam N.; Freitag, Camilla; Beech, Jason P.; Tegenfeldt, Jonas O.; Ambjörnsson, Tobias

    2015-01-01

    Optical mapping by direct visualization of individual DNA molecules, stretched in nanochannels with sequence-specific fluorescent labeling, represents a promising tool for disease diagnostics and genomics. An important challenge for this technique is thermal motion of the DNA as it undergoes imaging; this blurs fluorescent patterns along the DNA and results in information loss. Correcting for this effect (a process referred to as kymograph alignment) is a common preprocessing step in nanochannel-based optical mapping workflows, and we present here a highly efficient algorithm to accomplish this via pattern recognition. We compare our method with the only previous approach, and we find that our method is orders of magnitude faster while producing data of similar quality. We demonstrate proof of principle of our approach on experimental data consisting of melt-mapped bacteriophage DNA. PMID:25875920
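
    The essence of kymograph alignment can be illustrated with a much simpler baseline than the paper's pattern-recognition method: shift each time frame to maximize its cross-correlation with a reference profile. A sketch assuming a 2D kymograph array (time by position):

```python
import numpy as np

def align_kymograph(kymo, max_shift=20):
    """Shift each row (time frame) to best match the first row.
    kymo: (n_frames, n_pixels) intensity array."""
    ref = kymo[0] - kymo[0].mean()
    aligned = np.empty_like(kymo)
    shifts = np.arange(-max_shift, max_shift + 1)
    for t, row in enumerate(kymo):
        r = row - row.mean()
        scores = [np.dot(np.roll(r, s), ref) for s in shifts]   # cross-correlation
        aligned[t] = np.roll(row, shifts[int(np.argmax(scores))])
    return aligned

# Toy kymograph: a fixed barcode pattern jittered by "thermal motion".
rng = np.random.default_rng(0)
pattern = (rng.random(200) < 0.1).astype(float)
kymo = np.array([np.roll(pattern, rng.integers(-5, 6)) for _ in range(50)])
aligned = align_kymograph(kymo)
print(np.abs(aligned - aligned[0]).sum())   # ~0 after alignment
```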

  15. Fast synchronization algorithms of burst-mode 16 QAM receiver for video-on-demand applications

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Codenie, Jan; Qiu, Xing-Zhi; Everaert, Alain; Vandewege, Jan; De Meyer, Karel; Trog, Willy; De Vleeschouwer, A.; Marin, W.

    1997-10-01

    This paper describes a novel TDMA/FDMA combined 16 QAM receiver architecture developed for video-on-demand applications. A burst-operated rapid synchronization scheme is proposed which employs an efficient training preamble for the overlapped operation of automatic gain control, carrier phase acquisition and symbol timing alignment. All the dedicated synchronization algorithms are digitally implemented, using field programmable gate arrays (FPGAs), for a data rate of 10.8 Mbit/s. Several analytic relationships for control accuracy, acquisition time and signal-to-noise ratio (S/N) are derived. Experimental results demonstrate that the proposed method significantly decreases the required preamble length to 23 symbols, together with a dynamic range of 11 dB and a sensitivity of −56 dBm for a bit error rate (BER) of 5 × 10^-9. The BER performance with frequency offset and input power variation is also investigated.

  16. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation

    PubMed Central

    Poznyakovskiy, Anton A.; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  17. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

  18. ICRPfinder: a fast pattern design algorithm for coding sequences and its application in finding potential restriction enzyme recognition sites

    PubMed Central

    Li, Chao; Li, Yuhua; Zhang, Xiangmin; Stafford, Phillip; Dinu, Valentin

    2009-01-01

    Background Restriction enzymes can produce easily definable segments from DNA sequences by using a variety of cut patterns. There are, however, no software tools that can aid in gene building -- that is, modifying wild-type DNA sequences to express the same wild-type amino acid sequences but with enhanced codons, specific cut sites, unique post-translational modifications, and other engineered-in components for recombinant applications. A fast DNA pattern design algorithm, ICRPfinder, is provided in this paper and applied to find or create potential recognition sites in target coding sequences. Results ICRPfinder is applied to find or create restriction enzyme recognition sites by introducing silent mutations. The algorithm is shown capable of mapping existing cut-sites but importantly it also can generate specified new unique cut-sites within a specified region that are guaranteed not to be present elsewhere in the DNA sequence. Conclusion ICRPfinder is a powerful tool for finding or creating specific DNA patterns in a given target coding sequence. ICRPfinder finds or creates patterns, which can include restriction enzyme recognition sites, without changing the translated protein sequence. ICRPfinder is a browser-based JavaScript application and it can run on any platform, in on-line or off-line mode. PMID:19747395

  19. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    PubMed Central

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  20. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, there is a need for small, high-accuracy satellite attitude determination systems, because the star trackers widely used on large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly accurate and highly reliable sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels.

  2. Optimal design of groundwater remediation systems using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Q.; Wu, J.; Qian, J.

    2013-12-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for the optimal design of groundwater remediation systems under uncertainty in the hydraulic conductivity of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely the multi-objective fast harmony search algorithm (MOFHS), with probabilistic Pareto domination ranking and a probabilistic niche technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient hydraulic conductivity data. The PMOFHS is then coupled with the commonly used flow and transport codes MODFLOW and MT3DMS to identify the optimal groundwater remediation system for a two-dimensional hypothetical test problem involving two objectives: (i) minimization of the total remediation cost over the engineering planning horizon, and (ii) minimization of the percentage of mass remaining in the aquifer at the end of the operational period, using Pump-and-Treat (PAT) technology to clean up contaminated groundwater. Monte Carlo (MC) analysis is used to demonstrate the effectiveness of the proposed methodology: it is applied to each Pareto solution for every hydraulic conductivity (K) realization, and the statistical mean and the upper and lower bounds of the 95% confidence intervals are calculated. The MC results show that all of the Pareto-optimal solutions lie between the upper and lower bounds of the MC analysis. Moreover, the root mean square errors (RMSEs) between the Pareto-optimal solutions of the PMOFHS and the average optimal solutions of the MC analysis are 0.0204 for the first objective and 0.0318 for the second, considerably smaller than the corresponding RMSEs between the results of the existing probabilistic multi-objective genetic algorithm (PMOGA) and the MC analysis, 0.0384 and 0.0397, respectively.
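
    For readers unfamiliar with harmony search, the single-objective core that (PMO)FHS builds on looks roughly like the sketch below; the probabilistic Pareto ranking and niche technique of the paper are not reproduced, and all rates and bounds are illustrative.

    ```python
    # Minimal single-objective harmony search (illustrative; not PMOFHS itself).
    import random

    def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000):
        memory = [[random.uniform(*b) for b in bounds] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:            # pick from memory...
                    x = random.choice(memory)[d]
                    if random.random() < par:         # ...maybe pitch-adjust
                        x += random.uniform(-1, 1) * 0.01 * (hi - lo)
                    x = min(max(x, lo), hi)
                else:                                 # ...or improvise anew
                    x = random.uniform(lo, hi)
                new.append(x)
            worst = max(range(hms), key=lambda i: f(memory[i]))
            if f(new) < f(memory[worst]):             # replace worst harmony
                memory[worst] = new
        return min(memory, key=f)

    # Toy use: minimize a 2-D sphere function
    print(harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 2))
    ```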

  3. A fast algorithm for control and estimation using a polynomial state-space structure

    NASA Technical Reports Server (NTRS)

    Shults, James R.; Brubaker, Thomas; Lee, Gordon K. F.

    1991-01-01

    One of the major problems associated with the control of flexible structures is the estimation of system states. Since the parameters of the structures are not constant under varying loads and conditions, conventional fixed-parameter state estimators cannot effectively estimate the states of the system. One alternative is a state estimator that adapts to the condition of the system, such as the Kalman filter: a time-varying recursive digital filter based on a model of the system being measured, which adapts the model according to the output of the system. Previously, the Kalman filter had been used only in an off-line capacity because of the computational time required for implementation; with recent advances in computer technology, it is becoming a viable tool for the on-line environment. A distributed Kalman filter implementation is described for fast estimation of the state of a flexible arm. A key issue is the sensor structure, and initial work on a distributed sensor that could be used with the Kalman filter is presented.

  4. Development of fast line scanning imaging algorithm for diseased chicken detection

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2005-11-01

    A hyperspectral line-scan imaging system for automated inspection of wholesome and diseased chickens was developed and demonstrated. The hyperspectral imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph. The system used the spectrograph to collect spectral measurements across a pixel-wide vertical linear field of view through which moving chicken carcasses passed. After a series of image calibration procedures, hyperspectral line-scan images were collected for chickens on a laboratory-simulated processing line. From spectral analysis, four key wavebands for differentiating between wholesome and systemically diseased chickens were selected, 413 nm, 472 nm, 515 nm, and 546 nm, along with a reference waveband, 622 nm. The ratio of relative reflectance between each key wavelength and the reference wavelength was calculated as an image feature. A fuzzy logic-based algorithm utilizing the key wavebands was developed to identify individual pixels on the chicken surface exhibiting symptoms of systemic disease. Two differentiation methods were developed, which successfully differentiated 72 systemically diseased chickens from 65 wholesome chickens.

  5. Adaptive GDDA-BLAST: fast and efficient algorithm for protein sequence embedding.

    PubMed

    Hong, Yoojin; Kang, Jaewoo; Lee, Dongwon; van Rossum, Damian B

    2010-10-22

    A major computational challenge in the genomic era is annotating structure/function to the vast quantities of sequence information now available. This problem is illustrated by the fact that most proteins lack comprehensive annotations, even when experimental evidence exists. We previously theorized that embedded-alignment profiles (simply "alignment profiles" hereafter) provide a quantitative method capable of relating the structural and functional properties of proteins, as well as their evolutionary relationships. A key feature of alignment profiles lies in the interoperability of data formats (e.g., alignment information, physico-chemical information, genomic information, etc.). Indeed, we have demonstrated that Position Specific Scoring Matrices (PSSMs) are an informative M-dimension that is scored by quantitatively measuring the embedded or unmodified sequence alignments. The information obtained from these alignments remains informative even in the "twilight zone" of sequence similarity (<25% identity). Although our previous embedding strategy was powerful, it suffered from contaminating alignments (embedded AND unmodified) and high computational costs. Herein, we describe the logic and algorithmic process for a heuristic embedding strategy named "Adaptive GDDA-BLAST." Adaptive GDDA-BLAST is, on average, up to 19 times faster than, but has similar sensitivity to, our previous method. Further, data are provided to demonstrate the benefits of embedded-alignment measurements in detecting structural homology in highly divergent protein sequences and isolating secondary structural elements of transmembrane and ankyrin-repeat domains. Together, these advances allow further exploration of the embedded-alignment data space within sufficiently large data sets to eventually draw relevant statistical inferences. We show that sequence embedding could serve as one of the vehicles for measuring low-identity alignments.

  6. A Fast Algorithm for Automatic Detection of Ionospheric Disturbances Using GPS Slant Total Electron Content Data

    NASA Astrophysics Data System (ADS)

    Efendi, Emre; Arikan, Feza; Yarici, Aysenur

    2016-07-01

    Solar, geomagnetic, gravitational and seismic activities cause disturbances in the ionospheric region of the upper atmosphere that affect space-based communication, navigation and positioning systems. These disturbances can be categorized with respect to their amplitude, duration and frequency. Typically in the literature, ionospheric disturbances are investigated with gradient-based methods applied to Total Electron Content (TEC) data estimated from ground-based dual-frequency Global Positioning System (GPS) receivers. In this study, a detection algorithm is developed to determine the variability in Slant TEC (STEC) data. The developed method, namely Differential Rate of TEC (DRoT), is based on the Rate of TEC (RoT) method that is widely used in the literature. RoT is usually applied to Vertical TEC (VTEC) and can be defined as the normalized derivative of VTEC. Unfortunately, results obtained by applying RoT to VTEC suffer from inaccuracies due to the mapping function, and the resulting values are so noisy that automatic detection of ionospheric variability becomes difficult. The developed DRoT method can be defined as the normalized metric (L2) norm between the RoT and its baseband trend structure. In this study, the error performance of DRoT is determined using synthetic data with variable bounds on the parameter set of amplitude, frequency and period of disturbance. It is observed that the DRoT method can detect disturbances in three categories: for DRoT values less than 50%, there is no significant disturbance in the STEC data; for DRoT values between 50 and 70%, a medium-scale disturbance can be observed; for DRoT values over 70%, severe disturbances such as Large-Scale Travelling Ionospheric Disturbances (TIDs) or plasma bubbles can be observed.

  7. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason score ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast simulated annealing inverse planning algorithm with high-activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs) and 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free and 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high-activity seeds gives a 5-year bFFS comparable with the best published series, with a low toxicity profile.

  8. A fast mode decision algorithm for multiview auto-stereoscopic 3D video coding based on mode and disparity statistic analysis

    NASA Astrophysics Data System (ADS)

    Ding, Cong; Sang, Xinzhu; Zhao, Tianqi; Yan, Binbin; Leng, Junmin; Yuan, Jinhui; Zhang, Ying

    2012-11-01

    Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is tremendously high, so fast algorithms are very desirable for practical applications of MVC. Based on joint early termination, the selection of inter-view prediction, and the optimization of the Inter8×8 mode process by comparison, a fast macroblock (MB) mode selection algorithm is presented. Compared with the full mode decision in MVC, the experimental results show that the proposed algorithm reduces encoding time by 78.13% on average, and by up to 90.21%, with a slight increase in bit rate and a small loss in PSNR.

  9. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic-resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short-pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand space charge effects. In this paper we introduce a grid-free, differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with O(n) efficiency, and its implementation in a simulation code for space-charge-dominated photoemission processes.

  11. A finite rate of innovation algorithm for fast and accurate spike detection from two-photon calcium imaging

    NASA Astrophysics Data System (ADS)

    Oñativia, Jon; Schultz, Simon R.; Dragotti, Pier Luigi

    2013-08-01

    Objective. Inferring the times of sequences of action potentials (APs) (spike trains) from neurophysiological data is a key problem in computational neuroscience. The detection of APs from two-photon imaging of calcium signals offers certain advantages over traditional electrophysiological approaches, as up to thousands of spatially and immunohistochemically defined neurons can be recorded simultaneously. However, due to noise, dye buffering and the limited sampling rates in common microscopy configurations, accurate detection of APs from calcium time series has proved to be a difficult problem. Approach. Here we introduce a novel approach to the problem making use of finite rate of innovation (FRI) theory (Vetterli et al 2002 IEEE Trans. Signal Process. 50 1417-28). For calcium transients well fit by a single exponential, the problem is reduced to reconstructing a stream of decaying exponentials. Signals made of a combination of exponentially decaying functions with different onset times are a subclass of FRI signals, for which much theory has recently been developed by the signal processing community. Main results. We demonstrate for the first time the use of FRI theory to retrieve the timing of APs from calcium transient time series. The final algorithm is fast, non-iterative and parallelizable. Spike inference can be performed in real-time for a population of neurons and does not require any training phase or learning to initialize parameters. Significance. The algorithm has been tested with both real data (obtained by simultaneous electrophysiology and multiphoton imaging of calcium signals in cerebellar Purkinje cell dendrites), and surrogate data, and outperforms several recently proposed methods for spike train inference from calcium imaging data.

  12. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Because of absorption, the light scattering properties of absorbing particles differ from those of non-absorbing ones. Simple shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches that model the whole surface of the particle, so the computational resources needed increase much more slowly with particle size parameter than for volume-discretization methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. A comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  13. fast-matmul

    SciTech Connect

    Grey Ballard, Austin Benson

    2014-11-26

    This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
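
    As an illustration of the kind of algorithm the package generates, here is a one-level Strassen multiplication in Python (7 recursive products instead of 8). The actual software emits tuned code for many such schemes; this sketch assumes square matrices whose size is a power of two.

    ```python
    # Classic Strassen recursion (illustrative; assumes n is a power of two).
    import numpy as np

    def strassen(A, B):
        n = A.shape[0]
        if n <= 64:                    # fall back to classical below a cutoff
            return A @ B
        k = n // 2
        A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
        B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
        M1 = strassen(A11 + A22, B11 + B22)   # 7 products replace 8
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        C = np.empty_like(A)
        C[:k, :k] = M1 + M4 - M5 + M7
        C[:k, k:] = M3 + M5
        C[k:, :k] = M2 + M4
        C[k:, k:] = M1 - M2 + M3 + M6
        return C

    A, B = np.random.rand(256, 256), np.random.rand(256, 256)
    print(np.allclose(strassen(A, B), A @ B))   # True
    ```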

  14. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

    Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is suitable for extracting the FECG. Most previous ICA-R algorithms focused only on how to optimize the ICA-R cost function and paid little attention to improving the cost function itself; they did not fully exploit prior information about the desired signal. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussianity measurement function and then construct a new cost function that directly uses a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve computing the difference between the function of a Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it has lower computational complexity. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
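
    A generic one-unit, kurtosis-based ICA extraction (a FastICA-style fixed point, not the authors' ICA-R variant) shows the ingredients the paper optimizes: centering, whitening, and a kurtosis contrast.

    ```python
    # Generic one-unit kurtosis ICA sketch (illustrative; not the paper's ICA-R).
    import numpy as np

    def one_unit_ica(X, iters=200, tol=1e-8):
        """X: (channels, samples). Returns one maximally non-Gaussian component."""
        X = X - X.mean(axis=1, keepdims=True)           # centering
        d, E = np.linalg.eigh(np.cov(X))                # whitening transform
        Z = E @ np.diag(d ** -0.5) @ E.T @ X
        w = np.random.rand(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            wz = w @ Z
            w_new = (Z * wz ** 3).mean(axis=1) - 3 * w  # kurtosis fixed point
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if converged:
                break
        return w @ Z

    # Toy mixture: sparse "QRS-like" spikes buried in Gaussian noise.
    s = (np.random.rand(4000) > 0.995).astype(float)
    X = np.vstack([s + 0.1 * np.random.randn(4000),
                   0.5 * s + np.random.randn(4000)])
    y = one_unit_ica(X)
    print(abs(np.corrcoef(y, s)[0, 1]))   # high if the spiky source is recovered
    ```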

  15. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Transcoding the MPEG-2 main profile to the H.264/AVC baseline profile is therefore necessary for universal multimedia access. In the cascaded pixel-domain transcoder architecture, calculating the rate-distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  16. SU-E-T-31: A Fast Finite Size Pencil Beam (FSPB) Convolution Algorithm for a New Co-60 Arc Therapy Machine

    SciTech Connect

    Chibani, O; Eldib, A; Ma, C

    2015-06-15

    Purpose: To present a fast finite-size pencil beam (FSPB) convolution algorithm for a new Co-60 arc therapy machine. The FSPB algorithm accounts for (i) strong angular divergence (short SAD), (ii) the heterogeneity effect for primary attenuation, and (iii) the source energy spectrum. Methods: The FSPB algorithm is based on a 0.5×0.5-cm2 dose kernel calculated using the GEPTS (Gamma Electron and Positron Transport System) Monte Carlo code. The dose kernel is tabulated using a fine XYZ mesh (0.1 mm steps in lateral directions) for radii less than 1 cm and an RZ mesh (with varying steps) at larger radial distances. To account for the SSD effect, 11 dose kernels with SSDs varying from 30 cm to 80 cm are calculated. The Mayneord factor and “lateral stretching” are applied to account for differences between the closest and actual SSD. Appropriate rotations and second-order interpolation are used to calculate the dose from a given beamlet to a point. Results: Accuracy: Dose distributions in water with 80 cm SSD are calculated using the new FSPB convolution algorithm and full Monte Carlo simulation (gold standard). Figs. 1-4 show excellent agreement between FSPB and Monte Carlo calculations for different field sizes and at different depths. The dose distribution for a prostate case is calculated using FSPB (Fig. 5), assuming sixty conformal beams with rectum blocking. Figs. 6-8 show the comparison with Monte Carlo simulation based on the same beam apertures; the excellent agreement demonstrates the accuracy of the new algorithm in handling SSD variation, oblique incidence, and scatter contribution. Speed: The FSPB convolution algorithm calculates 28 million dose points per second using a single 2.2-GHz CPU and is seven times faster than a similar algorithm from Gu et al. (Phys. Med. Biol. 54, 2009, 6287-6297). Conclusion: A fast and accurate FSPB convolution algorithm was developed and benchmarked.

  17. Fast ray-tracing algorithm for circumstellar structures (FRACS). I. Algorithm description and parameter-space study for mid-IR interferometry of B[e] stars

    NASA Astrophysics Data System (ADS)

    Niccolini, G.; Bendjoya, P.; Domiciano de Souza, A.

    2011-01-01

    Aims: The physical interpretation of spectro-interferometric data is strongly model-dependent. On one hand, models involving elaborate radiative transfer solvers are generally too time consuming to allow an automatic fitting procedure that derives astrophysical quantities and their related errors. On the other hand, simple geometrical models do not give sufficient insight into the physics of the object. We propose to stand between these two extremes by using a physical but still simple parameterised model of the object under consideration. Based on this philosophy, we developed a numerical tool optimised for mid-infrared (mid-IR) interferometry, the fast ray-tracing algorithm for circumstellar structures (FRACS), which can be used as a stand-alone model, as an aid to a more advanced physical description, or for elaborating observation strategies. Methods: FRACS is based on the ray-tracing technique without scattering, supplemented with quadtree meshes and the full symmetries of the axisymmetrical problem to significantly decrease the computing time needed to obtain, e.g., monochromatic images and visibilities. We applied FRACS in a theoretical study of the dusty circumstellar environments (CSEs) of B[e] supergiants (sgB[e]) in order to determine which information (physical parameters) can be retrieved from present mid-IR interferometry (flux and visibility). Results: From a set of selected dusty CSE models typical of sgB[e] stars we show that, together with the geometrical parameters (position angle, inclination, inner radius), the temperature structure (inner dust temperature and gradient) can be well constrained by the mid-IR data alone. Our results also indicate that determining the parameters characterising the CSE density structure is more challenging, but in some cases upper limits as well as correlations on the parameters characterising the mass loss can be obtained.

  18. Assessment of visual quality and spatial accuracy of fast anisotropic diffusion and scan conversion algorithms for real-time three-dimensional spherical ultrasound

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Angelini, Elsa D.; Laine, Andrew

    2004-04-01

    Three-dimensional ultrasound machines based on matrix phased-array transducers are gaining predominance for real-time dynamic screening in cardiac and obstetric practice. These transducers acquire three-dimensional data in spherical coordinates along lines tiled in azimuth and elevation angles at incremental depths. This study evaluates fast filtering and scan conversion algorithms applied in the spherical domain, prior to visualization in Cartesian coordinates, for visual quality and spatial measurement accuracy. Fast 3D scan conversion algorithms were implemented with interpolation kernels of different orders, and downsizing and smoothing of sampling artifacts were integrated into the scan conversion process. In addition, a denoising scheme for spherical-coordinate data based on 3D anisotropic diffusion was implemented and applied prior to scan conversion to improve image quality. Reconstruction results under different parameter settings (interpolation kernels, scaling factors, smoothing options, and denoising) are reported. Image quality was evaluated on several data sets via visual inspection and measurement of cylinder object dimensions. Error measurements of the cylinder's radius, reported in this paper, show that the proposed fast scan conversion algorithm can correctly reconstruct three-dimensional ultrasound in Cartesian coordinates under tuned parameter settings. Denoising via three-dimensional anisotropic diffusion greatly improved the quality of resampled data without affecting the accuracy of spatial information, after the introduction of a variable gradient threshold parameter.

  19. The digital algorithm for fast detecting and identifying the asymmetry of voltages in three-phase electric grids of mechanical engineering facilities

    NASA Astrophysics Data System (ADS)

    Shonin, O. B.; Kryltcov, S. B.; Novozhilov, N. G.

    2017-02-01

    The paper considers a new technique for fast extraction of the symmetrical components of unbalanced voltages caused by faults in electric grids of mechanical engineering facilities. The proposed approach is based on an iterative algorithm that checks whether a set of at least three discrete voltage measurements belongs to the specific elliptical trajectory of the voltage space vector. Using a classification of unbalanced faults in the grid and the results of decomposing the voltages into symmetrical components, the algorithm is capable of discriminating between one-phase, two-phase and three-phase voltage sags. The paper concludes that simulation results in the Simulink environment have proved the correctness of the proposed algorithm for detecting and identifying unbalanced voltage sags in an electrical grid, under the condition that it is free from high-order harmonics.
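
    For reference, the classic Fortescue decomposition that the algorithm ultimately estimates is a one-liner on phasors; the paper's contribution is obtaining it quickly from a few instantaneous samples, which this sketch does not attempt.

    ```python
    # Fortescue decomposition of three phasors into zero-, positive- and
    # negative-sequence components (textbook formula, not the paper's method).
    import numpy as np

    a = np.exp(2j * np.pi / 3)                  # 120-degree rotation operator
    A = np.array([[1, 1, 1],
                  [1, a, a ** 2],
                  [1, a ** 2, a]]) / 3

    def symmetrical_components(va, vb, vc):
        """Return (zero, positive, negative) sequence phasors."""
        return A @ np.array([va, vb, vc])

    # One-phase voltage sag: phase A dropped to 40% of nominal.
    v0, v1, v2 = symmetrical_components(0.4, a ** 2, a)
    print(abs(v0), abs(v1), abs(v2))            # nonzero v0, v2 reveal unbalance
    ```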

  20. A comparative study of Powell's and Downhill Simplex algorithms for a fast multimodal surface matching in brain imaging.

    PubMed

    Bernon, J L; Boudousq, V; Rohmer, J F; Fourcade, M; Zanca, M; Rossi, M; Mariano-Goulart, D

    2001-01-01

    Multimodal image registration can be very helpful for diagnostic applications. However, although many registration algorithms exist, only a few really work in clinical routine. We developed a method based on surface matching and compared two minimization algorithms: Powell's and Downhill Simplex. We studied the influence of several factors (chamfer map computation, number and order of parameters to determine, minimization criteria) on the final accuracy of the algorithm. Using this comparison, we improved some processing steps to allow clinical use, and selected the simplex algorithm, which presented the best results.
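
    Both minimizers are available in SciPy, so a surrogate version of the comparison is easy to run; the cost function below is a stand-in for a chamfer-map surface distance over the six pose parameters, not the authors' actual criterion.

    ```python
    # Surrogate comparison of the two minimizers via SciPy (illustrative only).
    import numpy as np
    from scipy.optimize import minimize

    TRUE_POSE = np.array([2.0, -1.0, 0.5, 5.0, -3.0, 1.0])  # tx, ty, tz, rx, ry, rz

    def cost(p):
        # Stand-in for a chamfer-distance lookup; minimum at the true pose.
        d = p - TRUE_POSE
        return np.sum(d ** 2) + 0.1 * np.sum(np.sin(3 * d) ** 2)

    for method in ("Powell", "Nelder-Mead"):
        res = minimize(cost, np.zeros(6), method=method)
        print(f"{method:12s} f = {res.fun:.2e}, evaluations = {res.nfev}")
    ```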

  1. Fast inter-mode decision algorithm for high-efficiency video coding based on similarity of coding unit segmentation and partition mode between two temporally adjacent frames

    NASA Astrophysics Data System (ADS)

    Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo; Li, Yuan

    2013-04-01

    High-efficiency video coding (HEVC) introduces a flexible hierarchy of three block structures, coding unit (CU), prediction unit (PU), and transform unit (TU), which brings higher coding efficiency than the preceding video coding standard H.264/advanced video coding (AVC). HEVC, however, also requires higher computational complexity than H.264/AVC, although several fast inter-mode decision methods were proposed during its development. To further reduce this complexity, a fast inter-mode decision algorithm based on temporal correlation is proposed. Because inter-prediction blocks differ distinctly between HEVC and H.264/AVC, the correlation of inter prediction between two adjacent frames must be analyzed according to the CU and PU structure of HEVC before temporal correlation can be used to speed up inter prediction. The probabilities of all partition modes for all CU sizes, and the similarity of CU segmentation and partition modes between two adjacent frames, are tested. The correlation of partition modes between two CUs of different sizes in two adjacent frames is also tested and analyzed. Based on these characteristics, at most two prior partition modes are evaluated for each CU level, which reduces the number of rate-distortion cost calculations. Simulation results show that the proposed algorithm further reduces coding time by 33.0% to 43.3%, with negligible loss in bitrate and peak signal-to-noise ratio, relative to the fast inter-mode decision algorithms in the HEVC reference software HM7.0.

  2. EOF-based regression algorithm for the fast retrieval of atmospheric CO2 total column amount from the GOSAT observations

    NASA Astrophysics Data System (ADS)

    Bril, Andrey; Maksyutov, Shamil; Belikov, Dmitry; Oshchepkov, Sergey; Yoshida, Yukio; Deutscher, Nicholas M.; Griffith, David; Hase, Frank; Kivi, Rigel; Morino, Isamu; Notholt, Justus; Pollard, David F.; Sussmann, Ralf; Velazco, Voltaire A.; Warneke, Thorsten

    2017-03-01

    This paper presents a novel algorithm for the rapid retrieval of carbon dioxide total column amounts from high-resolution short-wave infrared (SWIR) spectra observed by the Greenhouse gases Observing Satellite (GOSAT). The algorithm performs an EOF (Empirical Orthogonal Function) decomposition of the measured spectral radiance and relates a limited number of the decomposition coefficients (principal component scores) to the target gas amount and a priori data such as airmass and surface pressure. The regression formulae for retrieving target gas amounts are derived using training sets of collocated GOSAT and ground-based observations. The precision and accuracy characteristics of the algorithm are analyzed by comparing the retrievals with Total Carbon Column Observing Network (TCCON) measurements and with modeled data; they appear similar to those achieved by full-physics retrieval algorithms.
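
    The EOF-plus-regression pipeline can be condensed to a few numpy calls; the arrays below are random placeholders standing in for GOSAT radiances and collocated reference retrievals, and the choice of 10 EOFs is arbitrary.

    ```python
    # Sketch of EOF-based regression retrieval (placeholder data, not GOSAT).
    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.normal(size=(500, 300))          # training spectra (obs x channels)
    xco2 = rng.normal(400, 2, size=500)      # collocated "truth" (e.g. TCCON)

    mean = R.mean(axis=0)
    U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
    k = 10                                   # number of EOFs retained
    scores = (R - mean) @ Vt[:k].T           # decomposition coefficients

    # Linear model from scores (a priori variables could be appended here).
    G = np.column_stack([scores, np.ones(len(R))])
    coef, *_ = np.linalg.lstsq(G, xco2, rcond=None)

    def retrieve(spectrum):
        """Fast per-sounding evaluation: project, then apply the regression."""
        z = (spectrum - mean) @ Vt[:k].T
        return np.append(z, 1.0) @ coef

    print(retrieve(R[0]), xco2[0])
    ```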

  3. Acceleration of canonical molecular dynamics simulations using macroscopic expansion of the fast multipole method combined with the multiple timestep integrator algorithm

    NASA Astrophysics Data System (ADS)

    Kawata, Masaaki; Mikami, Masuhiro

    A canonical molecular dynamics (MD) simulation was accelerated by an efficient implementation of the multiple timestep integrator algorithm combined with the macroscopic expansion of the periodic fast multipole method (MEFMM) for both Coulombic and van der Waals interactions. Although a significant reduction in computational cost had been obtained previously with an integrated method in which the MEFMM was used only for Coulombic interactions (Kawata, M., and Mikami, M., 2000, J. Comput. Chem., in press), extending the method to include van der Waals interactions yielded a further acceleration of the overall MD calculation by a factor of about two. Compared with conventional methods, such as the velocity-Verlet algorithm combined with the Ewald method (timestep of 0.25 fs), the speedup from the extended integrated method amounted to a factor of 500 for a 100 ps simulation. The extended method therefore substantially reduces the computational effort of large-scale MD simulations.

  4. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator and that can be applied in cases for which all known classical algorithms require exponential time.

  5. PairMotifChIP: A Fast Algorithm for Discovery of Patterns Conserved in Large ChIP-seq Data Sets

    PubMed Central

    Huo, Hongwei; Feng, Dazheng

    2016-01-01

    Identifying conserved patterns in DNA sequences, namely motif discovery, is an important and challenging computational task. Containing hundreds or more sequences, a high-throughput sequencing data set helps improve the identification accuracy of motif discovery but requires even higher computing performance. To efficiently identify motifs in large DNA data sets, a new algorithm called PairMotifChIP is proposed that extracts and combines pairs of l-mers in the input with relatively small Hamming distance. In particular, a method for rapidly extracting pairs of l-mers is designed, which can be used not only for PairMotifChIP but also for other DNA data mining tasks with the same requirement. Experimental results on simulated data show that the proposed algorithm finds motifs successfully and runs faster than state-of-the-art motif discovery algorithms. Furthermore, the validity of the proposed algorithm has been verified on real data. PMID:27843946
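
    The quadratic-time baseline for the pair-extraction core is only a few lines; the paper's contribution is performing this step much faster (the authors use a sorting-based method), but the contract is the same: l-mer pairs within Hamming distance d.

    ```python
    # Brute-force baseline for extracting close l-mer pairs (illustrative; the
    # published algorithm does this far more efficiently).
    from itertools import product

    def lmers(seq, l):
        return [seq[i:i + l] for i in range(len(seq) - l + 1)]

    def close_pairs(seq1, seq2, l=8, d=2):
        """Yield all pairs of l-mers with Hamming distance at most d."""
        for x, y in product(lmers(seq1, l), lmers(seq2, l)):
            if sum(a != b for a, b in zip(x, y)) <= d:
                yield x, y

    pairs = list(close_pairs("ACGTACGTGGTT", "TTACGAACGTCC"))
    print(len(pairs), pairs[:3])
    ```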

  6. Fast imputation using medium- or low-coverage sequence data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Direct imputation from raw sequence reads can be more accurate than calling genotypes first and then imputing, especially if read depth is low or error rates high, but different imputation strategies are required than those used for data from genotyping chips. A fast algorithm to impute from lower t...

  7. Proof of uniform sampling of binary matrices with fixed row sums and column sums for the fast Curveball algorithm

    NASA Astrophysics Data System (ADS)

    Carstens, C. J.

    2015-04-01

    Randomization of binary matrices has become one of the most important quantitative tools in modern computational biology. The equivalent problem of generating random directed networks with fixed degree sequences has also attracted a lot of attention. However, it is very challenging to generate truly unbiased random matrices with fixed row and column sums. Strona et al. [Nat. Commun. 5, 4114 (2014), 10.1038/ncomms5114] introduce the innovative Curveball algorithm and give numerical support for the proposition that it generates truly random matrices. In this paper, we present a rigorous proof of convergence to the uniform distribution. Furthermore, we show the Curveball algorithm must include certain failed trades to ensure uniform sampling.
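
    One Curveball trade, following the description in Strona et al., can be written directly on a per-row set representation; note, in line with the paper's result, that "failed" trades still count as steps of the chain.

    ```python
    # One Curveball trade on a binary matrix stored as per-row column sets.
    import random

    def curveball_trade(rows):
        """Exchange columns unique to two random rows; preserves every row sum
        and column sum. Mutates `rows` (a list of column-index sets) in place."""
        i, j = random.sample(range(len(rows)), 2)
        shared = rows[i] & rows[j]
        pool = list((rows[i] | rows[j]) - shared)   # columns unique to one row
        ni = len(rows[i]) - len(shared)             # row i keeps this many
        # The shuffle may reproduce the original split: a "failed" trade, which
        # (as the paper stresses) must still count as a step of the chain.
        random.shuffle(pool)
        rows[i] = shared | set(pool[:ni])
        rows[j] = shared | set(pool[ni:])

    rows = [{0, 1}, {1, 2}, {0, 3}]
    for _ in range(1000):
        curveball_trade(rows)
    print(rows, [len(r) for r in rows])             # row sums still [2, 2, 2]
    ```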

  8. Fast and accurate auto-focusing algorithm based on the combination of depth from focus and improved depth from defocus.

    PubMed

    Zhang, Xuedian; Liu, Zhaoqing; Jiang, Minshan; Chang, Min

    2014-12-15

    An auto-focus method for digital imaging systems is proposed that combines depth from focus (DFF) and an improved depth from defocus (DFD). The traditional DFD method is made more rapid, which achieves a fast initial focus: the defocus distance is first calculated by the improved DFD method, and the result is then used as the search step in the search stage of the DFF method. A dynamic focusing scheme is designed for the control software that eliminates environmental disturbances and other noise so that a fast and accurate focus can be achieved. An experiment designed to verify the proposed focusing method shows that its efficiency is at least 3-5 times higher than that of the traditional DFF method.
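
    Schematically, the combination works as follows: a DFD-style coarse step seeds a DFF hill climb that shrinks its search window as improvements stop. The gradient-energy focus measure and the synthetic lens below are generic stand-ins, not the paper's formulas.

    ```python
    # Schematic DFD-seeded DFF search (focus measure and lens model are generic).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sharpness(img):
        gy, gx = np.gradient(img.astype(float))
        return (gx ** 2 + gy ** 2).mean()           # gradient-energy focus measure

    def autofocus(capture, pos, coarse_step):
        """capture(pos) -> image; `coarse_step` would come from the DFD stage."""
        step, best = coarse_step, sharpness(capture(pos))
        while step >= 1:
            for cand in (pos - step, pos + step):
                s = sharpness(capture(cand))
                if s > best:
                    best, pos = s, cand
                    break
            else:
                step //= 2                          # no improvement: refine search
        return pos

    # Synthetic lens: blur grows with distance from the true focus at 120.
    scene = np.random.default_rng(1).random((64, 64))
    def capture(p):
        return gaussian_filter(scene, sigma=0.05 * abs(p - 120) + 0.3)

    print(autofocus(capture, pos=40, coarse_step=32))   # converges to 120
    ```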

  9. Extended Vofire algorithm for fast transient fluid-structure dynamics with liquid-gas flows and interfaces

    NASA Astrophysics Data System (ADS)

    Faucher, Vincent; Kokh, Samuel

    2013-05-01

    The present paper is dedicated to the simulation of liquid-gas flows with interfaces in the framework of fast transient fluid-structure dynamics. The two-fluid interface is modelled as a discontinuity surface in the fluid properties. We use an anti-dissipative finite-volume discretization strategy for unstructured meshes in order to capture the position of the interface within a thin diffused volume. This makes it possible to control the numerical diffusion of the artificial mixing between components and to capture complex interface motions accurately. The scheme is an extension of the Vofire numerical solver, with specific developments to handle flows involving high density ratios between liquid and gas. The capabilities of the resulting scheme are validated on basic examples and also tested against a large-scale fluid-structure test derived from the MARA 10 experiment. All simulations are performed with the EUROPLEXUS fast transient dynamics software.

  10. A fast algorithm to predict spectral broadening in CW bidirectionally pumped high-power Yb-doped fiber lasers

    NASA Astrophysics Data System (ADS)

    Szabó, Áron; Várallyay, Zoltán; Rosales-Garcia, Andrea; Headley, Clifford

    2015-11-01

    A detailed, fast-converging iterative numerical method has been developed to model continuous-wave bidirectionally pumped Yb-doped fiber lasers with output powers of several hundred watts. The analysis shows nonlinearity-induced broadening of the lasing spectrum, which also modifies power efficiency. Cavity dynamics is described by combining the effects of Kerr nonlinearities with power evolution equations and rate equations. The fast method for finding steady-state solutions of cavity setups is based on modeling the temporal phase evolution as a stochastic process with proper spectral filtering. Spectral properties of bidirectionally pumped lasers are calculated within a few minutes on a commercial desktop computer, and very good agreement with experimental measurements is obtained for up to 922 W total pump power and 708 W output power.

  11. Probabilistic priority assessment of nurse calls.

    PubMed

    Ongenae, Femke; Myny, Dries; Dhaene, Tom; Defloor, Tom; Van Goubergen, Dirk; Verhoeve, Piet; Decruyenaere, Johan; De Turck, Filip

    2014-05-01

    Current nurse call systems are very static: call buttons are fixed to the wall, and the systems do not account for various factors specific to a situation. We have developed a software platform, the ontology-based Nurse Call System (oNCS), which supports the transition to mobile and wireless nurse call buttons and uses an intelligent algorithm to assign nurse calls. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff and patients into account through an ontology. This article describes a probabilistic extension of the oNCS that supports a more sophisticated nurse call algorithm by dynamically assigning priorities to calls based on the risk factors of the patient and the kind of call. The probabilistic oNCS is evaluated through a prototype implementation and simulations based on a detailed dataset from 3 nursing departments of Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls among nurses, and the assignment of priorities to calls are compared for the oNCS and the current nurse call system. Additionally, the performance of the system and the parameters of the priority assignment algorithm are explored. The execution time of the nurse call algorithm is on average 50.333 ms. Moreover, the probabilistic oNCS significantly improves the assignment of nurses to calls: calls generally result in a nurse being present more quickly, the workload distribution among the nurses improves, and the priorities and kinds of calls are taken into account.

  12. Robust and fast characterization of OCT-based optical attenuation using a novel frequency-domain algorithm for brain cancer detection

    NASA Astrophysics Data System (ADS)

    Yuan, Wu; Kut, Carmen; Liang, Wenxuan; Li, Xingde

    2017-03-01

    Cancer is known to alter the local optical properties of tissues. The detection of OCT-based optical attenuation provides a quantitative method to efficiently differentiate cancer from non-cancer tissues. In particular, the intraoperative use of quantitative OCT can provide direct visual guidance in real time for accurate identification of cancer tissues, especially those without any obvious structural layers, such as brain cancer. However, current methods are suboptimal in providing high-speed and accurate OCT attenuation mapping for intraoperative brain cancer detection. In this paper, we report a novel frequency-domain (FD) algorithm to enable robust and fast characterization of optical attenuation as derived from OCT intensity images. The performance of this FD algorithm was compared with traditional fitting methods by analyzing datasets containing images from freshly resected human brain cancer and from a silica phantom acquired by a 1310 nm swept-source OCT (SS-OCT) system. With a graphics processing unit (GPU)-based CUDA C/C++ implementation, this new attenuation mapping algorithm can offer robust and accurate quantitative interpretation of OCT images in real time during brain surgery.

  15. Genetic algorithm based fast alignment method for strap-down inertial navigation system with large azimuth misalignment.

    PubMed

    He, Hongyang; Xu, Jiangning; Qin, Fangjun; Li, Feng

    2015-11-01

    In order to shorten the alignment time and remove the small-initial-misalignment requirement for compass alignment of a strap-down inertial navigation system (SINS), a requirement that is sometimes not easy to satisfy when the ship is moored or anchored, an optimal-model-based time-varying parameter compass alignment algorithm is proposed in this paper. The contributions of the work presented here are twofold. First, the optimization of the compass alignment parameters, which traditionally involves a lot of trial and error, is achieved with a genetic algorithm. Second, on this basis, the optimal parameter-varying model is established by least-squares polynomial fitting. Experiments performed with a navigation-grade fiber-optic gyroscope SINS validate the efficiency of the proposed method.
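
    A bare-bones real-coded genetic algorithm of the kind used for the parameter search might look like this; the fitness function is a placeholder (the paper scores alignment error), and all GA settings are illustrative.

    ```python
    # Minimal real-coded GA for parameter tuning (illustrative settings only).
    import random

    def ga_optimize(fitness, bounds, pop=30, gens=100, mut=0.1):
        P = [[random.uniform(*b) for b in bounds] for _ in range(pop)]
        for _ in range(gens):
            P.sort(key=fitness)
            elite = P[:pop // 2]                          # truncation selection
            children = []
            while len(elite) + len(children) < pop:
                a, b = random.sample(elite, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
                for d, (lo, hi) in enumerate(bounds):         # Gaussian mutation
                    if random.random() < mut:
                        child[d] += random.gauss(0, 0.1 * (hi - lo))
                        child[d] = min(max(child[d], lo), hi)
                children.append(child)
            P = elite + children
        return min(P, key=fitness)

    # Toy fitness: distance of (k1, k2) from a fictitious optimum.
    best = ga_optimize(lambda p: (p[0] - 0.02) ** 2 + (p[1] - 4.0) ** 2,
                       bounds=[(0.0, 0.1), (0.0, 10.0)])
    print(best)
    ```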

  16. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path.

    PubMed

    Herráez, Miguel Arevallilo; Burton, David R; Lalor, Michael J; Gdeisat, Munther A

    2002-12-10

    We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.
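
    This reliability-sorting algorithm is the basis of scikit-image's 2-D phase unwrapper (the documentation of `unwrap_phase` cites this paper), so it can be tried directly:

    ```python
    # Trying the reliability-sorted unwrapper via scikit-image.
    import numpy as np
    from skimage.restoration import unwrap_phase

    true = np.fromfunction(lambda y, x: 0.1 * (x ** 2 + y * x) / 64, (64, 64))
    wrapped = np.angle(np.exp(1j * true))       # wrap the phase into (-pi, pi]
    recovered = unwrap_phase(wrapped)           # noncontinuous-path unwrapping
    # Equal to the original up to a global multiple of 2*pi:
    print(np.allclose(recovered - recovered.mean(), true - true.mean()))
    ```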

  17. Very Fast Algorithms and Detection Performance of Multi-Channel and 2-D Parametric Adaptive Matched Filters for Airborne Radar

    DTIC Science & Technology

    2007-06-05

    Relative to the AMF, [1] and [5] discovered that multi-channel and two-dimensional parametric estimation approaches could (1) reduce the computational ... two-dimensional (2-D) parametric estimation using the 2-D least-squares-based lattice algorithm [4]. The specifics of the inverse are found in the next ... non-parametric estimation techniques; least square error (LSE) vs. mean square error (MSE); primarily multi-channel (M-C) structures; also try 2-D.

  18. Fast 4D cone-beam reconstruction using the McKinnon-Bates algorithm with truncation correction and nonlinear filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Ziyi; Sun, Mingshan; Pavkovich, John; Star-Lack, Josh

    2011-03-01

    A challenge in using on-board cone beam computed tomography (CBCT) to image lung tumor motion prior to radiation therapy treatment is acquiring and reconstructing high-quality 4D images in a sufficiently short time for practical use. For the 1 minute rotation times typical of Linacs, severe view aliasing artifacts, including streaks, are created if a conventional phase-correlated FDK reconstruction is performed. The McKinnon-Bates (MKB) algorithm provides an efficient means of reducing streaks from static tissue but can suffer from low SNR and other artifacts due to data truncation and noise. We have added truncation correction and bilateral nonlinear filtering to the MKB algorithm to reduce streaking and improve image quality. The modified MKB algorithm was implemented on a graphics processing unit (GPU) to maximize efficiency. Results show that a nearly 4x improvement in SNR is obtained compared to the conventional FDK phase-correlated reconstruction and that high-quality 4D images with 0.4 second temporal resolution and 1 mm3 isotropic spatial resolution can be reconstructed in less than 20 seconds after data acquisition completes.

  19. A fast and explicit algorithm for simulating the dynamics of small dust grains with smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.; Laibe, Guillaume

    2015-07-01

    We describe a simple method for simulating the dynamics of small grains in a dusty gas, relevant to micron-sized grains in the interstellar medium and grains of centimetre size and smaller in protoplanetary discs. The method involves solving one extra diffusion equation for the dust fraction in addition to the usual equations of hydrodynamics. This `diffusion approximation for dust' is valid when the dust stopping time is smaller than the computational timestep. We present a numerical implementation using smoothed particle hydrodynamics that is conservative, accurate and fast. It does not require any implicit timestepping and can be straightforwardly ported into existing 3D codes.

  20. Simple and fast spectral domain algorithm for quantitative phase imaging of living cells with digital holographic microscopy.

    PubMed

    Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Engwer, Christian; Greve, Burkhard; Kemper, Björn

    2017-01-15

    We present a simple and fast phase aberration compensation method in digital holographic microscopy (DHM) for quantitative phase imaging of living cells. By analyzing the frequency spectrum of an off-axis hologram, phase aberrations can be compensated for automatically without fitting or pre-knowledge of the setup and/or the object. Simple and effective computation makes the method suitable for quantitative online monitoring with highly variable DHM systems. Results from automated quantitative phase imaging of living NIH-3T3 mouse fibroblasts demonstrate the effectiveness and the feasibility of the method.

  1. Backbone building from quadrilaterals: a fast and accurate algorithm for protein backbone reconstruction from alpha carbon coordinates.

    PubMed

    Gront, Dominik; Kmiecik, Sebastian; Kolinski, Andrzej

    2007-07-15

    In this contribution, we present an algorithm for protein backbone reconstruction that combines very high computational efficiency with high accuracy. Reconstruction of the main-chain atomic coordinates from the alpha carbon trace is a common task in protein modeling, including de novo structure prediction, comparative modeling, and processing of experimental data. The method employed in this work follows the main idea of some earlier approaches to the problem, but the details and careful design of the present approach are new and lead to an algorithm that outperforms all commonly used earlier applications. The BBQ (Backbone Building from Quadrilaterals) program has been extensively tested on both native structures and near-native decoy models and compared with the available existing methods. The obtained results provide a comprehensive benchmark of existing tools and evaluate their applicability to large-scale modeling using a reduced representation of protein conformational space. The BBQ package is available for download from our website at http://biocomp.chem.uw.edu.pl/services/BBQ/. This webpage also provides a user manual that describes BBQ functions in detail.

  2. A Fast Parallel Algorithm for Selected Inversion of Structured Sparse Matrices with Application to 2D Electronic Structure Calculations

    SciTech Connect

    Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan

    2009-09-25

    We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$, where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique \cite{LinLuYingE2009}. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems \cite{HohenbergKohn1964, KohnSham1965, DreizlerGross1990}. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits excellent weak scaling on a large-scale high performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique that uses our algorithm to compute the diagonal of $(H-z_i I)^{-1}$ for a small number of poles $z_i$ is much faster, especially when the quantum dot contains many electrons.
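
    For orientation, the quantity being accelerated can be written as diag f(H) ≈ Im Σ_i w_i diag((H − z_i I)^{-1}) for a small set of poles z_i with weights w_i. The sketch below evaluates this naively with dense inverses; the paper's contribution is to replace each dense inverse by a parallel selected inversion that computes only the needed diagonal entries.

    ```python
    import numpy as np

    def diag_fermi_dirac(H, poles, weights):
        """Naive diag f(H) ~ Im sum_i w_i * diag((H - z_i I)^{-1}); dense stand-in."""
        acc = np.zeros(H.shape[0], dtype=complex)
        I = np.eye(H.shape[0])
        for z, w in zip(poles, weights):
            acc += w * np.diag(np.linalg.inv(H - z * I))  # the paper replaces this O(n^3) solve
        return acc.imag
    ```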

  3. Fast characterization of line-end shortening and application of novel correction algorithms in e-beam direct write

    NASA Astrophysics Data System (ADS)

    Freitag, Martin; Choi, Kang-Hoon; Gutsch, Manuela; Hohle, Christoph; Galler, Reinhard; Krüger, Michael; Weidenmueller, Ulf

    2011-04-01

    For the manufacturing of semiconductor technologies following the ITRS roadmap, we will face nodes well below 32 nm half pitch in the next 2-3 years. Although the required resolution can now be achieved with electron beam direct write variable shaped beam (EBDW VSB) equipment and resists, it becomes critical to precisely reproduce dense line space patterns on the wafer. The exposed pattern must meet the targets from the layout in both dimensions (horizontal and vertical). For instance, the end of a line must be printed in its entire length to allow a later placed contact to land on it. Up to now, the control of printed patterns such as line ends has been achieved by a proximity effect correction (PEC) which is mostly based on a dose modulation. This investigation of line end shortening (LES) includes multiple novel approaches, including an additional geometrical correction, to push the limits of the available data preparation algorithms and the measurement. The designed LES test patterns, which aim to characterize the status of LES in a quick and easy way, were exposed and measured at Fraunhofer Center Nanoelectronic Technologies (CNT) using its state of the art electron beam direct writer and CD-SEM. Simulation and exposure results with the novel LES correction algorithms applied to the test pattern and a large production-like pattern in the range of our target CDs in dense line space features smaller than 40 nm will be shown.

  4. On-sky tests of the CuReD and HWR fast wavefront reconstruction algorithms with CANARY

    NASA Astrophysics Data System (ADS)

    Bitenc, Urban; Basden, Alastair; Bharmal, Nazim Ali; Morris, Tim; Dipper, Nigel; Gendron, Eric; Vidal, Fabrice; Gratadour, Damien; Rousset, Gérard; Myers, Richard

    2015-04-01

    CuReD (Cumulative Reconstructor with domain Decomposition) and HWR (Hierarchical Wavefront Reconstructor) are novel wavefront reconstruction algorithms for the Shack-Hartmann wavefront sensor, used in single-conjugate adaptive optics. For a high-order system they are much faster than the traditional matrix-vector-multiplication method. We have developed three methods for mapping the reconstructed phase into the deformable mirror actuator commands and have tested both reconstructors with the CANARY instrument. We find that the CuReD reconstructor runs stably only if the feedback loop is operated as a leaky integrator, whereas HWR runs stably with the conventional integrator control. Using the CANARY telescope simulator we find that the Strehl ratio (SR) obtained with CuReD is slightly higher than that of the traditional least-squares estimator (LSE). We demonstrate that this is because the CuReD algorithm has a smoothing effect on the output wavefront. The SR of HWR is slightly lower than that of LSE. We have tested both reconstructors extensively on-sky. They perform well, and CuReD achieves a similar SR to LSE. We compare the CANARY results with those from a computer simulation and find good agreement between the two.
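
    The leaky-integrator control that keeps CuReD stable amounts to letting the actuator commands decay slightly each frame before the new gain-scaled reconstruction is applied. A one-line sketch, with gain and leak values that are illustrative only:

    ```python
    def update_commands(commands, reconstructed_phase, gain=0.4, leak=0.99):
        # leak < 1 slowly bleeds off unsensed or badly sensed modes that a plain
        # integrator (leak = 1) would let accumulate until the loop diverges
        return leak * commands - gain * reconstructed_phase
    ```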

  5. Computing the 3-D structure of viruses from unoriented cryo electron microscope images: a fast algorithm for a statistical approach.

    PubMed

    Lee, Junghoon; Zheng, Yili; Doerschuk, Peter C

    2006-01-01

    In a cryo electron microscopy experiment, the data are noisy 2-D projection images of the 3-D electron scattering intensity, where the orientation of the projections is not known. In previous work we have developed a solution for this problem based on a maximum likelihood estimator that is computed by an expectation maximization algorithm. In the expectation maximization algorithm the expensive step is the expectation, which requires numerical evaluation of 3- or 5-dimensional integrations of a square matrix of dimension equal to the number of Fourier series coefficients used to describe the 3-D reconstruction. By taking advantage of the rotational properties of spherical harmonics, we can reduce the integrations of a matrix to integrations of a scalar. The key property is that a rotated spherical harmonic can be expressed as a linear combination of the other harmonics of the same order, and the weights in the linear combination factor so that each of the three factors is a function of only one of the Euler angles describing the orientation of the projection. A numerical example of the reconstructions is provided, based on Nudaurelia Omega Capensis virus.

  6. Fast chromatographic method for the determination of dyes in beverages by using high performance liquid chromatography--diode array detection data and second order algorithms.

    PubMed

    Culzoni, María J; Schenone, Agustina V; Llamas, Natalia E; Garrido, Mariano; Di Nezio, Maria S; Band, Beatriz S Fernández; Goicoechea, Héctor C

    2009-10-16

    A fast chromatographic methodology is presented for the analysis of three synthetic dyes in non-alcoholic beverages: amaranth (E123), sunset yellow FCF (E110) and tartrazine (E102). Seven soft drinks (purchased from a local supermarket) were homogenized, filtered and injected into the chromatographic system. Second order data were obtained by a rapid LC separation and DAD detection. A comparative study of the performance of two second order algorithms (MCR-ALS and U-PLS/RBL) applied to model the data is presented. Notably, the data present time shifts between different chromatograms that cannot be conveniently corrected when determining the above-mentioned dyes in beverage samples. This causes a lack of trilinearity that cannot be removed by pre-processing and can hardly be modelled with the U-PLS/RBL algorithm. In contrast, MCR-ALS has been shown to be an excellent tool for modelling this kind of data, allowing acceptable figures of merit to be reached. Recovery values ranging between 97% and 105% when analyzing artificial and real samples were indicative of the good performance of the method. Whereas the complete separation consumes 10 mL of methanol and 3 mL of 0.08 mol L(-1) ammonium acetate, the proposed fast chromatography method requires only 0.46 mL of methanol and 1.54 mL of 0.08 mol L(-1) ammonium acetate. Consequently, analysis time could be reduced to 14.2% of the time necessary to perform the complete separation, saving both solvents and time and thereby reducing both the cost per analysis and the environmental impact.

  7. Fast characterization of moment magnitude and focal mechanism in the context of tsunami warning in the NEAM region : W-phase and PDFM2 algorithms.

    NASA Astrophysics Data System (ADS)

    Schindelé, François; Roch, Julien; Duperray, Pierre; Reymond, Dominique

    2016-04-01

    Over past centuries, several large earthquakes (Mw ≥ 7.5) have been reported in the North East Atlantic and Mediterranean sea (NEAM) region. Most of the tsunami potential seismic sources in the NEAM region, however, are in a magnitude range of 6.5 ≤ Mw ≤ 7.5 (e.g. the tsunami triggered by the earthquake of Boumerdes in 2003 of Mw = 6.9). The CENALT (CENtre d'ALerte aux Tsunamis), in operation since 2012 as the French National Tsunami Warning Centre (NTWC) and Candidate Tsunami Service Provider (CTSP), has to issue warning messages within 15 minutes of the earthquake origin time. The warning level is currently based on a decision matrix depending on the magnitude and the location of the hypocenter. Two seismic source inversion methods are implemented at CENALT: the W-phase algorithm, based on the so-called W-phase, and the PDFM2 algorithm, based on surface waves and first P wave motions. The resulting Mw magnitude, focal depth and type of fault (reverse, normal, strike-slip) are the most relevant parameters used to issue tsunami warnings. In this context, we assess the W-phase and PDFM2 methods with 29 events of magnitude Mw ≥ 5.8 for the period 2010-2015 in the NEAM region. Results within 10 and 20 min for the W-phase algorithm and within 20 and 30 min for the PDFM2 algorithm are compared to the Global Centroid Moment Tensor catalog. The W-phase and PDFM2 methods give accurate results in 10 min and 20 min, respectively. This work is funded by project ASTARTE -- Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839

  8. KID - an algorithm for fast and efficient text mining used to automatically generate a database containing kinetic information of enzymes

    PubMed Central

    2010-01-01

    Background The amount of available biological information is rapidly increasing and the focus of biological research has moved from single components to networks and even larger projects aiming at the analysis, modelling and simulation of biological networks as well as large scale comparison of cellular properties. It is therefore essential that biological knowledge is easily accessible. However, most information is contained in the written literature in an unstructured way, so that methods for the systematic extraction of knowledge directly from the primary literature have to be deployed. Description Here we present a text mining algorithm for the extraction of kinetic information such as KM, Ki, kcat etc. as well as associated information such as enzyme names, EC numbers, ligands, organisms, localisations, pH and temperatures. Using this rule- and dictionary-based approach, it was possible to extract 514,394 kinetic parameters of 13 categories (KM, Ki, kcat, kcat/KM, Vmax, IC50, S0.5, Kd, Ka, t1/2, pI, nH, specific activity, Vmax/KM) from about 17 million PubMed abstracts and combine them with other data in the abstract. A manual verification of approx. 1,000 randomly chosen results yielded a recall between 51% and 84% and a precision ranging from 55% to 96%, depending on the category searched. The results were stored in a database and are available as "KID the KInetic Database" via the internet. Conclusions The presented algorithm delivers a considerable amount of information and may therefore help to accelerate the research and the automated analysis required for today's systems biology approaches. The database obtained by analysing PubMed abstracts may be a valuable help in the field of chemical and biological kinetics. It is completely based upon text mining and therefore complements manually curated databases. The database is available at http://kid.tu-bs.de. The source code of the algorithm is provided under the GNU General Public Licence and available on
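
    A toy example of the rule-based extraction idea. The real KID system uses far richer rules and dictionaries; the pattern, the example sentence, and the unit list here are illustrative assumptions only.

    ```python
    import re

    # Toy pattern: a KM value with its unit, e.g. "Km of 2.5 mM"
    KM_PATTERN = re.compile(
        r"K\s*m\s*(?:value)?\s*(?:of|=|was|is)?\s*"
        r"(\d+(?:\.\d+)?)\s*(mM|uM|µM|nM)",
        re.IGNORECASE)

    def extract_km(text):
        return [(float(v), unit) for v, unit in KM_PATTERN.findall(text)]

    print(extract_km("The enzyme displayed a Km of 2.5 mM at pH 7.4."))
    # -> [(2.5, 'mM')]
    ```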

  9. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

    The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.
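
    The SSA ingredient is the standard Gillespie direct method; one step is sketched below under an assumed propensity/stoichiometry interface. Repeated SSA runs indicate which states carry probability mass and hence which states the finite state projection should retain.

    ```python
    import numpy as np

    def ssa_step(state, propensities, stoichiometry, rng):
        """One Gillespie direct-method step; returns (new state, waiting time)."""
        a = propensities(state)              # propensity a_j(x) of each reaction
        a0 = a.sum()
        if a0 == 0:
            return state, np.inf             # no reaction can fire
        tau = rng.exponential(1.0 / a0)      # exponential waiting time
        j = rng.choice(len(a), p=a / a0)     # pick which reaction fires
        return state + stoichiometry[j], tau
    ```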

  10. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    PubMed

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
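
    The core ELM training loop is simple enough to sketch: input weights are drawn randomly and never trained, and only the output weights are fit by linear least squares. The snippet omits the paper's receptive-field sparsification; the tanh units and one-hot targets are assumptions.

    ```python
    import numpy as np

    def train_elm(X, Y, n_hidden=1000, seed=0):
        """X: (samples, features); Y: one-hot targets. Returns fixed W and fitted beta."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
        H = np.tanh(X @ W)                               # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # linear fit of output weights
        return W, beta

    def predict_elm(X, W, beta):
        return np.argmax(np.tanh(X @ W) @ beta, axis=1)  # class with the largest output
    ```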

  11. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
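
    A hedged sketch of the one-step estimate: represent the SO2-free radiance variability with leading principal components and fit them jointly with the SO2 Jacobian, reading the column density off the Jacobian coefficient. The matrix shapes and the plain least-squares fit are assumptions, not the operational OMI code.

    ```python
    import numpy as np

    def retrieve_so2(spectrum, background_spectra, jacobian, n_pc=20):
        """Fit measured radiance with SO2-free PCs plus the SO2 Jacobian in one step."""
        mu = background_spectra.mean(axis=0)
        _, _, Vt = np.linalg.svd(background_spectra - mu, full_matrices=False)
        basis = np.column_stack([Vt[:n_pc].T, jacobian])   # PCs span the SO2-free variability
        coef, *_ = np.linalg.lstsq(basis, spectrum - mu, rcond=None)
        return coef[-1]                                    # Jacobian coefficient = column density
    ```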

  12. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  13. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  14. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    SciTech Connect

    Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.

    2015-10-15

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose a method based on a Laplace transform exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.

  15. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The simultaneous solution of the velocity and pressure fields is achieved by means of a projection method. The numerical solution of the set of linear equations resulting from the discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.

  16. A fast, magnetics-free flux surface estimation and q-profile reconstruction algorithm for feedback control of plasma profiles

    NASA Astrophysics Data System (ADS)

    Hommen, G.; de Baar, M.; Citrin, J.; de Blank, H. J.; Voorhoeve, R. J.; de Bock, M. F. M.; Steinbuch, M.; contributors, JET-EFDA

    2013-02-01

    The flux surfaces' layout and the magnetic winding number q are important quantities for the performance and stability of tokamak plasmas. Normally, these quantities are iteratively derived by solving the plasma equilibrium for the poloidal and toroidal flux. In this work, a fast, non-iterative and magnetics-free numerical method is proposed to estimate the shape of the flux surfaces by an inward propagation of the plasma boundary shape, as can be determined for example by the optical boundary reconstruction described in Hommen (2010 Rev. Sci. Instrum. 81 113504), toward the magnetic axis, as can be determined independently with the motional Stark effect (MSE) diagnostic. Flux surfaces are estimated for various plasma regimes in the ITER, JET and MAST tokamaks and are compared with results of CRONOS reconstructions and simulations, showing agreement to within 1% of the minor radius for almost all treated plasmas. The availability of the flux surface shapes, combined with the pitch angles measured using MSE, allows the reconstruction of the plasma q-profile by evaluating the contour integral of the magnetic field pitch angle over the flux surfaces. This method provides a direct and exact measure of the q-profile for arbitrary flux surface shapes which does not rely on magnetic measurements. Results based on estimated flux surface shapes show agreement with CRONOS q-profiles of better than 10%. The impact on the q-profile of the shape of the flux surfaces, particularly the profiles of elongation and Shafranov shift, and of offsets in the plasma boundary and the magnetic axis is assessed. OFIT+ was conceived for real-time plasma profile control experiments and advanced tokamak operation, and quickly and reliably provides the mapping of actuators and sensors to the minor radius as well as the plasma q-profile, independent of magnetic measurements.

  17. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper the degraded video with blur and noise is enhanced by using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and then the estimation procedure is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet transform domain and then improve our estimation using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-Files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the code written in MATLAB is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the complete running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high load of information in images and processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The code written for our video deblurring algorithm in MATLAB contains eight "for" loops. These eight "for" loops account for 60% of the total execution time of the entire program and so the runtime should be

  18. Fast Decision Algorithms in Low-Power Embedded Processors for Quality-of-Service Based Connectivity of Mobile Sensors in Heterogeneous Wireless Sensor Networks

    PubMed Central

    Jaraíz-Simón, María D.; Gómez-Pulido, Juan A.; Vega-Rodríguez, Miguel A.; Sánchez-Pérez, Juan M.

    2012-01-01

    When a mobile wireless sensor is moving along heterogeneous wireless sensor networks, it can be under the coverage of more than one network many times. In these situations, the Vertical Handoff process can happen, where the mobile sensor decides to change its connection from a network to the best network among the available ones according to their quality of service characteristics. A fitness function is used for the handoff decision, being desirable to minimize it. This is an optimization problem which consists of the adjustment of a set of weights for the quality of service. Solving this problem efficiently is relevant to heterogeneous wireless sensor networks in many advanced applications. Numerous works can be found in the literature dealing with the vertical handoff decision, although they all suffer from the same shortfall: a non-comparable efficiency. Therefore, the aim of this work is twofold: first, to develop a fast decision algorithm that explores the entire space of possible combinations of weights, searching for the one that minimizes the fitness function; and second, to design and implement a system on chip architecture based on reconfigurable hardware and embedded processors to achieve several goals necessary for competitive mobile terminals: good performance, low power consumption, low economic cost, and small area integration. PMID:22438728

  19. Fast decision algorithms in low-power embedded processors for quality-of-service based connectivity of mobile sensors in heterogeneous wireless sensor networks.

    PubMed

    Jaraíz-Simón, María D; Gómez-Pulido, Juan A; Vega-Rodríguez, Miguel A; Sánchez-Pérez, Juan M

    2012-01-01

    When a mobile wireless sensor is moving along heterogeneous wireless sensor networks, it can be under the coverage of more than one network many times. In these situations, the Vertical Handoff process can happen, where the mobile sensor decides to change its connection from a network to the best network among the available ones according to their quality of service characteristics. A fitness function is used for the handoff decision, being desirable to minimize it. This is an optimization problem which consists of the adjustment of a set of weights for the quality of service. Solving this problem efficiently is relevant to heterogeneous wireless sensor networks in many advanced applications. Numerous works can be found in the literature dealing with the vertical handoff decision, although they all suffer from the same shortfall: a non-comparable efficiency. Therefore, the aim of this work is twofold: first, to develop a fast decision algorithm that explores the entire space of possible combinations of weights, searching for the one that minimizes the fitness function; and second, to design and implement a system on chip architecture based on reconfigurable hardware and embedded processors to achieve several goals necessary for competitive mobile terminals: good performance, low power consumption, low economic cost, and small area integration.
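
    A toy version of the exhaustive decision search described above: enumerate discretised weight combinations that sum to one and keep the one minimising the fitness function. The linear fitness form and the 0.1 grid step are assumptions.

    ```python
    import itertools

    def best_weights(qos_costs, step=0.1):
        """Exhaustive scan of QoS weight vectors summing to one; returns the minimiser."""
        levels = [round(i * step, 10) for i in range(int(1 / step) + 1)]
        best, best_f = None, float("inf")
        for w in itertools.product(levels, repeat=len(qos_costs)):
            if abs(sum(w) - 1.0) > 1e-9:
                continue                                      # keep only valid weightings
            f = sum(wi * qi for wi, qi in zip(w, qos_costs))  # assumed linear fitness
            if f < best_f:
                best, best_f = w, f
        return best, best_f

    print(best_weights([0.3, 0.8, 0.5]))   # -> ((1.0, 0.0, 0.0), 0.3)
    ```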

  20. A call center primer.

    PubMed

    Durr, W

    1998-01-01

    Call centers are strategically and tactically important to many industries, including the healthcare industry. Call centers play a key role in acquiring and retaining customers. The ability to deliver high-quality and timely customer service without much expense is the basis for the proliferation and expansion of call centers. Call centers are unique blends of people and technology, where performance depends on combining appropriate technology tools with sound management practices built on key operational data. While the technology is fascinating, the people working in call centers and the skill of the management team ultimately make a difference to their companies.

  1. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  2. The Power of CALL.

    ERIC Educational Resources Information Center

    Pennington, Martha C., Ed.

    The book provides an overview of Computer-Assisted Language Learning (CALL) written by specialists in specific areas of electronic media. Its nine chapters include: "The Power of the Computer in Language Education" (Martha C. Pennington); "Elements of CALL Methodology: Development, Evaluation, and Implementation" (Philip L. Hubbard); "Second…

  3. Compare Gene Calls

    SciTech Connect

    Ecale Zhou, Carol L.

    2016-07-05

    Compare Gene Calls (CGC) is a Python code used for combining and comparing gene calls from any number of gene callers. A gene caller is a computer program that predicts the extents of open reading frames within the genomes of biological organisms.

  4. Callings and Organizational Behavior

    ERIC Educational Resources Information Center

    Elangovan, A. R.; Pinder, Craig C.; McLean, Murdith

    2010-01-01

    Current literature on careers, social identity and meaning in work tends to understate the multiplicity, historical significance, and nuances of the concept of calling(s). In this article, we trace the evolution of the concept from its religious roots into secular realms and develop a typology of interpretations using occupation and religious…

  5. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  6. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Experimental Validation.

    PubMed

    Hyodo, Tomoko; Hori, Masatoshi; Lamb, Peter; Sasaki, Kosuke; Wakayama, Tetsuya; Chiba, Yasutaka; Mochizuki, Teruhito; Murakami, Takamichi

    2017-02-01

    Purpose To assess the ability of fast-kilovolt-peak switching dual-energy computed tomography (CT) by using the multimaterial decomposition (MMD) algorithm to quantify liver fat. Materials and Methods Fifteen syringes that contained various proportions of swine liver obtained from an abattoir, lard in food products, and iron (saccharated ferric oxide) were prepared. Approval of this study by the animal care and use committee was not required. Solid cylindrical phantoms that consisted of a polyurethane epoxy resin 20 and 30 cm in diameter that held the syringes were scanned with dual- and single-energy 64-section multidetector CT. CT attenuation on single-energy CT images (in Hounsfield units) and MMD-derived fat volume fraction (FVF; dual-energy CT FVF) were obtained for each syringe, as were magnetic resonance (MR) spectroscopy measurements by using a 1.5-T imager (fat fraction [FF] of MR spectroscopy). Reference values of FVF (FVFref) were determined by using the Soxhlet method. Iron concentrations were determined by inductively coupled plasma optical emission spectroscopy and divided into three ranges (0 mg per 100 g, 48.1-55.9 mg per 100 g, and 92.6-103.0 mg per 100 g). Statistical analysis included Spearman rank correlation and analysis of covariance. Results Both dual-energy CT FVF (ρ = 0.97; P < .001) and CT attenuation on single-energy CT images (ρ = -0.97; P < .001) correlated significantly with FVFref for phantoms without iron. Phantom size had a significant effect on dual-energy CT FVF after controlling for FVFref (P < .001). The regression slopes for CT attenuation on single-energy CT images in 20- and 30-cm-diameter phantoms differed significantly (P = .015). In sections with higher iron concentrations, the linear coefficients of dual-energy CT FVF decreased and those of MR spectroscopy FF increased (P < .001). Conclusion Dual-energy CT FVF allows for direct quantification of fat content in units of volume percent. Dual-energy CT FVF was larger in 30

  7. Enhanced nurse call systems.

    PubMed

    2001-04-01

    This Evaluation focuses on high-end computerized nurse call systems--what we call enhanced systems. These are highly flexible systems that incorporate microprocessor and communications technologies to expand the capabilities of the nurse call function. Enhanced systems, which vary in configuration from one installation to the next, typically consist of a basic system that provides standard nurse call functionality and a combination of additional enhancements that provide the added functionality the facility desires. In this study, we examine the features that distinguish enhanced nurse call systems from nonenhanced systems, focusing on their application and benefit to healthcare facilities. We evaluated seven systems to determine how well they help (1) improve patient care, as well as increase satisfaction with the care provided, and (2) improve caregiver efficiency, as well as increase satisfaction with the work environment. We found that all systems meet these objectives, but not all systems perform equally well for all implementations. Our ratings will help facilities identify those systems that offer the most effective features for their intended use. The study also includes a Technology Management Guide to help readers (1) determine whether they'll benefit from the capabilities offered by enhanced systems and (2) target a system for purchase and equip the system for optimum performance and cost-effective operation.

  8. When lawyers call clinicians.

    PubMed

    Reid, William H

    2010-07-01

    Every psychiatrist, psychologist, and psychotherapist gets calls from attorneys from time to time, often with a request involving a patient. Patients sometimes ask their clinicians to become involved in their legal matters. Such calls and requests may sound straightforward, but they are often misleading, incomplete, or misunderstood. One should avoid being reflexively "helpful" when a lawyer calls or a patient makes such a special request. There may be no obligation to respond, or to respond immediately, although subpoenas must not be ignored; promptly contacting an appropriate supervisor, facility risk manager, malpractice insurance carrier, or one's own attorney is often the best course of action. Office staff such as secretaries and receptionists should also be trained and cautioned regarding the principles discussed here.

  9. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.

  10. Artificial Intelligence and CALL.

    ERIC Educational Resources Information Center

    Underwood, John H.

    The potential application of artificial intelligence (AI) to computer-assisted language learning (CALL) is explored. Two areas of AI that hold particular interest to those who deal with language meaning--knowledge representation and expert systems, and natural-language processing--are described and examples of each are presented. AI contribution…

  11. Wake-Up Call.

    ERIC Educational Resources Information Center

    Sartorius, Tara Cady

    2002-01-01

    Focuses on the artist, Laquita Thomson, whose inspiration are the stars and space. Discusses her series called, "Celestial Happenings: Stars Fell on Alabama." Describes one event that inspired an art work when a meteor crashed into an Alabama home. Includes lessons for various subject areas. (CMK)

  12. A Call for Professionalism.

    ERIC Educational Resources Information Center

    Shanker, Albert

    Albert Shanker, President of the American Federation of Teachers (AFT), speaks about the national testing of teachers and calls for the creation of a new and better national examination for new teachers. While members of the AFT have a few differences with some of the current reform proposals, the AFT in general supports the overwhelming majority…

  13. When Crises Call

    ERIC Educational Resources Information Center

    Kisch, Marian

    2012-01-01

    Natural disasters, as well as crises of the man-made variety, call on leaders of school districts to manage scenarios impossible to predict and for which no amount of training can adequately prepare. One thing all major crises hold in common is their far-reaching effects, which can run the gamut from personal safety and mental well-being to the…

  14. Fast clustering algorithm for large ECG data sets based on CS theory in combination with PCA and K-NN methods.

    PubMed

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2014-01-01

    Long-term recording of Electrocardiogram (ECG) signals plays an important role in health care systems for diagnostic and treatment purposes of heart diseases. Clustering and classification of the collected data are essential for detecting concealed information of P-QRS-T waves in long-term ECG recordings. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from huge energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm which can easily scan large ECG datasets to establish low-power long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Then, two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), followed by sorting the data using the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers, are applied to the proposed algorithm. We show that our algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods. The proposed algorithm outperforms existing algorithms by increasing classification accuracy by 11%. In addition, the proposed algorithm achieves classification accuracies for the K-NN and PNN classifiers, and a Receiver Operating Characteristics (ROC) area, of 99.98%, 99.83%, and 99.75%, respectively.
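
    The stages described map naturally onto a standard pipeline; the sketch below uses scikit-learn stand-ins (a Gaussian random projection for the CS-style sampling, PCA, then K-NN), with component counts chosen arbitrarily rather than taken from the paper.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.random_projection import GaussianRandomProjection
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    # X rows are fixed-length ECG segments; y holds beat/rhythm labels.
    pipeline = make_pipeline(
        GaussianRandomProjection(n_components=128, random_state=0),  # CS-style random sampling
        PCA(n_components=16),                                        # dimensionality reduction
        KNeighborsClassifier(n_neighbors=5),                         # classification stage
    )
    # pipeline.fit(X_train, y_train); pipeline.score(X_test, y_test)
    ```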

  15. The Rotated Speeded-Up Robust Features Algorithm (R-SURF) (CD-ROM)

    DTIC Science & Technology

    Weaknesses in the Fast Hessian detector utilized by the speeded-up robust features (SURF) algorithm are examined in this research. We evaluate the SURF algorithm to identify possible areas for improvement in its performance. An alternative to the SURF detector, called rotated SURF (R-SURF), is proposed and evaluated against the regular SURF detector. Performance testing shows that R-SURF outperforms the regular SURF detector when subject to image blurring.

  16. ALG: automated genotype calling of Luminex assays.

    PubMed

    Bourgey, Mathieu; Lariviere, Mathieu; Richer, Chantal; Sinnett, Daniel

    2011-05-06

    Single nucleotide polymorphisms (SNPs) are the most commonly used polymorphic markers in genetics studies. Among the different platforms for SNP genotyping, Luminex is one of the less exploited, mainly due to the lack of a robust (semi-automated and replicable) freely available genotype calling software. Here we describe a clustering algorithm that provides automated SNP calls for Luminex genotyping assays. We genotyped 3 SNPs in a cohort of 330 childhood leukemia patients, 200 parents of patients and 325 healthy individuals and used the Automated Luminex Genotyping (ALG) algorithm for SNP calling. ALG genotypes were called twice to test for reproducibility and were compared to sequencing data to test for accuracy. Globally, this analysis demonstrates the accuracy (99.6%) of the method, its reproducibility (99.8%) and the low rate of missed genotype calls (3.4%). The high efficiency of the method proves that ALG is a suitable alternative to the current commercial software. ALG is semi-automated, and provides numerical measures of confidence for each SNP called, as well as an effective graphical plot. Moreover, ALG can be used either through a graphical user interface, requiring no specific informatics knowledge, or through the command line with access to the open source code. The ALG software has been implemented in R and is freely available for non-commercial use either at http://alg.sourceforge.net or by request to mathieu.bourgey@umontreal.ca.

  17. Automated call tracking systems

    SciTech Connect

    Hardesty, C.

    1993-03-01

    User Services groups are on the front line with user support. We are the first to hear about problems. The speed, accuracy, and intelligence with which we respond determines the user's perception of our effectiveness and our commitment to quality and service. To keep pace with the complex changes at our sites, we must have tools to help build a knowledge base of solutions, a history base of our users, and a record of every problem encountered. Recently, I completed a survey of twenty sites similar to the National Energy Research Supercomputer Center (NERSC). This informal survey reveals that 27% of the sites use a paper system to log calls, 60% employ homegrown automated call tracking systems, and 13% use a vendor-supplied system. Fifty-four percent of those using homegrown systems are exploring the merits of switching to a vendor-supplied system. The purpose of this paper is to provide guidelines for evaluating a call tracking system. In addition, insights are provided to assist User Services groups in selecting a system that fits their needs.

  18. How Fast Is Fast?

    ERIC Educational Resources Information Center

    Korn, Abe

    1994-01-01

    Presents an activity that enables students to answer for themselves the question of how fast a body must travel before the nonrelativistic expression must be replaced with the correct relativistic expression by deciding on the accuracy required in describing the kinetic energy of a body. (ZWH)

  19. Multiple One-Dimensional Search (MODS) algorithm for fast optimization of laser-matter interaction by phase-only fs-laser pulse shaping

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Solis, J.

    2014-09-01

    In this work, we have developed and implemented a powerful search strategy for optimization of nonlinear optical effects by means of femtosecond pulse shaping, based on topological concepts derived from quantum control theory. Our algorithm [Multiple One-Dimensional Search (MODS)] is based on deterministic optimization of a single solution rather than pseudo-random optimization of entire populations as done by commonly used evolutionary algorithms. We have tested MODS against a genetic algorithm on a nontrivial problem consisting of optimizing the Kerr gating signal (self-interaction) of a shaped laser pulse in a detuned Michelson interferometer configuration. The obtained results show that our search method (MODS) strongly outperforms the genetic algorithm in terms of both convergence speed and quality of the solution. These findings demonstrate the applicability of concepts of quantum control theory to nonlinear laser-matter interaction problems, even in the presence of significant experimental noise.
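
    The deterministic single-solution strategy can be caricatured as cyclic one-dimensional searches: scan one pulse-shaping parameter over a grid while the others are held fixed, keep the best value, and move on. The interface below (a callable signal, box bounds, grid size) is an illustrative assumption, not the authors' implementation.

    ```python
    import numpy as np

    def mods(signal, x0, bounds, n_scan=32, n_cycles=3):
        """Cyclic 1D grid searches over the parameters of a shaped pulse."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_cycles):
            for i in range(x.size):                # one 1D search per parameter
                grid = np.linspace(bounds[i][0], bounds[i][1], n_scan)
                vals = []
                for g in grid:
                    x[i] = g
                    vals.append(signal(x))         # measured nonlinear signal (e.g. Kerr gate)
                x[i] = grid[int(np.argmax(vals))]  # keep the best setting found
        return x
    ```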

  20. A Fast Hermite Transform.

    PubMed

    Leibon, Gregory; Rockmore, Daniel N; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S

    2008-12-17

    We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed.
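
    For contrast with the fast transform, the naive approach builds the orthonormal Hermite functions by their three-term recurrence and projects the samples by quadrature, costing O(nN) per transform. The sketch below assumes uniformly spaced samples; it is a baseline, not the paper's algorithm.

    ```python
    import numpy as np

    def hermite_functions(x, n_max):
        """Orthonormal Hermite functions psi_0..psi_{n_max-1} via the three-term recurrence."""
        psi = np.zeros((n_max, len(x)))
        psi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
        if n_max > 1:
            psi[1] = np.sqrt(2.0) * x * psi[0]
        for n in range(1, n_max - 1):
            psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                          - np.sqrt(n / (n + 1)) * psi[n - 1])
        return psi

    def hermite_transform(f_samples, x, n_max):
        dx = x[1] - x[0]
        return hermite_functions(x, n_max) @ f_samples * dx   # quadrature projection
    ```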

  1. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer's and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) has been proposed. 844 hours of EEG were recorded from 23 pediatric patients consecutively, with 163 occurrences of seizures. Signals were collected from Children's Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-sec epochs of signals are represented as a third-order tensor in spatial, spectral and temporal spaces by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) on the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by storing data without deleting them. Finally, K-Nearest Neighbors (KNN) is used to classify the selected features. Results The results of running the algorithm on a standard dataset show that the algorithm is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average detection error rate of three errors in 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt. PMID:27672628

  2. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective method (it improves image quality) for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize WRMSE, thereby obtaining high quality quantized images with significantly less visual distortion than the MinMax algorithm.

  3. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will, thus, introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluating with the OCT images using the same gauge block step height reference. The method may be suitable for industrial applications to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  4. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data.

    PubMed

    Ekberg, Peter; Su, Rong; Chang, Ernest W; Yun, Seok Hyun; Mattsson, Lars

    2014-02-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will, thus, introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 μm when evaluating with the OCT images using the same gauge block step height reference. The method may be suitable for industrial applications to the rapid inspection of manufactured samples with high accuracy and robustness.

  5. Fast-earth: A global image caching architecture for fast access to remote-sensing data

    NASA Astrophysics Data System (ADS)

    Talbot, B. G.; Talbot, L. M.

    We introduce Fast-Earth, a novel server architecture that enables rapid access to remote sensing data. Fast-Earth subdivides a WGS-84 model of the earth into small 400 × 400 meter regions with fixed locations, called plats. The resulting 3,187,932,913 indexed plats are accessed with a rapid look-up algorithm. Whereas many traditional databases store large original images as a series by collection time, requiring long searches and slow access times for user queries, the Fast-Earth architecture enables rapid access. We have prototyped a system in conjunction with a Fast-Responder mobile app to demonstrate and evaluate the concepts. We found that new data could be indexed rapidly in about 10 minutes/terabyte, high-resolution images could be chipped in less than a second, and 250 kB image chips could be delivered over a 3G network in about 3 seconds. The prototype server implemented on a very small computer could handle 100 users, but the concept is scalable. Fast-Earth enables dramatic advances in rapid dissemination of remote sensing data for mobile platforms as well as desktop enterprises.
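
    The abstract does not specify the indexing scheme, but a back-of-envelope sketch of a fixed 400 m plat index might look like the following row-major layout on an equirectangular grid. Note this uniform layout yields more cells than the 3,187,932,913 plats quoted, since real 400 m cells would shrink in longitude toward the poles; everything here is an assumption for illustration.

    ```python
    CELL_M = 400.0
    M_PER_DEG = 111_320.0                   # approx. metres per degree of latitude

    ROWS = int(180 * M_PER_DEG / CELL_M)    # cells from pole to pole
    COLS = int(360 * M_PER_DEG / CELL_M)    # cells around the equator

    def plat_index(lat, lon):
        """Map a WGS-84 position to a row-major 400 m cell number (assumed layout)."""
        r = min(int((lat + 90.0) * M_PER_DEG / CELL_M), ROWS - 1)
        c = min(int((lon + 180.0) * M_PER_DEG / CELL_M), COLS - 1)
        return r * COLS + c
    ```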

  6. [Who is Called "Schizophrenic"?].

    PubMed

    Azorin, Jean-Michel; Kaladjian, Arthur; Besnier, Nathalie; Cermolacce, Michel

    2008-01-01

    Someone is called "schizophrenic" when suffering from a disorder first described in 1911 by the Swiss psychiatrist Eugen Bleuler in a book entitled "Dementia Praecox oder Gruppe der Schizophrenien". In this book, Bleuler proposes a two-sided approach: one centered on the disease, the other on the person. Bleuler's main contribution was to show the importance of the latter in determining clinical pictures and illness course, thus opening the way to more anthropological approaches to the schizophrenic self. Taking these approaches into account, at a time when naturalistic models of the illness prevail, is far from inconsequential for the effectiveness of our therapeutic actions.

  7. Fast wavelet estimation of weak biosignals.

    PubMed

    Causevic, Elvir; Morley, Robert E; Wickerhauser, M Victor; Jacquin, Arnaud E

    2005-06-01

    Wavelet-based signal processing has become commonplace in the signal processing community over the past decade and wavelet-based software tools and integrated circuits are now commercially available. One of the most important applications of wavelets is in removal of noise from signals, called denoising, accomplished by thresholding wavelet coefficients in order to separate signal from noise. Substantial work in this area was summarized by Donoho and colleagues at Stanford University, who developed a variety of algorithms for conventional denoising. However, conventional denoising fails for signals with low signal-to-noise ratio (SNR). Electrical signals acquired from the human body, called biosignals, commonly have below 0 dB SNR. Synchronous linear averaging of a large number of acquired data frames is universally used to increase the SNR of weak biosignals. A novel wavelet-based estimator is presented for fast estimation of such signals. The new estimation algorithm provides a faster rate of convergence to the underlying signal than linear averaging. The algorithm is implemented for processing of auditory brainstem response (ABR) and of auditory middle latency response (AMLR) signals. Experimental results with both simulated data and human subjects demonstrate that the novel wavelet estimator achieves superior performance to that of linear averaging.
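
    For context, conventional wavelet denoising by soft thresholding (the Donoho-style baseline mentioned above, not the paper's novel estimator) looks roughly like this, using the PyWavelets package:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold;
    the noise scale is estimated from the finest detail band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```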

  8. Fast electrostatic force calculation on parallel computer clusters

    SciTech Connect

    Kia, Amirali; Kim, Daejoong; Darve, Eric

    2008-10-01

    The fast multipole method (FMM) and smooth particle mesh Ewald (SPME) are well-known fast algorithms for evaluating long-range electrostatic interactions in molecular dynamics and other fields. FMM is a multi-scale method which reduces the computation cost by approximating the potential due to a group of particles at a large distance using a few multipole functions; it scales as O(N) for N particles. SPME is an O(N ln N) method based on interpolating the Fourier-space part of the Ewald sum and evaluating the resulting convolutions using the fast Fourier transform (FFT). Both algorithms suffer from relatively poor efficiency on large parallel machines, especially for mid-size problems of around hundreds of thousands of atoms. A variation of the FMM, called PWA, based on plane wave expansions is presented in this paper. A new parallelization strategy for PWA, which takes advantage of the specific form of this expansion, is described. Its parallel efficiency is compared with SPME through detailed time measurements on two different computer clusters.

  9. A fast algorithm for adaptive clutter rejection in ultrasound color flow imaging based on the first-order perturbation: a simulation study.

    PubMed

    You, Wei; Wang, Yuanyuan

    2010-08-01

    A fast clutter rejection method for ultrasound color flow imaging is proposed based on the first-order perturbation as an efficient implementation of eigen-decomposition. The proposed method is verified by simulated data. Results show that the proposed method can be adaptive to non-stationary clutter movements and its computational complexity is lower than that of the conventional eigen-based clutter rejection methods.
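
    A sketch of the conventional eigen-based clutter filter whose cost the paper's perturbation method reduces; the data layout and the single-clutter-eigenvector choice are assumptions.

```python
import numpy as np

def eigen_clutter_filter(X, n_clutter=1):
    """Classical eigen-based clutter rejection (the costly baseline):
    project slow-time ensembles away from the dominant eigenvectors
    of the correlation matrix.
    X: complex array (ensemble_length, n_samples); columns are
    slow-time signals from individual range gates."""
    R = X @ X.conj().T / X.shape[1]       # slow-time correlation matrix
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    Ec = V[:, -n_clutter:]                # dominant (clutter) subspace
    return X - Ec @ (Ec.conj().T @ X)     # orthogonal projection
```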

  10. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.

  11. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  12. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Clinical Evaluation.

    PubMed

    Hyodo, Tomoko; Yada, Norihisa; Hori, Masatoshi; Maenishi, Osamu; Lamb, Peter; Sasaki, Kosuke; Onoda, Minori; Kudo, Masatoshi; Mochizuki, Teruhito; Murakami, Takamichi

    2017-04-01

    Purpose: To assess the clinical accuracy and reproducibility of liver fat quantification with the multimaterial decomposition (MMD) algorithm, comparing the performance of MMD with that of magnetic resonance (MR) spectroscopy by using liver biopsy as the reference standard. Materials and Methods: This prospective study was approved by the institutional ethics committee, and patients provided written informed consent. Thirty-three patients suspected of having hepatic steatosis underwent non-contrast material-enhanced and triple-phase dynamic contrast-enhanced dual-energy computed tomography (CT) (80 and 140 kVp) and single-voxel proton MR spectroscopy within 30 days before liver biopsy. Percentage fat volume fraction (FVF) images were generated by using the MMD algorithm on dual-energy CT data to measure hepatic fat content. FVFs determined by using dual-energy CT and percentage fat fractions (FFs) determined by using MR spectroscopy were compared with histologic steatosis grade (0-3, as defined by the nonalcoholic fatty liver disease activity score system) by using Jonckheere-Terpstra trend tests and were compared with each other by using Bland-Altman analysis. Real non-contrast-enhanced FVFs were compared with triple-phase contrast-enhanced FVFs to determine the reproducibility of MMD by using Bland-Altman analyses. Results: Both dual-energy CT FVF and MR spectroscopy FF increased with increasing histologic steatosis grade (trend test, P < .001 for each). The Bland-Altman plot of dual-energy CT FVF and MR spectroscopy FF revealed a proportional bias, as indicated by the significant positive slope of the line regressing the difference on the average (P < .001). The 95% limits of agreement for the differences between real non-contrast-enhanced and contrast-enhanced FVFs were not greater than about 2%. Conclusion: The MMD algorithm quantifying hepatic fat in dual-energy CT images is accurate and reproducible across imaging phases. (©) RSNA, 2017

  13. Interior segment regrowth configurational-bias algorithm for the efficient sampling and fast relaxation of coarse-grained polyethylene and polyoxyethylene melts on a high coordination lattice

    NASA Astrophysics Data System (ADS)

    Rane, Sagar S.; Mattice, Wayne L.

    2005-06-01

    We demonstrate the application of a modified form of the configurational-bias algorithm for the simulation of chain molecules on the second-nearest-neighbor-diamond lattice. Using polyethylene and poly(ethylene-oxide) as model systems we show that the present configurational-bias algorithm can increase the speed of the equilibration by at least a factor of 2-3 or more as compared to the previous method of using a combination of single-bead and pivot moves along with the Metropolis sampling scheme [N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953)]. The increase in the speed of the equilibration is found to be dependent on the interactions (i.e., the polymer being simulated) and the molecular weight of the chains. In addition, other factors not considered, such as the density, would also have a significant effect. The algorithm is an extension of the conventional configurational-bias method adapted to the regrowth of interior segments of chain molecules. Appropriate biasing probabilities for the trial moves as outlined by Jain and de Pablo for the configurational-bias scheme of chain ends, suitably modified for the interior segments, are utilized [T. S. Jain and J. J. de Pablo, in Simulation Methods for Polymers, edited by M. Kotelyanskii and D. N. Theodorou (Marcel Dekker, New York, 2004), pp. 223-255]. The biasing scheme satisfies the condition of detailed balance and produces efficient sampling with the correct equilibrium probability distribution of states. The method of interior regrowth overcomes the limitations of the original configurational-bias scheme and allows for the simulation of polymers of higher molecular weight linear chains and ring polymers which lack chain ends.

  14. Fast Moreau envelope computation I

    NASA Astrophysics Data System (ADS)

    Lucet, Yves

    2006-11-01

    The present article summarizes the state-of-the-art algorithms to compute the discrete Moreau envelope, and presents a new linear-time algorithm, named NEP for NonExpansive Proximal mapping. Numerical comparisons between the NEP and two existing algorithms, the Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms, are performed. Worst-case time complexity, convergence results, and examples are included. The fast Moreau envelope algorithms first factor the Moreau envelope into several one-dimensional transforms and then reduce the brute-force quadratic worst-case time complexity to linear time by using either the equivalence with fast Legendre transform algorithms, the computation of a lower envelope of parabolas, or, in the convex case, the nonexpansiveness of the proximal mapping.
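
    The brute-force discrete Moreau envelope that these algorithms accelerate can be written in a few lines; a quadratic-time 1-D sketch:

```python
import numpy as np

def moreau_envelope(x, f, lam=1.0):
    """Brute-force O(n^2) discrete Moreau envelope on a 1-D grid:
    M(x_i) = min_j f(x_j) + (x_i - x_j)^2 / (2*lam).
    The fast algorithms discussed above (LLT, PE, NEP) reduce this
    quadratic cost to linear time.
    x: grid points (1-D array); f: sampled function values f(x_j)."""
    diff = x[:, None] - x[None, :]
    return np.min(f[None, :] + diff**2 / (2.0 * lam), axis=1)
```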

  15. Fast, efficient lossless data compression

    NASA Technical Reports Server (NTRS)

    Ross, Douglas

    1991-01-01

    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  16. Exploring single-sample SNP and INDEL calling with whole-genome de novo assembly

    PubMed Central

    Li, Heng

    2012-01-01

    Motivation: Eugene Myers in his string graph paper suggested that in a string graph or equivalently a unitig graph, any path spells a valid assembly. As a string/unitig graph also encodes every valid assembly of reads, such a graph, provided that it can be constructed correctly, is in fact a lossless representation of reads. In principle, every analysis based on whole-genome shotgun sequencing (WGS) data, such as SNP and insertion/deletion (INDEL) calling, can also be achieved with unitigs. Results: To explore the feasibility of using de novo assembly in the context of resequencing, we developed a de novo assembler, fermi, that assembles Illumina short reads into unitigs while preserving most of the information in the input reads. SNPs and INDELs can be called by mapping the unitigs against a reference genome. By applying the method on 35-fold human resequencing data, we showed that in comparison to the standard pipeline, our approach yields similar accuracy for SNP calling and better results for INDEL calling. It has higher sensitivity than other de novo assembly based methods for variant calling. Our work suggests that variant calling with de novo assembly can be a beneficial complement to the standard variant calling pipeline for whole-genome resequencing. In the methodological aspects, we propose FMD-index for forward–backward extension of DNA sequences, a fast algorithm for finding all super-maximal exact matches and one-pass construction of unitigs from an FMD-index. Availability: http://github.com/lh3/fermi Contact: hengli@broadinstitute.org PMID:22569178

  17. QuateXelero: An Accelerated Exact Network Motif Detection Algorithm

    PubMed Central

    Khakabimamaghani, Sahand; Sharafuddin, Iman; Dichter, Norbert; Koch, Ina; Masoudi-Nejad, Ali

    2013-01-01

    Finding motifs in biological, social, technological, and other types of networks has become a widespread method to gain more knowledge about these networks’ structure and function. However, this task is computationally very demanding, because it is closely tied to graph isomorphism, a problem in NP that is not yet known to be in P or NP-complete. Accordingly, this research endeavors to decrease the need to call the NAUTY isomorphism detection method, which is the most time-consuming step in many existing algorithms. The work provides an extremely fast motif detection algorithm called QuateXelero, built around a quaternary tree data structure. The proposed algorithm is based on the well-known ESU (FANMOD) motif detection algorithm. The results of experiments on some standard model networks confirm the overall superiority of the proposed algorithm, QuateXelero, compared with two of the fastest existing algorithms, G-Tries and Kavosh. QuateXelero is especially fast in constructing the central data structure of the algorithm from scratch based on the input network. PMID:23874498

  18. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
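
    A minimal standard ELM (random fixed hidden layer plus least-squares readout); the paper's receptive-field and backpropagation enhancements are omitted.

```python
import numpy as np

def train_elm(X, Y, n_hidden=1000, seed=0):
    """Basic ELM: random input weights, tanh hidden layer, and a
    least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # readout weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```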

  19. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
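
    An illustrative nearest-neighbor query with an exact k-d tree from SciPy; FLANN's randomized k-d forests and priority search k-means trees are approximate, higher-dimensional variants of this idea, which the sketch does not attempt to reproduce.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((10_000, 32))        # training vectors
queries = rng.random((5, 32))

tree = cKDTree(data)                   # build the index once
dists, idx = tree.query(queries, k=3)  # 3 nearest neighbors per query
print(idx)
```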

  20. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472
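
    The reduction to a 1-D root-finding problem f(q) = 0 can be sketched as below; the residual function here is a hypothetical stand-in (the actual f is built from normalized anomaly values along the profile), so only the bracketing-and-solving step is representative.

```python
from scipy.optimize import brentq

def f(q, g_origin=1.0, g_N=0.35, x_N=2.0, z=1.5):
    """Hypothetical residual comparing an observed anomaly ratio with a
    simple-body model of the form (z^2 / (x^2 + z^2))^q."""
    return g_N / g_origin - (z**2 / (x_N**2 + z**2))**q

q_hat = brentq(f, 0.1, 2.5)  # shape factor bracketed in (0.1, 2.5)
print(q_hat)
```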

  1. Leader selection for fast consensus in networks

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yang, Wen; Wang, Lin; Wang, Xiaofan

    2015-12-01

    This paper considers a leader-follower system with the aim of selecting an optimal leader so as to drive the remaining nodes to reach the desired consensus with the fastest convergence speed. An index called consensus centrality (CC) is proposed to quantify how fast a leader can guide the network to the desired consensus. The experimental results reveal strong similarities between the distributions of CC and degree in the network, suggesting that the suboptimal leader selected by maximum degree can closely approximate the optimal leader in heterogeneous networks. Combining degree-based k-shell decomposition with consensus centrality, a leader selection algorithm is proposed to reduce the computational complexity in large-scale networks. Finally, the convergence time of an equivalent discrete-time model is given to illustrate the properties of the suboptimal solutions.
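
    Selecting the degree-based suboptimal leader is a one-liner on a heterogeneous test network; a sketch with NetworkX (the full CC-based algorithm is not reproduced here).

```python
import networkx as nx

# Heterogeneous (scale-free) test network, as in the paper's setting.
G = nx.barabasi_albert_graph(n=200, m=3, seed=1)

# The paper finds the maximum-degree node approximates the optimal leader.
leader = max(G.degree, key=lambda pair: pair[1])[0]
print("suboptimal leader (max degree):", leader)
```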

  2. Fast Reactors

    NASA Astrophysics Data System (ADS)

    Esposito, S.; Pisanti, O.

    The following sections are included: * Elementary Considerations * The Integral Equation to the Neutron Distribution * The Critical Size for a Fast Reactor * Supercritical Reactors * Problems and Exercises

  3. FAST Conformational Searches by Balancing Exploration/Exploitation Trade-Offs.

    PubMed

    Zimmerman, Maxwell I; Bowman, Gregory R

    2015-12-08

    Molecular dynamics simulations are a powerful means of understanding conformational changes. However, it is still difficult to simulate biologically relevant time scales without the use of specialized supercomputers. Here, we introduce a goal-oriented sampling method, called fluctuation amplification of specific traits (FAST), for extending the capabilities of commodity hardware. This algorithm rapidly searches conformational space for structures with desired properties by balancing trade-offs between focused searches around promising solutions (exploitation) and trying novel solutions (exploration). FAST was inspired by the hypothesis that many physical properties have an overall gradient in conformational space, akin to the energetic gradients that are known to guide proteins to their folded states. For example, we expect that transitioning from a conformation with a small solvent-accessible surface area to one with a large surface area will require passing through a series of conformations with steadily increasing surface areas. We demonstrate that such gradients are common through retrospective analysis of existing Markov state models (MSMs). Then we design the FAST algorithm to exploit these gradients to find structures with desired properties by (1) recognizing and amplifying structural fluctuations along gradients that optimize a selected physical property whenever possible, (2) overcoming barriers that interrupt these overall gradients, and (3) rerouting to discover alternative paths when faced with insurmountable barriers. To test FAST, we compare its performance to other methods for three common types of problems: (1) identifying unexpected binding pockets, (2) discovering the preferred paths between specific structures, and (3) folding proteins. Our conservative estimate is that FAST outperforms conventional simulations and an adaptive sampling algorithm by at least an order of magnitude. Furthermore, FAST yields both the proper thermodynamics and

  4. An improved direction finding algorithm based on Toeplitz approximation.

    PubMed

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-07

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix and thus the DOA estimation performance degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.
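
    The Toeplitz approximation step, restoring Toeplitz structure by averaging each diagonal (which gives the least-squares-nearest Toeplitz matrix), can be sketched as:

```python
import numpy as np

def toeplitz_approx(R):
    """Nearest Toeplitz matrix in the least-squares sense, obtained by
    replacing each diagonal of R with its mean value."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        d = np.mean(np.diagonal(R, offset=k))  # average along diagonal k
        T += d * np.eye(n, k=k)
    return T
```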

  5. Optimized multilevel codebook searching algorithm for vector quantization in image coding

    NASA Astrophysics Data System (ADS)

    Cao, Hugh Q.; Li, Weiping

    1996-02-01

    An optimized multi-level codebook searching algorithm (MCS) for vector quantization is presented in this paper. Although it belongs to the category of fast nearest neighbor searching (FNNS) algorithms for vector quantization, the MCS algorithm is not a variation of any existing FNNS algorithm (such as the k-d tree searching algorithm, the partial-distance searching algorithm, or the triangle inequality searching algorithm). A multi-level search theory has been introduced. The problem of implementing this theory has been solved by a specially defined irregular tree structure which can be built from a training set. This irregular tree structure is different from any of the tree structures used in TSVQ, pruned-tree VQ, or quadtree VQ. Strictly speaking, it cannot be called a tree structure, since it allows one node to have more than one set of parents; it is a directed graph. This is the essential difference between the MCS algorithm and other TSVQ algorithms, and it ensures better performance. An efficient design procedure has been given to find the optimized irregular tree for a practical source. The simulation results of applying the MCS algorithm to image VQ show that this algorithm can reduce the searching complexity to less than 3% of exhaustive-search vector quantization (ESVQ) (4096 codevectors and 16 dimensions) while introducing negligible error (0.064 dB degradation from ESVQ). Simulation results also show that the searching complexity increases nearly linearly with bit rate.
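
    For contrast, the partial-distance search mentioned above as an existing FNNS baseline (not the MCS algorithm itself) can be sketched as:

```python
import numpy as np

def partial_distance_search(x, codebook):
    """Baseline fast NN search for VQ: abandon a codevector as soon as
    its partial squared distance exceeds the best distance so far."""
    best_idx, best_d = -1, np.inf
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:          # early exit: cannot beat the best
                break
        else:
            best_idx, best_d = i, d  # completed the sum: new best match
    return best_idx, best_d
```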

  6. Fast CRCs

    DTIC Science & Technology

    2009-10-01

    CRC implementation in software is desirable because many computers do not have hardware support for CRC computation. Index terms: fast CRC, low-complexity CRC, checksum, error-detection code, Hamming code, period of polynomial, fast software implementation.

  7. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multi-spectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain sensitivity similar to that of the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally improve detection sensitivity in experimental optoacoustic measurements.

  8. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  9. Fast Steerable Principal Component Analysis.

    PubMed

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-03-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.

  10. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction

    PubMed Central

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  11. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  12. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    The constraint-force algorithm is a fast, efficient, parallel-computation algorithm for solving the forward dynamics problem of a multibody system such as a robot arm or vehicle. It solves the problem in minimum time proportional to log(N) by using an optimal number of processors proportional to N, where N is the number of dynamical degrees of freedom; in this sense, the constraint-force algorithm is both a time-optimal and a processor-optimal parallel-processing algorithm.

  13. Fast polyhedral cell sorting for interactive rendering of unstructured grids

    SciTech Connect

    Combra, J; Klosowski, J T; Max, N; Silva, C T; Williams, P L

    1998-10-30

    Direct volume rendering based on projective methods works by projecting, in visibility order, the polyhedral cells of a mesh onto the image plane, and incrementally compositing each cell's color and opacity into the final image. Crucial to this method is the computation of a visibility ordering of the cells. If the mesh is "well-behaved" (acyclic and convex), then the MPVO method of Williams provides a very fast sorting algorithm; however, this method only computes an approximate ordering for general datasets, resulting in visual artifacts when rendered. A recent method of Silva et al. removed the assumption that the mesh is convex, by means of a sweep algorithm used in conjunction with the MPVO method; their algorithm is substantially faster than previous exact methods for general meshes. In this paper we propose a new technique, called BSP-XMPVO, based on a fast and simple way of using binary space partitions on the boundary elements of the mesh to augment the ordering produced by MPVO. Our results are shown to be orders of magnitude better than previous exact methods of sorting cells.

  14. CALL Essentials: Principles and Practice in CALL Classrooms

    ERIC Educational Resources Information Center

    Egbert, Joy

    2005-01-01

    Computers and the Internet offer innovative teachers exciting ways to enhance their pedagogy and capture their students' attention. These technologies have created a growing field of inquiry, computer-assisted language learning (CALL). As new technologies have emerged, teaching professionals have adapted them to support teachers and learners in…

  15. Fast Parallel Algorithms for Graphs and Networks

    DTIC Science & Technology

    1987-12-01

  16. Fast Array Algorithms for Structured Matrices

    DTIC Science & Technology

    1989-06-01

  17. Fast Algorithms for Hybrid Control System Design

    DTIC Science & Technology

    2007-11-02

    This report applies modular neural networks, with mixture density parameter estimation, to modelling the behaviour of a fossil-fuel-burning electric power generating plant; the experiments use sets of input data collected from such a plant.

  18. Fast parallel algorithm for CT image reconstruction.

    PubMed

    Flores, Liubov A; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2012-01-01

    In X-ray computed tomography (CT), X-rays are used to obtain the projection data needed to generate an image of the inside of an object. The image can be generated with different techniques. Iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions and from a small number of projections. Their use may be important in portable scanners for their functionality in emergency situations. However, in practice, these methods are not widely used due to the high computational cost of their implementation. In this work we analyze iterative parallel image reconstruction with the Portable, Extensible Toolkit for Scientific Computation (PETSc).
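
    A serial sketch of one classic iterative reconstruction scheme (Kaczmarz/ART), as background for the parallel iterative methods discussed; the dense projection-matrix layout is an assumption made for brevity.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=10):
    """Kaczmarz/ART iteration: cyclically project the estimate onto the
    hyperplane of each measurement row. A is the projection matrix,
    b the measured projection data."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += (b[i] - ai @ x) / denom * ai  # hyperplane projection
    return x
```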

  19. Fast, Distributed Algorithms in Deep Networks

    DTIC Science & Technology

    2016-05-11

  20. Calling by concluding sentinels: coordinating cooperation or revealing risk?

    PubMed

    Hollén, Linda I; Bell, Matthew B V; Russell, Alexis; Niven, Fraser; Ridley, Amanda R; Radford, Andrew N

    2011-01-01

    Efficient cooperation requires effective coordination of individual contributions to the cooperative behaviour. Most social birds and mammals involved in cooperation produce a range of vocalisations, which may be important in regulating both individual contributions and the combined group effort. Here we investigate the role of a specific call in regulating cooperative sentinel behaviour in pied babblers (Turdoides bicolor). 'Fast-rate chuck' calls are often given by sentinels as they finish guard bouts and may potentially coordinate the rotation of individuals as sentinels, minimising time without a sentinel, or may signal the presence or absence of predators, regulating the onset of the subsequent sentinel bout. We ask (i) when fast-rate chuck calls are given and (ii) what effect they have on the interval between sentinel bouts. Contrary to expectation, we find little evidence that these calls are involved in regulating the pied babbler sentinel system: observations revealed that their utterance is influenced only marginally by wind conditions and not at all by habitat, while observations and experimental playback showed that the giving of these calls has no effect on inter-bout interval. We conclude that pied babblers do not seem to call at the end of a sentinel bout to maximise the efficiency of this cooperative act, but may use vocalisations at this stage to influence more individually driven behaviours.

  1. Fast valve

    DOEpatents

    Van Dyke, W.J.

    1992-04-07

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate wedge-shaped with O-ring sealed faces to provide sealing contact without metal-to-metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast air-acting valve from having a harsh closing. 4 figs.

  2. Fast valve

    DOEpatents

    Van Dyke, William J.

    1992-01-01

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate wedge-shaped with O-ring sealed faces to provide sealing contact without metal-to-metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast air-acting valve from having a harsh closing.

  3. A New Algorithm Using the Non-dominated Tree to improve Non-dominated Sorting.

    PubMed

    Gustavsson, Patrik; Syberfeldt, Anna

    2017-01-19

    Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback applies also to other non-dominating sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS) which works well on a limited number of objectives but deteriorates when the number of objectives grows. This paper presents a new, more efficient, algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the paper, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
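
    For reference, the FNS baseline discussed above can be sketched as follows (minimization is assumed for all objectives):

```python
import numpy as np

def fast_nondominated_sort(F):
    """Deb's Fast Non-dominated Sort. F is an (n_points, n_objectives)
    array, smaller is better. Returns a list of fronts (index lists)."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]  # points each solution dominates
    dom_count = np.zeros(n, dtype=int)     # how many points dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                dominated_by[p].append(q)
            elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)            # first Pareto front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]
```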

  4. Acoustic signal detection of manatee calls

    NASA Astrophysics Data System (ADS)

    Niezrecki, Christopher; Phillips, Richard; Meyer, Michael; Beusse, Diedrich O.

    2003-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of a growing number of collisions with boats. A system to warn boaters of the presence of manatees, which can signal to boaters that manatees are present in the immediate vicinity, could potentially reduce these boat collisions. In order to identify the presence of manatees, acoustic methods are employed. Within this paper, three different detection algorithms are used to detect the calls of the West Indian manatee. The detection systems are tested in the laboratory using simulated manatee vocalizations from an audio compact disc. The detection method that provides the best overall performance is able to correctly identify approximately 96% of the manatee vocalizations. However, the system also results in a false positive rate of approximately 16%. The results of this work may ultimately lead to the development of a manatee warning system that can warn boaters of the presence of manatees.
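
    One plausible baseline detector of this kind is normalized cross-correlation against a reference call template; a sketch (not necessarily one of the paper's three algorithms):

```python
import numpy as np

def detect_calls(audio, template, fs, threshold=0.6):
    """Flag frames whose normalized correlation with a reference call
    exceeds a threshold; returns detection times in seconds."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    detections = []
    step = len(t) // 2                      # half-overlapping frames
    for start in range(0, len(audio) - len(t), step):
        frame = audio[start:start + len(t)]
        f = (frame - frame.mean()) / (frame.std() + 1e-12)
        score = float(np.dot(f, t)) / len(t)  # normalized correlation
        if score > threshold:
            detections.append(start / fs)
    return detections
```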

  5. An Evaluation Framework for CALL

    ERIC Educational Resources Information Center

    McMurry, Benjamin L.; Williams, David Dwayne; Rich, Peter J.; Hartshorn, K. James

    2016-01-01

    Searching prestigious Computer-assisted Language Learning (CALL) journals for references to key publications and authors in the field of evaluation yields a short list. The "American Journal of Evaluation"--the flagship journal of the American Evaluation Association--is only cited once in both the "CALICO Journal and Language…

  6. Close Call: Breaking the Rules.

    ERIC Educational Resources Information Center

    Journal of Adventure Education and Outdoor Leadership, 1993

    1993-01-01

    Contrary to a rule to never teach students to lead climb, an instructor taught several youth to lead climb at a parent's request. These students planned to pursue rock climbing on their own after they left school, and preparing them was deemed a safety precaution. Analysis of this "close call" offers guidelines for introducing students…

  7. Formative Considerations Using Integrative CALL.

    ERIC Educational Resources Information Center

    Callahan, Philip; Shaver, Peter

    2001-01-01

    Addresses technical and learning issues relating to a formative implementation of a computer assisted language learning (CALL) browser-based intermediate Russian program. Instruction took place through a distance education implementation and in a grouped classroom using a local-area network. Learners indicated the software was clear, motivating,…

  8. On FastMap and the convex hull of multivariate data: toward fast and robust dimension reduction.

    PubMed

    Ostrouchov, George; Samatova, Nagiza F

    2005-08-01

    FastMap is a dimension reduction technique that operates on distances between objects. Although only distances are used, implicitly the technique assumes that the objects are points in a p-dimensional Euclidean space. It selects a sequence of k ≤ p orthogonal axes defined by distant pairs of points (called pivots) and computes the projection of the points onto the orthogonal axes. We show that FastMap uses only the outer envelope of a data set. Pivots are taken from the faces, usually vertices, of the convex hull of the data points in the original implicit Euclidean space. This provides a bridge to results in robust statistics, where the convex hull is used as a tool in multivariate outlier detection and in robust estimation methods. The connection sheds new light on the properties of FastMap, particularly its sensitivity to outliers, and provides an opportunity for a new class of dimension reduction algorithms, RobustMaps, that retain the speed of FastMap and exploit ideas in robust statistics.
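
    The pivot selection and projection at the heart of FastMap, for points given explicitly in Euclidean space, can be sketched as:

```python
import numpy as np

def fastmap_axis(X):
    """One FastMap iteration: pick a distant pivot pair (a heuristic that
    lands on the data's outer envelope, as the paper shows) and project
    every point onto the pivot axis via the cosine law."""
    d = lambda i, j: np.linalg.norm(X[i] - X[j])
    a = 0
    b = max(range(len(X)), key=lambda j: d(a, j))  # farthest from a
    a = max(range(len(X)), key=lambda j: d(b, j))  # farthest from b
    dab = d(a, b)
    coords = np.array([(d(a, i)**2 + dab**2 - d(b, i)**2) / (2 * dab)
                       for i in range(len(X))])
    return coords, (a, b)
```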

  9. Project FAST.

    ERIC Educational Resources Information Center

    Essexville-Hampton Public Schools, MI.

    Described are components of Project FAST (Functional Analysis Systems Training) a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…

  10. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT

  11. Call for improving air quality

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2013-01-01

    The European Environmental Bureau (EEB), a federation of citizen organizations, has called for stricter policies in Europe to protect human health and the environment. "Air pollution emanates from sources all around us, be they cars, industrial plants, shipping, agriculture, or waste. The [European Union] must propose ambitious legislation to address all of these sources if it is to tackle the grave public health consequences of air pollution," EEB secretary general Jeremy Wates said on 8 January.

  12. Leveraging Call Center Logs for Customer Behavior Prediction

    NASA Astrophysics Data System (ADS)

    Parvathy, Anju G.; Vasudevan, Bintu G.; Kumar, Abhishek; Balakrishnan, Rajesh

    Most major businesses use business process outsourcing for performing a process or a part of a process, including financial services like mortgage processing, loan origination, finance and accounting, and transaction processing. Call centers are used for the purpose of receiving and transmitting a large volume of requests through outbound and inbound calls to customers on behalf of a business. In this paper we deal specifically with call center notes from banks. Banks as financial institutions provide loans to non-financial businesses and individuals. Their call centers act as the nuclei of their client service operations and log the transactions between the customer and the bank. This crucial conversational information can be exploited to predict a customer’s behavior, which will in turn help these businesses decide on the next action to be taken. Thus the banks save considerable time and effort in tracking delinquent customers, minimizing subsequent defaults. The majority of the time the call center notes are very concise and brief, and often the notes are misspelled and use many domain-specific acronyms. In this paper we introduce a novel domain-specific spelling correction algorithm which corrects the misspelled words in the call center logs to meaningful ones. We also discuss a procedure that builds behavioral history sequences for the customers by categorizing the logs into one of the predefined behavioral states. We then describe a pattern-based predictive algorithm that uses temporal behavioral patterns mined from these sequences to predict the customer’s next behavioral state.
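
    A toy version of domain-specific spelling correction, snapping tokens to a small banking lexicon by string similarity; the lexicon and the similarity cutoff are illustrative assumptions, not the paper's algorithm (which also expands domain acronyms).

```python
import difflib

DOMAIN_LEXICON = ["delinquent", "mortgage", "payment", "customer",
                  "foreclosure", "origination"]   # illustrative terms

def correct_token(token, lexicon=DOMAIN_LEXICON, cutoff=0.8):
    """Snap a token to its closest lexicon entry if similar enough;
    otherwise return it unchanged."""
    match = difflib.get_close_matches(token.lower(), lexicon,
                                      n=1, cutoff=cutoff)
    return match[0] if match else token

print(correct_token("delinqent"))   # -> "delinquent"
```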

  13. Uni10: an open-source library for tensor network algorithms

    NASA Astrophysics Data System (ADS)

    Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung

    2015-09-01

    We present an object-oriented open-source library for developing tensor network algorithms written in C++ called Uni10. With Uni10, users can build a symmetric tensor from a collection of bonds, while the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and access a block associated with a particular quantum number. Furthermore a network class is used to describe arbitrary tensor network structure and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of the classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited to exploring and rapidly prototyping novel tensor network algorithms and to implementing highly efficient codes for existing algorithms.
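
    A conceptual NumPy illustration (not Uni10's C++ API) of the elementary operation such a library schedules and performs: contracting two tensors over a shared bond.

```python
import numpy as np

d = 4                                  # bond dimension
rng = np.random.default_rng(0)
A = rng.random((d, d, d))              # indices (i, j, k)
B = rng.random((d, d, d))              # indices (k, l, m)
C = np.einsum("ijk,klm->ijlm", A, B)   # sum over the shared bond k
print(C.shape)                         # (4, 4, 4, 4)
```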

  14. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    PubMed

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  15. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    PubMed Central

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507

  16. OVarCall: Bayesian Mutation Calling Method Utilizing Overlapping Paired-End Reads.

    PubMed

    Moriyama, Takuya; Shiraishi, Yuichi; Chiba, Kenichi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-03-01

    Detection of somatic mutations from tumor and matched normal sequencing data has become a standard approach in cancer research. Although a number of mutation callers have been developed, it is still difficult to detect mutations with low allele frequency even in exome sequencing. We expect that overlapping paired-end read information is effective for this purpose, but no mutation caller has statistically modeled overlapping information in a proper form for exome sequence data. Here, we develop a Bayesian hierarchical method, OVarCall (https://github.com/takumorizo/OVarCall), where overlapping paired-end read information improves the accuracy of low allele frequency mutation detection. Firstly, we construct two generative models: one for reads with somatic variants generated from tumor cells and the other for reads that do not contain somatic variants but potentially include sequencing errors. Secondly, we calculate the marginal likelihood of each model using a variational Bayesian algorithm to compute the Bayes factor for the detection of somatic mutations. We empirically evaluated the performance of OVarCall and confirmed that it performs better than other existing methods.
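
    The decision rule reduces to a Bayes factor between the two generative models; a minimal sketch (illustrative numbers and threshold, not OVarCall's actual models):

        import math

        def bayes_factor(log_ml_variant, log_ml_error):
            # Ratio of (variational) marginal likelihoods of the read data
            # under the "somatic variant" and "sequencing error" models.
            return math.exp(log_ml_variant - log_ml_error)

        if bayes_factor(-120.4, -131.9) > 10.0:  # illustrative values
            print("call a somatic mutation at this site")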

  17. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for in-flight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with a correct treatment of the statistics, without discarding data. The algorithm's performance is examined using simulated data and compared with previous algorithms.
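
    The attitude-independent trick is that the magnitude of the bias-corrected measurement should match the model field magnitude, which requires no attitude knowledge. A minimal sketch of that idea (a generic nonlinear least-squares fit, not the paper's estimator):

        import numpy as np
        from scipy.optimize import least_squares

        def estimate_bias(B_meas, H_norm):
            # B_meas: (N, 3) body-frame magnetometer readings.
            # H_norm: (N,) magnitudes of the reference field model.
            def residual(b):
                return np.linalg.norm(B_meas - b, axis=1) - H_norm
            return least_squares(residual, x0=np.zeros(3)).x  # estimated bias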

  18. F2Dock: Fast Fourier Protein-Protein Docking

    PubMed Central

    Bajaj, Chandrajit; Chowdhury, Rezaul; Siddavanahalli, Vinay

    2009-01-01

    The functions of proteins are often realized through their mutual interactions. Determining a relative transformation for a pair of proteins and their conformations that form a stable complex, reproducible in nature, is known as docking. It is an important step in drug design, structure determination, and understanding function and structure relationships. In this paper we extend our non-uniform fast Fourier transform docking algorithm to include an adaptive search phase (both translational and rotational) and thereby speed up its execution. We have also implemented a multithreaded version of the adaptive docking algorithm for even faster execution on multicore machines. We call this protein-protein docking code F2Dock (F2 = Fast Fourier). We have calibrated F2Dock based on an extensive experimental study on a list of benchmark complexes and conclude that F2Dock works very well in practice. Though all docking results reported in this paper use shape complementarity and Coulombic potential based scores only, F2Dock is structured to incorporate the Lennard-Jones potential and to re-rank docking solutions based on desolvation energy. PMID:21071796
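
    The Fourier step at the heart of grid-based docking scores every translation of the ligand at once as a cross-correlation; a minimal sketch using shape-complementarity grids only (F2Dock's actual scoring and adaptive search are richer):

        import numpy as np

        def correlation_scores(receptor_grid, ligand_grid):
            # Circular cross-correlation of two 3-D grids via the FFT:
            # scores[t] = sum_x receptor[x] * ligand[x + t] for all shifts t.
            R = np.fft.fftn(receptor_grid)
            L = np.fft.fftn(ligand_grid)
            return np.real(np.fft.ifftn(np.conj(R) * L))

        # Best translation for this ligand rotation:
        # t = np.unravel_index(scores.argmax(), scores.shape)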

  19. Fast and flexible interpolation via PUM with applications in population dynamics

    NASA Astrophysics Data System (ADS)

    Cavoretto, Roberto; De Rossi, Alessandra; Perracchione, Emma

    2016-06-01

    In this paper a new fast and flexible interpolation tool is shown. The Partition of Unity Method (PUM) is performed using Radial Basis Functions (RBFs) as local approximants. In particular, we present a new space-partitioning data structure that is extremely useful in applications because of its independence from the problem geometry. An application of this algorithm, in the context of wild herbivores in forests, shows that the ecosystem of the considered natural park is in a very delicate situation, in which the animal population could go extinct. The determination of the so-called sensitivity surfaces, obtained with the new versatile partitioning structure, suggests some possible preventive measures to the park administrators.
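
    A minimal 1-D sketch of PUM with RBF local approximants (patch layout, kernel, and weights are illustrative, not the paper's data structure):

        import numpy as np

        def rbf_local(xc, fc, eps=3.0):
            # Fit a Gaussian RBF interpolant on one patch's centres xc.
            A = np.exp(-(eps * (xc[:, None] - xc[None, :])) ** 2)
            w = np.linalg.solve(A, fc)
            return lambda x: np.exp(-(eps * (x[:, None] - xc[None, :])) ** 2) @ w

        def shepard(x, c, r):
            # Compactly supported partition-of-unity bump on patch (c, r).
            return np.maximum(1.0 - ((x - c) / r) ** 2, 0.0) ** 2

        def pum_eval(x, patches):
            # patches: list of (centre, radius, local_interpolant); the patches
            # must cover every evaluation point in x.
            num = np.zeros_like(x)
            den = np.zeros_like(x)
            for c, r, local in patches:
                w = shepard(x, c, r)
                num += w * local(x)
                den += w
            return num / den  # Shepard-normalized blend of local fits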

  20. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
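
    The essence of the advection substep, sketched on one azimuthal ring (first-order interpolation of a scalar for brevity; the paper's scheme is second order and handles the staggered magnetic field):

        import numpy as np

        def orbital_advect_ring(q, shift_cells):
            # Shift by the integer part of the mean orbital displacement
            # (exact, free of any CFL restriction), then interpolate only
            # the fractional remainder.
            n = int(np.floor(shift_cells))
            f = shift_cells - n
            q = np.roll(q, n)                         # exact circular shift
            return (1.0 - f) * q + f * np.roll(q, 1)  # linear interpolation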

  1. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", applied to quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, composed of a sequence of generic elementary "gates".
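
    A small worked example of composing elementary gates into a circuit unitary (standard linear algebra, not the reviewed construction itself): conjugating a CNOT by Hadamards on the target yields a controlled-Z.

        import numpy as np

        I = np.eye(2)
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=float)

        IH = np.kron(I, H)        # Hadamard on the target qubit
        CZ = IH @ CNOT @ IH       # a gate sequence is a matrix product
        assert np.allclose(CZ, np.diag([1.0, 1.0, 1.0, -1.0]))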

  2. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so-called Minerva Action (Xmath) and the Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms for a wide range of undergraduate mathematical issues embedded…

  3. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
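
    For reference, the direct O(N^2) sum that the fast transform accelerates (a naive baseline, with an illustrative bandwidth delta):

        import numpy as np

        def gauss_transform_direct(sources, targets, weights, delta):
            # G(t_j) = sum_i w_i * exp(-|t_j - s_i|^2 / delta)
            d2 = np.sum((targets[:, None, :] - sources[None, :, :]) ** 2, axis=-1)
            return np.exp(-d2 / delta) @ weights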

  4. RH+: A Hybrid Localization Algorithm for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Basaran, Can; Baydere, Sebnem; Kucuk, Gurhan

    Today, localization of nodes in Wireless Sensor Networks (WSNs) is a challenging problem. In particular, it is almost impossible to guarantee that an algorithm giving optimal results for one topology will give optimal results for any other random topology. In this study, we propose a centralized, range- and anchor-based, hybrid algorithm called RH+ that aims to combine the powerful features of two orthogonal techniques: Classical Multi-Dimensional Scaling (CMDS) and Particle Spring Optimization (PSO). As a result, we find that our hybrid approach gives a fast-converging solution which is resilient to range errors and very robust to topology changes. Across all topologies we studied, the average estimation error is less than 0.5 m when the average node density is 10 and only 2.5% of the nodes are beacons.
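
    The CMDS half of such a hybrid can be sketched in a few lines: double-centre the squared range matrix and take the top eigenpairs (coordinates are recovered only up to rotation and translation; anchors fix the absolute frame):

        import numpy as np

        def classical_mds(D, dim=2):
            # D: (n, n) matrix of measured inter-node distances.
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
            B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
            vals, vecs = np.linalg.eigh(B)
            idx = np.argsort(vals)[::-1][:dim]       # largest eigenpairs
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))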

  5. A novel algorithm with differential evolution and coral reef optimization for extreme learning machine training.

    PubMed

    Yang, Zhiyong; Zhang, Taohong; Zhang, Dezheng

    2016-02-01

    Extreme learning machine (ELM) is a novel and fast learning method to train single-layer feed-forward networks. However, due to the demand for a larger number of hidden neurons, the prediction speed of ELM is not fast enough. An evolutionary ELM based on differential evolution (DE) has been proposed to reduce the prediction time of the original ELM, but it may still get stuck at local optima. In this paper, a novel algorithm hybridizing DE and metaheuristic coral reef optimization (CRO), called differential evolution coral reef optimization (DECRO), is proposed to balance explorative power and exploitative power to reach better performance. The design and implementation of the DECRO algorithm are discussed in detail in this article. DE, CRO, and DECRO are each applied to ELM training. Experimental results show that DECRO-ELM can reduce the prediction time of the original ELM and obtains better performance for training ELM than both DE and CRO.
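
    Basic ELM training, for context, is a random hidden layer plus a linear least-squares solve; evolutionary hybrids such as DECRO search over the random hidden parameters rather than leaving them fixed (a sketch with illustrative sizes):

        import numpy as np

        def train_elm(X, y, n_hidden=50, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
            b = rng.normal(size=n_hidden)                # random biases
            H = np.tanh(X @ W + b)                       # hidden-layer outputs
            beta = np.linalg.pinv(H) @ y                 # output weights (least squares)
            return lambda Xnew: np.tanh(Xnew @ W + b) @ beta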

  6. CRBLASTER: a fast parallel-processing program for cosmic ray rejection

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth J.

    2008-08-01

    Many astronomical image-analysis programs are based on algorithms that can be described as embarrassingly parallel, where the analysis of one subimage generally does not affect the analysis of another subimage. Yet few parallel-processing astrophysical image-analysis programs exist that can easily take full advantage of today's fast multi-core servers costing a few thousand dollars. A major reason for the shortage of state-of-the-art parallel-processing astrophysical image-analysis codes is that the writing of parallel codes has been perceived to be difficult. I describe a new fast parallel-processing image-analysis program called crblaster which does cosmic ray rejection using van Dokkum's L.A.Cosmic algorithm. crblaster is written in C using the industry-standard Message Passing Interface (MPI) library. Processing a single 800×800 HST WFPC2 image takes 1.87 seconds using 4 processes on an Apple Xserve with two dual-core 3.0-GHz Intel Xeons; the efficiency of the program running with 4 processes is 82%. The code can be used as a software framework for easy development of parallel-processing image-analysis programs using embarrassingly parallel algorithms; the biggest required modification is the replacement of the core image-processing function with an alternative image-analysis function based on a single-processor algorithm. I describe the design, implementation, and performance of the program.
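
    The embarrassingly parallel pattern itself is short; a sketch with mpi4py (Python rather than the program's C), where the per-tile cleaner merely stands in for L.A.Cosmic:

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            image = np.random.rand(800, 800)             # stand-in for the frame
            tiles = np.array_split(image, size, axis=0)  # one strip per rank
        else:
            tiles = None

        tile = comm.scatter(tiles, root=0)               # distribute subimages
        cleaned = np.clip(tile, 0.0, 1.0)                # placeholder for CR rejection
        strips = comm.gather(cleaned, root=0)            # reassemble on rank 0
        if rank == 0:
            image_clean = np.vstack(strips)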

  7. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    NASA Astrophysics Data System (ADS)

    Rolland, Joran; Simonnet, Eric

    2015-02-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when non-optimal reaction coordinates are used. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
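
    A minimal AMS sketch for an overdamped double-well SDE, where the reaction coordinate is simply x (all numerical choices are illustrative): kill the walker with the lowest maximum level, rebranch at that level, and repeat until every walker reaches the target state.

        import numpy as np

        rng = np.random.default_rng(1)

        def run(x0, dt=1e-3, beta=4.0, x_stop=0.9):
            # Simulate dx = -V'(x) dt + sqrt(2 dt / beta) dW with V = (x^2 - 1)^2
            # until the walker falls back below -0.9 or reaches x_stop.
            x = xmax = x0
            while -0.9 < x < x_stop:
                x += -4 * x * (x**2 - 1) * dt + np.sqrt(2 * dt / beta) * rng.normal()
                xmax = max(xmax, x)
            return xmax, x >= x_stop

        n = 100
        walkers = [run(-0.9 + 1e-3) for _ in range(n)]
        kills = 0
        while not all(ok for _, ok in walkers):
            worst = min(level for level, _ in walkers)
            idx = next(i for i, (level, _) in enumerate(walkers) if level == worst)
            walkers[idx] = run(worst)         # rebranch from the killed level
            kills += 1
        p_est = (1 - 1 / n) ** kills          # estimated transition probability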

  8. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran; Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when non-optimal reaction coordinates are used. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  9. Progressive band processing of fast iterative pixel purity index

    NASA Astrophysics Data System (ADS)

    Li, Yao; Chang, Chein-I.

    2016-05-01

    Fast Iterative Pixel Purity Index (FIPPI) was previously developed to address two major issues arising in PPI: the use of skewers, whose number must be determined a priori, and inconsistent final results that cannot be reproduced. Recently, a new concept has been developed for hyperspectral data communication according to the Band SeQuential (BSQ) acquisition format, in such a way that bands can be collected band by band. By virtue of BSQ, users are able to develop Progressive Band Processing (PBP) for hyperspectral imaging algorithms so that data analysts can observe progressive profiles of inter-band changes among bands. Its advantages have been justified in several applications: anomaly detection, constrained energy minimization, automatic target generation process, orthogonal subspace projection, PPI, etc. This paper further extends PBP to FIPPI. The idea in implementing PBP-FIPPI is to use two loops, specified by skewers and bands, to process FIPPI. Depending upon which one is implemented in the outer loop, two different versions can be designed. When the outer loop is iterated band by band, the method is called Progressive Band Processing of FIPPI (PBP-FIPPI). When the outer loop is iterated by growing skewers, it is called Progressive Skewer Processing of FIPPI (PSP-FIPPI). Interestingly, both versions provide different insights into the design of FIPPI but produce close results.
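
    The PPI core that both versions reorganize can be sketched directly: project every pixel onto random skewers and count how often each pixel is extreme (FIPPI then iterates the skewer set; PBP reorders the computation band by band):

        import numpy as np

        def ppi_counts(pixels, n_skewers=1000, seed=0):
            # pixels: (n_pixels, n_bands) hyperspectral data matrix.
            rng = np.random.default_rng(seed)
            skewers = rng.normal(size=(n_skewers, pixels.shape[1]))
            proj = pixels @ skewers.T                 # (n_pixels, n_skewers)
            counts = np.zeros(pixels.shape[0], dtype=int)
            extremes = np.concatenate([proj.argmax(axis=0), proj.argmin(axis=0)])
            np.add.at(counts, extremes, 1)            # tally extreme pixels
            return counts                             # high count = purer pixel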

  10. A method for calling copy number polymorphism using haplotypes

    PubMed Central

    Ho Jang, Gun; Christie, Jason D.; Feng, Rui

    2013-01-01

    Single nucleotide polymorphism (SNP) and copy number variation (CNV) are both widespread characteristics of the human genome, but they are often called separately on common genotyping platforms. To capture integrated SNP and CNV information, methods have been developed for calling allele-specific copy numbers, or so-called copy number polymorphism (CNP), using limited inter-marker correlation. In this paper, we propose a haplotype-based maximum likelihood method to call CNP, which takes advantage of the valuable multi-locus linkage disequilibrium (LD) information in the population. We also developed a computationally efficient algorithm to estimate haplotype frequencies and optimize individual CNP calls iteratively, even in the presence of missing data. Through simulations, we demonstrate that our model is more sensitive and accurate in detecting various CNV regions than commonly used CNV calling methods including PennCNV, another hidden Markov model (HMM) using CNP, a scan statistic, segCNV, and cnvHap. Our method often performs better in regions with higher LD, in longer CNV regions, and for common CNVs than in the opposite settings. We applied our method to the genotypes of 90 HapMap CEU samples and 23 patients with acute lung injury (ALI); for each ALI patient the genotyping was performed twice. The CNP calls from our method show good consistency and accuracy comparable to other methods. PMID:24069028

  11. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
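
    A minimal generational GA in the classic form (tournament selection, one-point crossover, bit-flip mutation; all rates are illustrative):

        import numpy as np

        def genetic_algorithm(fitness, n_bits=32, pop=50, gens=100,
                              p_cross=0.9, p_mut=0.01, seed=0):
            rng = np.random.default_rng(seed)
            P = rng.integers(0, 2, size=(pop, n_bits))
            for _ in range(gens):
                scores = np.array([fitness(ind) for ind in P])
                children = []
                while len(children) < pop:
                    a = rng.integers(0, pop, 2)          # tournament of two
                    b = rng.integers(0, pop, 2)
                    p1 = P[a[np.argmax(scores[a])]]
                    p2 = P[b[np.argmax(scores[b])]]
                    if rng.random() < p_cross:           # one-point crossover
                        cut = rng.integers(1, n_bits)
                        p1 = np.concatenate([p1[:cut], p2[cut:]])
                    flip = rng.random(n_bits) < p_mut    # bit-flip mutation
                    children.append(np.where(flip, 1 - p1, p1))
                P = np.array(children)
            return P[np.argmax([fitness(ind) for ind in P])]

        best = genetic_algorithm(lambda ind: ind.sum())  # "one-max" example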

  12. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over
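
    The multipole idea itself fits in a few lines: replace the sum over a distant cluster's particles by a short series about the cluster centre (a sketch showing monopole plus dipole terms; the error falls off with (cluster radius / distance)^(order+1)):

        import numpy as np

        def farfield_potential(target, positions, charges, order=1):
            # Approximate sum_i q_i / |target - x_i| for a distant cluster.
            centre = positions.mean(axis=0)
            r = target - centre
            d = np.linalg.norm(r)
            phi = charges.sum() / d                          # monopole term
            if order >= 1:
                dipole = ((positions - centre) * charges[:, None]).sum(axis=0)
                phi += dipole @ r / d**3                     # dipole term
            return phi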

  13. Matrices of small Toeplitz rank, certain representations of the solution to an unstable system of linear equations with Toeplitz coefficient matrices, and related fast algorithms for solving such systems

    NASA Astrophysics Data System (ADS)

    Gel'fgat, V. I.

    2014-11-01

    Formulas for inverting regularized systems of linear equations whose coefficient matrices are complex, Toeplitz, and singular or nearly singular are derived. They make it possible to develop economical algorithms for solving such systems in mass calculations.
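
    A dense Tikhonov-regularized solve for such a system, as a point of reference (O(n^3) here; the point of the paper is that the Toeplitz structure admits far cheaper versions of such formulas):

        import numpy as np
        from scipy.linalg import toeplitz

        def solve_regularized(c, r, b, lam=1e-8):
            # T has first column c and first row r; solve the regularized
            # normal equations (T^H T + lam I) x = T^H b.
            T = toeplitz(c, r)
            A = T.conj().T @ T + lam * np.eye(len(c))
            return np.linalg.solve(A, T.conj().T @ b)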

  14. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded-inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that has allowed connections with approximate algorithms from statistical physics; IJGP is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well-known classes of constraint propagation schemes. PMID:20740057
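
    On a chain, the message passing that such schemes generalize is exact and takes a dozen lines (a textbook sum-product sketch, not the IJGP code):

        import numpy as np

        def chain_marginals(unaries, pairwise):
            # unaries: list of (k,) potentials; pairwise: list of (k, k)
            # potentials between consecutive variables.
            n = len(unaries)
            fwd = [None] * n
            bwd = [None] * n
            fwd[0] = unaries[0]
            for i in range(1, n):                  # forward messages
                fwd[i] = unaries[i] * (pairwise[i - 1].T @ fwd[i - 1])
            bwd[n - 1] = np.ones_like(unaries[n - 1])
            for i in range(n - 2, -1, -1):         # backward messages
                bwd[i] = pairwise[i] @ (unaries[i + 1] * bwd[i + 1])
            margs = [f * b for f, b in zip(fwd, bwd)]
            return [m / m.sum() for m in margs]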

  15. Implementation and parallelization of fast matrix multiplication for a fast Legendre transform

    SciTech Connect

    Chen, Wentao

    1993-09-01

    An algorithm was presented by Alpert and Rokhlin for the rapid evaluation of Legendre transforms. The fast algorithm can be expressed as a matrix-vector product followed by a fast cosine transform. Using a Chebyshev expansion to approximate the entries of the matrix and exchanging the order of summations reduces the time complexity of the computation from O(n^2) to O(n log n), where n is the size of the input vector. Our work has focused on the implementation and parallelization of the fast matrix-vector product. Results have shown the expected performance of the algorithm. Precision problems which arise as n becomes large can be resolved by doubling the precision of the calculation.
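
    The summation exchange can be sketched for any smooth kernel: interpolate the matrix entries at a few Chebyshev nodes so the matrix-vector product factors through a small p × n matrix (a sketch using barycentric weights for first-kind Chebyshev points; assumes x in [-1, 1] and x not exactly hitting a node):

        import numpy as np

        def fast_matvec(K, x, y, b, p=16):
            # Approximate A @ b where A[i, j] = K(x[i], y[j]) and K is smooth.
            k = np.arange(p)
            t = np.cos((2 * k + 1) * np.pi / (2 * p))                # Chebyshev nodes
            w = (-1.0) ** k * np.sin((2 * k + 1) * np.pi / (2 * p))  # barycentric weights
            inner = K(t[:, None], y[None, :]) @ b                    # p sums over j
            diff = x[:, None] - t[None, :]
            L = (w / diff) / np.sum(w / diff, axis=1, keepdims=True) # Lagrange basis
            return L @ inner                                         # n short sums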

  16. New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging

    DTIC Science & Technology

    2015-10-26

    Demanet, Department of Mathematics, Massachusetts Institute of Technology. • Grant title: New Algorithms and Sparse Regularization for Synthetic Aperture... statistical analysis of one such method, the so-called MUSIC algorithm (multiple signal classification). We have a publication that mathematically justifies the scaling of the phase transition

  17. 76 FR 17934 - Infrastructure Protection Data Call

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-31

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HOMELAND SECURITY Infrastructure Protection Data Call AGENCY: National Protection and Programs Directorate, DHS...: Infrastructure Protection Data Call. OMB Number: 1670-NEW. Frequency: On occasion. Affected Public:...

  18. Potential Paradigms and Possible Problems for CALL.

    ERIC Educational Resources Information Center

    Phillips, Martin

    1987-01-01

    Describes three models of CALL (computer assisted language learning) activity--games, the expert system, and the prosthetic approaches. A case is made for CALL development within a more instrumental view of the role of computers. (Author/CB)

  19. 78 FR 76218 - Rural Call Completion

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-17

    ... interest ramifications, causing rural businesses to lose customers, cutting families off from their... the static or dynamic selection of the path for a long-distance call based on the called number of...

  20. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    PubMed

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string-matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and has great promise for use in real-world applications.
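
    A sketch of the anchored hash-table lookup (Python's built-in hashing stands in for the paper's hash and binary tables; the patterns and URL are illustrative):

        from collections import defaultdict

        def build_table(patterns):
            table = defaultdict(set)
            for p in patterns:
                table[len(p)].add(p)      # bucket patterns by length
            return table

        def match_from_start(url, table):
            # Return every pattern that matches url from its first character.
            return [url[:n] for n in table
                    if n <= len(url) and url[:n] in table[n]]

        table = build_table(["/api/v1/", "/static/", "/api/"])
        print(match_from_start("/api/v1/users?id=7", table))  # ['/api/v1/', '/api/']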